Venus Laowa 15mm f/4.5 Zero-D Shift W-Dreamer FFS Review

Months after it came out, there is nothing really out there on this lens except for some stock factory pictures and writeups that are predominantly plagiarisms of promotional materials, “why don’t you get to the point” videos, and vapid clickbait reposts of said videos. There are a couple of decent reviews, but I don’t feel like they were really pushing the lens.

So in the Machine Planet tradition of going off half-cocked, I will give you the dirt on this after spending a day shooting the Nikon F version of this in -3º C weather with a Leica Monochrom Typ 246. No need to start simple, or even with the camera body on which this lens (ostensibly) was intended to mount.

A Typ 246 is an all-monochrome, FX, 24mp Leica M body that can shoot at ISO 50,000 without looking even as grainy as Tri-X. It has a short flange distance, which means that virtually any SLR lens can be adapted to it. It has pattern, off-the-sensor metering, so there is no messing around with exposure compensation or trying to figure out why shift lenses underexpose on Nikon F100s and overexpose somewhat on the F4 (yes, this is true). It also has an inbuilt 2-axis level that you can see in its EVF, a welcome aid when it is cold outside. These features mean that you can use a shift lens handheld. This lens is a ~22mm equivalent on APS-C (DX) and, I believe, a ~30mm equivalent on Micro Four-Thirds. This probably is not a lens for MFT, since it is absolutely massive on any MFT body. In fact, it seems really big for a Sony Alpha body…
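Those equivalence figures are just crop-factor arithmetic; a quick sketch, assuming the usual 1.5× factor for APS-C and 2.0× for Micro Four-Thirds:

```python
def equivalent_focal_length(focal_mm, crop_factor):
    """Full-frame equivalent focal length for a given crop factor."""
    return focal_mm * crop_factor

# The 15mm Laowa on APS-C (1.5x) and Micro Four-Thirds (2.0x)
print(equivalent_focal_length(15, 1.5))  # 22.5 -> the "~22mm" figure
print(equivalent_focal_length(15, 2.0))  # 30.0 -> the "~30mm" figure
```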

The physical plant

The first thing you ask yourself about this lens is, “how could a lens out of China possibly cost $1,199?” But this is a shallow (if not also culturally chauvinistic) observation. Your iPhone is made in China, and there is nothing wrong with its lenses. Or, apparently, your iPhone’s price. Venus is something of a newcomer in the camera lens market, and it uses the designator “Laowa,” which is a reference to frogs in a well (not kidding… check out the Facebook page). The idea, they say, is to look up at the sky and keep dreaming. That, of course, is possible where the cost of manufacturing a zillion-element, double-aspherical lens is relatively low. The front ring reads “FF S 15mm F4.5 W-Dreamer No. xxxx.” FFS of course stands for “Full-Frame Shift.”

The 15/4.5 lens is available in a variety of mounts. Word to the wise: get the Nikon F or Canon EF version. Nikon has the longest flange-to-focal distance at 46.5mm, meaning that it has the shortest rear barrel, meaning the maximum compatibility with mount adapters (with simple adapters you can go from Nikon to any mirrorless camera, including Fuji GFX). Canon EF is a close second at 44mm. If you have an existing Canon or Nikon system, just take your pick. Your worst choice is buying this lens in a mirrorless version (Canon RF, Nikon Z, or Sony FE), since you will end up locked into one platform exclusively. Remember that this lens has no electronics or couplings, so adapting it is just a matter of tubes.

The lens comes packed in a very workmanlike white box, just like $50 Neewer wide-aperture lenses for Sony E cameras. This is a mild surprise, but nobody maintains an interest in packaging for very long after a lens comes in. Nikon lenses, after all, come in pulpboard packaging that strongly resembles the egg cartons your kids might give to their hamsters as chew toys. The instructions end with the wisdom, “New Idea. New Fun.” And that is very on-point: for most people, photography is about fun.

The lens is a monster, and it’s not lightweight. It feels at home on something at least the size of a Nikon F4 (and balances well on one, btw). On an M camera, you need to employ the Leica Multifunction Grip (or something similar) to effectively hold onto the camera (this combo can still break your wrist…). Weight as ready-to-mount on a Leica is 740g. For comparison, a Summilux 75 (the original gangster heavyweight for M bodies) is 634g. An 18/3.5 Zeiss ZM Distagon is 351g.

The front element is bulbous. And you must remember that you cannot simply set the camera nose down, since (1) the glass sticks out, (2) there is no filter protecting it from damage from the surface the lens rests on, and (3) this is a really expensive lens. This is also a lens whose lens cap you cannot, must not, ever, lose. It is solid, pretty, bayonets on, and probably can’t be replaced. It is not clear why – if you can mount a 100mm filter holder to the front of this lens – such a holder is not simply built into the lens, if for no other reason than to protect the front element.

I mounted mine with a Novoflex LEM/NIK adapter, which is pretty much the only dimensionally accurate anything-to-M adapter. Proper registration is a big deal because a 15mm lens cell has very little travel from zero to infinity.

The Novoflex’s stepped interior suggests a place to stick a filter — since the lens has no front filter threads — but for reasons discussed below, this is not a big deal. And in the back of the lens, it’s gel filters – or nothing.

Controls/handling

First, this lens is easy to handle wearing gloves. Which, given the temperature yesterday, was fortunate.

This is a little bit different from a traditional PC lens, on which turning a knob would make the shift. The Venus has a third lens ring – behind the focus (front) and aperture (middle). This is different from a Nikon PC lens, for example, where the aperture is front and focus is rear.

The shift ring cams the lens back and forth along the direction of shift, 11mm in either direction. You would think this would interfere with focusing or using the aperture ring, but in reality, it’s likely the only ring you would be moving on a shot-to-shot basis. This lens has such staggering depth of field that you will put this roughly on ∞ and forget about the rest, and you will probably turn it to f/8 and leave it there. Shift is locked with a knob that looks like the knob Nikon uses to shift the lens.

There is a small tab that locks the rotation of the shift mechanism, which can be set to 0 for horizontal pictures, 90 or 270 for verticals, and 180 if you are strange. It moves in 15-degree increments. A 28/3.5 PC-Nikkor does not have a lock, which occasionally can make things exciting if you start framing and realize that your shift is now 45 degrees from vertical (or horizontal).
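Because the shift mechanism rotates, the lens has to project an image circle large enough to cover the frame diagonal plus the full shift in any direction. A back-of-envelope check, assuming a 24×36mm frame and the 11mm maximum shift:

```python
import math

def required_image_circle(width_mm, height_mm, shift_mm):
    """Image-circle diameter needed to cover the frame diagonal
    plus the maximum shift applied in any rotational direction."""
    return math.hypot(width_mm, height_mm) + 2 * shift_mm

# 24x36mm frame, 11mm of shift in any direction
print(round(required_image_circle(36, 24, 11), 1))  # ~65.3mm
```

Which is to say: this 15mm lens is really drawing a circle in medium-format territory, which is why it can also be adapted to things like the Fuji GFX.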

The aperture ring has light clicks and, unfortunately, is non-linear – each stop at the wide end gets roughly the same amount of travel, but things bunch up at f/11, f/16, and f/22. It’s puzzling in this price range.

The focus ring has a short throw, infinity to 1m being about 1cm of travel. Set it and forget it. If you’re looking at pictures on the net and wondering why the focusing scale makes it look like the lens focuses “past” infinity, it’s a mystery.

  • At the hard stop and no shift, the lens is indeed focused at infinity. But the scale is off.
  • At the hard stop and shifted, the focus is still correct at infinity.

I verified optical focus at the stop three ways: on a Nikon F4 with an adapted red-dot R screen (grid/split prism/f<3.5), with the F4’s phase-detection AF sensor, and with the Leica.

To understand the strangeness of the Venus focusing ring, consider that in an old-school, manual focus lens, you typically have three things in synch for “infinity.”

i. The lens is at its physical stop, meaning you can’t turn the focusing ring to make the optical unit get closer to the imaging surface. This is normally an inbuilt limitation. It is not typically a critical tolerance on a lens due to the two adjustments below.

ii. The lens is optically focused at infinity, meaning that an infinitely distant object is in-focus on the imaging surface. This is usually a matter of shimming the optical unit or, in some lenses, using a similar adjustment for the forward/backward position of the optical unit.

iii. The focusing scale reads ∞. In the old days, this was simply a matter of undoing three setscrews, lining up the ∞ mark with the focus pointer, and then tightening the screws. If you are a super-precise operator like Leica, your lens stop/focusing ring/scale are made as one piece and so precisely that no separately applied focusing scale is required.

When a manufacturer of modern autofocus lenses (or even high-performance manual telephotos) is confronted with design constraints, it generally omits the relationship between (i), the physical stop, and (iii), the infinity mark. It will do this on telephotos (like the 300/4.5 ED-IF Nikkor) because heat-related expansion might otherwise prevent a telephoto from actually focusing on a distant object. With AF lenses, hard stops are not the best for the fallback “hunting” mode — and with the user relying heavily on AF anyway, there is no need to inject another thing to check in QC. By the way, on a lot of AF lenses, the focus scale is basically just taped on – eliminating the setscrews.

Cheaper lenses, like the Neewer I-got-drunk-and-bought-it-on-Ebay specials, don’t really couple any of these things precisely. The stop is set so that you can optically focus past infinity and yet when the lens is optically focused at infinity, the focus scale might read somewhere between 10m and the left lobe of ∞.

For reasons that are frankly baffling, Venus uses a different idea entirely, which is to match the collimation and the stop – the hard part – and yet to omit matching the focusing scale. This provides no ascertainable benefit unless the focusing ring is not just a ring but an integral part of the focusing mechanism. I don’t see any setscrews, so maybe this is the explanation. And really, something in this price range should have things line up, even if it means adding one more cosmetic part to make the focusing scale adjustable.

On the surface, this design choice is frustrating to perfectionists and degrades the value of the focusing scale. That said, in 99% of pictures you take with this lens, you’re going to set it to the hard stop and get more than sufficient depth of field for close objects just by virtue of stopping the lens down.

If you are reading this, Venus, the focus scale design needs to be fixed.

Shooting

There was nothing remarkable about shooting this lens, which is a good thing. As long as you realize it has no electronic connections or mechanical control linkages to the camera it… works like any Leica lens.

They used to advise that PC lenses had to be used on tripods. That was true when (1) cameras did not have inbuilt electronic levels and most did not have grid focusing screens, (2) viewfinders blacked out at small apertures and with shift, and (3) through-the-lens meters freaked out at the vignetting.

None of those conditions exist with mirrorless cameras, where viewing is off the sensor, focusing is by peaking, and signal amplification makes it possible to frame a picture even closed down to f/16. On the Leica Monochrom, for example, it is very easy to use this lens – no different from using any other with the EVF. The M typ 240 series cameras have inbuilt levels that are visible through the EVF; the later M10s do too. A visible level is absolutely essential if you are going to shoot this (or any shift lens) handheld.

Speaking of the sky, the sweep of this lens, its vignetting, and its self-polarization mean that in many pictures, the sky will be darker than you expect. Most people will not mind. I suppose you could mount a 100mm filter to the front or a gel in the back, but this is highly dependent on what you are trying to do, your tolerance for the expense, and the light response of your camera.

One thing you begin to realize is that if you switch from a 28mm PC lens to a monster 15mm PC lens, you go from shifting exclusively up to avoid converging parallels – to also shifting down to cut down on excessive sky. You might think of the shift as the “horizon control” adjustment. The challenge is, at the end of the day, that this is still a 15mm lens with a super-wide field. Unlike a 28 or 35, you need to think about both the top and the bottom of the picture.

One other thing you will see in a couple of the pictures in the article is that a slight forward tilt of the camera can make things look slightly bigger than they should at the top. This is user error and the unintended opposite of converging parallels.

With wide lenses, you need to watch 3 axes of alignment – left/right tilt, front/back tilt, and critically, parallelism to the subject. This last point can be a major irritation with this lens since cameras don’t typically have live indications of whether you are square with the subject.

Sharpness

Note: WordPress scales pictures down and not in a flattering way; if you want pixel-level sharpness comparisons to other lenses, there are other reviews out there that do that.

The jury is still out here – at least until I get a sunny day and hook this up to an A7r II, which is more representative of cameras most people would use with this lens. But the foreman is asking some of the right questions for the verdict we want. Field curvature is also something that needs more exploration. As it stands, though, the lens seems to be more than sharp enough for its intended purpose.

All wide-angle lenses have degradation toward the edges of the frame. Many cameras don’t have the resolution to make it obvious, but this is a well-known reality. Shift lenses have a bigger image circle, which gives them comparable (not stellar, but comparable) performance to normal lenses over a wide area. They are “average,” but average in the sense that they are reasonably sharp over the whole frame, not super-sharp in the center and falling apart at the edges.

Put another way, a shift lens for 35mm is essentially a medium-format lens. Medium-format lenses do not have the highest resolution – because they don’t have to – but they do deliver their performance over a wider field. By shifting the lens, though, you are bringing the lower-performing edges of that field into the 35mm frame.

But… you protest… my AIA book has all of these perfect architectural pictures of xyz buildings.

No, it does not. First, they are tiny, and with the halftone screens, they give off an impression of being much sharper than they are in reality. Second, if you look at an original print close up – pixel-peeping on prints was never normal when people made prints – you’ll see that the pointy top of that building is fuzzy because someone used a 4×5 or 8×10 camera and shifted it to accommodate the tall object in the picture. But seeing it in a gallery or an exhibit, you would (1) be standing back from it and (2) be paying attention to the center of the frame, which is where most pictorial interest is. That pointy top is in your peripheral, not central, vision. The central part still has adequate performance for the purpose.

For this reason, the sharpness of a shift lens can only really be understood in terms of shift lenses or shifted medium- or large-format lenses: if you leave a little sky above a tall building, you don’t have to confront so much the inevitable performance falloff in those last couple of mm of the frame. All shift lenses have this issue, and it goes both to illumination and sharpness. Go to maximum shift on anything, and you can expect image degradation at a pixel-peeping level in the top third of the image.

So what? This is the same thing that people with shift-capable cameras have faced since… forever.

And why do shift lenses exist? The answer is pretty simple: it’s easier to get to a good result than with many types of post-correction. If you plan to do post-correction, you have to use a much wider lens than you normally would, you have to crop (because tilting an image in post makes the field a trapezoid that must be rectified), and you have to have an accurate measurement of the scale of the original object. On this last point, if you don’t know the XY proportions of a building’s windows, perspective-correcting it in post-processing will result in awkward proportions. So if you have a 42mp image that needs serious correction to make a tall building upright and correctly proportioned, you may end up with less than 20mp of image by the time the process is over. And since tilting magnifies the top edge of the image, you are magnifying lens aberrations in the process.
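The resolution cost of rectifying in post can be roughed out: stretching the keystoned top of the frame back out only interpolates (it adds no real detail), and the crop forced by shooting wider takes another bite. A toy model – every number here is illustrative, not measured:

```python
def effective_megapixels(mp, top_to_bottom_ratio, framing_crop):
    """Rough resolution left after perspective correction in post.

    top_to_bottom_ratio: apparent width of the subject at the top of the
        frame vs. the bottom (< 1 means converging verticals); stretching
        the top back out interpolates, so real detail averages out to
        roughly (1 + ratio) / 2 of nominal across the frame.
    framing_crop: fraction of the frame kept after trimming the trapezoid
        and the extra sky/foreground the wider lens forced you to include.
    """
    return mp * (1 + top_to_bottom_ratio) / 2 * framing_crop

# 42mp shot, fairly strong convergence, generous framing margin
print(round(effective_megapixels(42, 0.7, 0.55), 1))  # ~19.6mp
```

With those (hypothetical but plausible) inputs, the 42mp file lands under 20mp of real detail, which is the flavor of loss described above.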

Post-correcting does have one advantage, though, which is that you can use a lens that performs highly across the frame. I do it a bit with the Fujinon SWS 50mm f/5.6 on a 6×9 camera: when you are working with a wide lens, from a 96mp scan, you have plenty of resolution to burn in fixing one degree of inclination. This is not so much the case with a 35mm lens on a 35mm body.

As of this writing, Leica just announced in-camera tilt correction for its 40mp M series cameras. This is an idea long overdue, since the camera knows what lens is mounted (or can be told) and the inclinations at the time of the shot.

You don’t escape post-processing with shift lenses, particularly when you have to fix skew between the image plane and the subject (rotation around the vertical axis of your body). PC lenses also have distortion to contend with, and simple spherical distortion sometimes seems less simple when the “sphere” is in the top half of the frame. But the corrective action is far, far milder.

The complication with digital and shift lenses is diffraction. With a shift lens, you need a small aperture to even out the illumination and sharpness, but that small aperture cannot be smaller than the diffraction limit without degrading sharpness overall. That’s f/11 on a Leica M246 and roughly f/8 on a Leica M10 or a Sony A7r II or A7r III series camera.

A further complication with all shift optics is dust. Small apertures, smaller than f/5.6, tend to show dust on the sensor. Shift optics have at least one extra place for dust to get into the camera body (the interface where the shift mechanism slides the two halves of the lens barrel).

Sharpness seems to peak at around f/8 on the Venus, which is not surprising. The sharpness itself is good as well as consistent until the very margins of a shifted frame; I did not need to turn on sharpening in Lightroom. As with all lenses, apparent sharpness is higher on closer objects – because their details are bigger in the image, pixel-level aberrations are not as apparent.

Distortion

The goal is “Zero-D(istortion).” The lens gets close – and better than most SLR lenses in this range, and certainly better than a lot of SLR PC lenses – but not completely distortion-free. Unshifted, it looks like a relatively mild +2 in Lightroom (the shot above is uncorrected except for slight horizon tilt). Shifted might be a little tougher to correct, but you can either create a preset for Lightroom or use some of the more advanced tools in Photoshop.

Flare

Yes. It has flare when light hits it wrong. Check out the picture above. Sometimes it works. Sometimes it is an irritation. Luckily, it does not seem to happen very often.

Value Proposition

There is a real tendency to abuse superwides in photography today, usually to disastrous effect due to the inability of photographers to properly compose pictures. Companies like Cosina/Voigtlander have fed into this, as has Venus, with about a dozen high-performing superwide lenses that would have seemed impossible just a few years ago. “Wide” used to mean 35mm; now “wide” tends to mean 24mm, and “superwide” is below 15mm. The Venus has all of the vices of a wide-angle lens, notably posing the question, “what do I do with all this foreground?”

By the same token, shift lenses are very specialized tools. Old-school shift lenses were the least automated lenses in their respective SLR lines; new ones are marginally more automated (mainly having automatic apertures), but they are staggeringly expensive.

The Venus somehow manages to combine the best and worst of all of this. You cannot argue with the optical performance as a shift lens, but the lack of automation (and frankly, ease of use) makes it just as miserable to use on a native SLR body as any old-school shift lens was. You’ll note that where people complain about this lens in reviews, that’s what they complain about. I’m not sure that merits much sympathy; you know what you signed up for. What makes the Venus more fun is that it connects to mirrorless bodies that, by virtue of their EVFs, remove a lot of the irritation that would occur using the lens on a traditional SLR body.

Whether you will always be shooting 30-story buildings from 200m away is a matter of your own predilections, and that might be the deciding factor. Unless you are really good with wide-angle shots – or are a real-estate photographer in Hong Kong – you may not have a very solid (or at least somewhat economically viable) use case. But in reality, the market is not driven by professional needs. If it were, the only things that would ever be sold would be full-frame DSLRs, superfast 50mms, and the “most unique wedding I’ve ever seen” presets package for Lightroom.

Bottom Line

Pros: solid build quality, clever shift mechanism, wide angle of view,* reasonably low distortion, actually collimated correctly for its native mount.

Cons: non-linear aperture control,** odd (incorrect?) focus scale calibration,** facilitation of compositional errors you never previously imagined possible,* bulbous front element, no inbuilt filter capability, and a lens cap that only mounts one way.

*Qualities that would be inherent to any lens this wide with shift capability.

**Qualities that do not typically belong on lenses in this price range.

Adobe face recognition: beat the system?

The Kobayashi Maru test is not a test of character unless you see the world in terms of “go down in dignity with the [star]ship” or “be a coward.” Or whatever Nick Meyer thought the outcomes would be. Captain Kirk won the test by not accepting a binary decision tree. This is exactly how you should approach any problem that looks like it is unwinnable. Rewrite the simulation. Use a screwdriver as a chisel.

One of the ways you can do this is to ignore the process as presented completely, decide your goal state, and then selectively use whatever is available to get there. Face recognition is exactly such an exercise. Adobe would have you select one of two suboptimal tools (Lightroom Classic or Lightroom) and have you build out the recognition process and leave it in the platform where it started.

Not believing in the no-win scen-ah-ri-o (sorry, Shatner), I started with the first principle:

What is the purpose of face recognition in photos?

This is actually a really good question. The way the process proceeds in Lightroom (either version), you would think the purpose is to name every person in every photo and know precisely which face goes with every name. This view assumes that you are a photojournalist who needs to capture stuff. You will go bat crazy trying to achieve this goal if your back catalog is hundreds of thousands of pictures and you use Lightroom Classic (“Classic”) as your primary tool.

Let’s face it – you are (at least this year) a work-at-home salary man, not Gene Capa. The real utility of face recognition is to pull up all pictures of someone you actually care about. You need it for a funeral. For a birthday party. For blackmail.

That does not actually require you to identify precise faces, just to know that one face in the picture is the one you want. You already know this person’s name and how the person looks. And even if you didn’t remember, a collection of pictures of that person – no matter who else was in or out of the shot – would have one subject in common. You would know within a few pictures who John Smith was.

Taking this view, a face identification is just another keyword.

It’s not even 100% clear that you would ever need it done in advance, on spec, or before you had a real need to use it.

What do we know about face recognition in LrC vs LR?

Our statement of problem: 250,000 images of various people, some memorable and some not. I want to get to being able to pull up all pictures of John, Joe, Jane, or Bill. And I want this capability to last longer than my patience with Lightroom cloud. I want to be able to ditch Lightroom, even Classic, one day and change platforms without losing my work.

When you are figuring out a workflow, or trying to, it’s helpful to consider what your tools can and cannot do; hence, with Classic and Cloud, start breaking down the capabilities.

  • Both recognize faces with rudimentary training.
  • Cloud is much faster than Classic and tends to have fewer false hits (due to Sensei).
  • Both can do face recognition within a subset of photos.
  • Classic can apply keywords to images that Cloud can see.
  • Cloud cannot create keywords that Classic can see.
  • LrC has better keyword capabilities, period.
  • You can make an album in Cloud and have it (and its contents) show up as a collection in Classic.
  • You can put things in one of these items in either program and have it show up in the other.

Do these suggest anything? No? Let’s step through.

Step-by-Step

Let’s talk about some preliminaries that no one ever seems to address.

Order of operations. If you are starting from zero, you should identify faces in the import every time you import something. Not only are names of near-strangers fresher in your mind, it also prevents the kind of effort we are about to explore.

What’s my name? You must have a naming convention and a normalized list of names. It doesn’t matter whether you pick someone’s nickname, real name, married name, whatever. Whatever you decide for a person must be treated consistently. Is my name Machine Planet? Planet Machine? PlanetMachine? This has implications for Classic, where you can’t simply type a two-word name (Bill Jones) into the text search box without getting everyone named Bill and everyone named Jones. For that, you might want to concatenate both names together (unless you want to use keywords in the hierarchical filters). In Cloud, the program can sort by first and last name, so there is value in leaving these separate.
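Whatever convention you pick, it is worth generating the variants mechanically so they stay consistent. A minimal sketch – the concatenated form is just the workaround for Classic’s text-search box described above:

```python
def name_keywords(first, last):
    """Keyword pair for one person: the spaced form (which Cloud can sort
    by first/last name) plus a concatenated form that Classic's text
    search can match without also finding every Bill and every Jones."""
    full = f"{first} {last}"
    return [full, full.replace(" ", "")]

print(name_keywords("Bill", "Jones"))  # ['Bill Jones', 'BillJones']
```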

Stay in the moment. Although you might be tempted to run learning against every single picture you have at once, this leads to a congested Faces view (or People view), slow recalculation in Classic, and a lot of frustration. Do a day or a week at a time. Or an event. This will give you far fewer faces from which to choose, and fewer faces to identify. Likewise, if there is a large group picture in the set, focus your effort on tagging everyone in it. This will set up any additional Identified People in Classic and will kickstart Cloud.

Who’s your friend? You next need to decide who is worth doing a lot of work to ID. You are not going to do iterative identification (especially on Classic) with people you don’t care about. Leave their faces unidentified. Or better yet, delete the face zones. This is a very small amount of effort in a 200-shot session or a 36-shot roll of scanned pictures.

Start in Cloud. This part is not intuitive at all. Go ahead and sync (do not migrate!) all your pictures to Lightroom mobile. This consumes no storage space on the Adobe plan. If there are a lot that have no humans, use a program like Excire Search to detect pictures with at least one face pointed at the camera. This is a reasonable cut, since there are few pictures you would bother tagging that have one face, solely in profile.

The synch process will take forever. I don’t think there is a lot of point in preserving the Classic folder structure when you do this; I would just make a collection like “Color 2000-2010” in the Classic synched collections and dump your targets into that (n.b. a collection in Classic is just an alias to your pictures; making a collection does not change the folder arrangement on your computer). We are only using Cloud for face recognition; its foldering is too rudimentary and inflexible to be useful – although right-clicking in Classic to make folders (or groups of folders) into synched folders will let you adopt the Classic organization in Cloud, albeit flattened, without re-synching. Again, not very useful. Also, for reasons described further on, you want to have a relatively clean folder panel in Cloud because you will be making some albums, and you don’t need extra clutter.

Ok. Let the synch run its course, or start your identification work on Cloud as it goes. Cloud will start aggregating what it thinks is the same face into face groups, which you then must name. Start naming these according to the convention you chose. I would put the People view to sort by “count,” which naturally puts the most important people at the top (you have the most pictures of them). Let’s say you name one face group “John Smith.”

Crossing over

The process so far is pretty generic. To start crossing things over to Classic, you need to make folders (“albums”) in Cloud. Start with one per important person (“___ John Smith”). Search for that person. Dump the search results into the album. You can always add more later.

Now flip back to Classic. You will see collections under “From Lightroom.” Voilà! One of them is “John Smith.”

Now you can do one of two things.

You can simply make a quick check to make sure there are no pictures included that obviously are not John Smith. But after you do that, or not, you can mass-keyword everything in that collection “John Smith.” If you named John Smith consistently with any pre-existing Classic face identification of John Smith (i.e., not two different variations of the name), your searches will now have the benefit of both tools. Save those keywords down to the JPG/TIFF files (Control-S/Command-S) or XMP sidecar files (same), and you will forever have them, regardless of whether you leave the Adobe infrastructure. In fact, many computer-level file indexes can find JPGs and TIFFs by embedded keywords (which the index sees as text).
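The point about keywords outliving Adobe is easy to verify: the keywords Lightroom writes out are just dc:subject entries in XMP, which any XML parser can read. A minimal sketch using a hand-built sidecar fragment (simplified, not a byte-for-byte copy of Lightroom’s output):

```python
import xml.etree.ElementTree as ET

# A simplified XMP sidecar fragment of the kind Lightroom writes
XMP = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description xmlns:dc="http://purl.org/dc/elements/1.1/">
   <dc:subject>
    <rdf:Bag>
     <rdf:li>John Smith</rdf:li>
     <rdf:li>Jane Smith</rdf:li>
    </rdf:Bag>
   </dc:subject>
  </rdf:Description>
 </rdf:RDF>
</x:xmpmeta>"""

RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
DC = "{http://purl.org/dc/elements/1.1/}"

def read_keywords(xmp_text):
    """Pull every dc:subject keyword out of an XMP document."""
    root = ET.fromstring(xmp_text)
    subject = root.find(f".//{DC}subject")
    return [li.text for li in subject.iter(f"{RDF}li")]

print(read_keywords(XMP))  # ['John Smith', 'Jane Smith']
```

No Lightroom required to get your people back out – which is the whole point of saving the keywords down.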

Congratulations. Now you’ve hijacked Sensei into doing the dirty work on Classic.

With a small but not overwhelming amount of creativity, you could use a technique like this to cross-check your past Classic calls.

STOP HERE AND GO TO “CALIBRATING YOUR EFFORTS” UNLESS YOU ARE A MASOCHIST

Second, if you’ve missed your OCD meds, you can also use the results of this to inform your Classic face-recognition process.

a. Select this “From Lightroom–>John Smith” collection and flip to Faces view in Classic.

You are now seeing all “Named People” and all “Unnamed People.” Unnamed people are shown by who Classic thinks they are most likely to be. You can sort Unnamed people in various ways, but however you do it, you want to get the John Smiths into a contiguous section where you can then confirm or X them out. By going into this in the From Lightroom–>John Smith collection, you are not waiting for recalculations against every photo you have – just the ones that Sensei thought should have John Smith.

So the cool trick is this: if you see 106 pictures in From Lightroom–>John Smith, then you know you are probably going to be done when you have 106 confirmed pictures of him. Or done enough. John can only appear in a picture once. There will be a margin of error due to how closely Classic can approximate Sensei, but you can get to about 90% of the Sensei results without a lot of trouble. This is a bit better than Classic on its own, where more pictures of John Smith at an earlier age might be really buried down in the near matches. Further, Classic is something of a black hole for similar pictures because unlike Cloud with Sensei, there is no minimum required similarity score to be a suspected match.

b. You can, of course, drill down on John Smith as a Named Person. You don’t have much control over how “Similar” pictures are ordered (I believe it is degree of match for the face), but here, you can confirm a much more concentrated set (after you decide how to deal with the “fliers” who are not John Smith).

One other technique I have developed while in the “confirming” stage is that it may be easier to confirm en masse (even if some are wrong) down to the point where the “not John Smiths” are about a third of the results in a row of Similar faces. A small number of “fliers” can be removed by going up to the Confirmed pictures, selecting them, and hitting delete. Trying to select huge swaths of unconfirmed faces in Similar and then unselecting scattered fliers tends to really slow things down. As in Classic really slows down as it tries to read metadata from everything you selected.

Incidentally including and then manually removing a few fliers from Confirmed does not seem to affect accuracy (because every recomputation of similarity is on the then-current set of Confirmed faces – changing that set changes the computation). If you have 99 pictures that are right and one that is wrong, it won’t even change the accuracy appreciably. If in Confirmed, you have 995 pictures that are John Smith and 5 that are not, again, the bigger set of correct ones will predominate future calculations.

Next, at some point, especially with siblings, Classic is going to reach a point where Jane Smith (John’s Sister) is going to show up as a lot of the “Similars” with John Smith. When this happens, go back to Faces (top level, always within From Lightroom–>John Smith), click on her, and confirm a bunch of her pictures. When you go back to Named Person John Smith, a lot of the noise will be gone, and hopefully more John Smiths will be visible in a concentrated set you can bulk-confirm.

Crossing back (optional)

I did write “iterate,” right? You might want to keep your Cloud face IDs as complete as possible, since there is not 100% correspondence between results from the methods used by the two platforms. This is relevant if you have already trained Classic on John Smith.

  1. In Classic, note the count in your From Lightroom–>John Smith collection. Say it’s 106 pictures.
  2. Do a search from your Classic Library for all pictures of John Smith. If you used a space in the name, add Keywords to the field chooser menus (via preferences) and select that line.
  3. Drag all of those results to From Lightroom–>John Smith.
  4. Flip to Cloud. They are now in that “John Smith” album. Or they will be when it syncs.
  5. Select all the pictures in the “John Smith” album.
  6. Hit Control-K (or Command-K) to bring up keywords and detected/recognized faces in the “John Smith” album.
  7. Now name any faces that are blanks – but should be John Smith.
  8. Now from the All Pictures view, search for John Smith and drag all his pictures to the John Smith album.
  9. In Classic, check your count. If it’s, say, 128 pictures now, that means that Cloud took your examples and found more John Smiths. And now they are ID’ed in Classic as well.
  10. Switch to Faces and confirm the 22 additional faces as John Smith. Now both systems have identical results.

Calibrating your efforts

For searches for random people, Cloud is still the best because it requires very little training. That said, for randos, you are using a tool that does not give you any permanent results. That’s probably ok for people who you don’t really care about. Or if you plan to be on Cloud forever.

For close friends and family, you may just run the “Crossing Over” exercise. I would do it in groups: do a bunch of albums on Cloud (say seven people), then do a bunch of naming on Classic (their collections), etc.

If you are really a neat-freak or compulsive, you could use the “Crossing back” step. But Sensei is reasonably good at what it does, so the marginal effect of adding Classic results to Sensei may not be much. If you have Excire, you might use it to find pictures that look like a picture of John Smith, which will give you a third means of concurrence.

The thing to remember about face recognition is that it is miraculous but also imperfect. It has to detect a face and then it has to identify a face. It doesn’t see how you see. Efficiency works at cross-purposes to accuracy.

But it is still vastly better than trying all of this on your own.

Face-off: Apple vs. Adobe face recognition

So here is a question: what’s the best way to catalogue and tag your pictures? Is it Lightroom Classic? Lightroom Cloud? Is it Apple Photos? Is it something else? Maybe it’s a lot of things. If you are a high-volume imaging-type person, you’ve probably wondered how to deal with things like tagging people. The most macabre application, of course, is the funeral collage. But say you have tens of thousands of pictures of family members and want to print a chronological photo album. Then what? Face recognition features in software may be your best bet. From a time standpoint, they may be your only choice. The problem is that different software has different competencies.

Apple Photos

Something like Photos is designed to group pictures, more or less automatically, around people, events, dates, or geography. Think of it as your iPhone application on steroids. Photos is not big on user control. It is not even engineered to do anything with folders except display them if that’s how photos were imported.

Face recognition in Photos is incremental and behind the scenes: it only finds faces when you are not actively using the program, and over time, it batches up groups of pictures which you confirm or deny as a named person in your Faces collection. To establish your Faces collection, you have to put names on faces in a frame where faces have been detected. This tends to mean that face recognition proceeds by which faces the user thinks are most important. As it should be.

Unlike Lightroom, Photos does not presume that detected faces are unique. It applies a threshold such that if it detects Faces A, B, C, and D, and they are close enough, they are treated as the same (unnamed) person. As such, naming one person can have the unintended effect of tagging a bunch of false matches. Either way, you can error correct by right-clicking the ones you see that are wrong.

My assessment of Photos is that it is not suitable as a face-recognition tool if you have hundreds of thousands of images, for several reasons:

  • Its catalogs are gigantic, even if you use “referenced” images. Photos loves it some big previews, no matter what you do. For scale, my referenced Photos library is 250gb where my entire Lightroom Classic library folder is 40gb (both excluding original image files – so Photos sucks up 6x the space).
  • The face recognition process appears to be mostly (if not completely) local, it runs in spare processor cycles, and in my experience it can cause kernel panics. Hand-in-hand with this is the fact that you can never actually turn Photos off. It’s part of MacOS.
  • Photos does not appear to write metadata back to your files. So when you move to a new application, you’re starting from zero.
  • You can’t really use it in conjunction with a grown-up asset management system like Lightroom.

Photos is, however, good for generating hilariously off-base collections of photos (memories) with weird auto-generated titles (“Celebrate good times” with a crying baby as the cover photo). Or collections based on the date a bunch of pictures taken over decades were scanned (such as my 42,600 pictures apparently taken on December 12, 2008). I actually have no idea how these get generated. But they are funny.

I’m sure Photos is really good for those funeral collages, though.

Lightroom Classic (LrC)

Something like Lightroom Classic (LrC) is designed around manipulating, filtering, and outputting large numbers of pictures at once. This is, indeed, the killer app for handling large volumes of photos, and becomes a single interface for everything. It’s OK, but not great, for face recognition.

To put it mildly, LrC’s face-recognition is processor- and disk-intensive. The best way to use it is to use it on a few hundred photos at a time so that your identifications don’t swamp everything in your collection in a recalculation. LrC is good at showing you different faces all at once, as single images, so you can get cracking on identifying as many new “people” as you have patience for in one sitting.

The top level of the Faces module shows you (i) “Named People” and (ii) “Unnamed People.” You need to name at least one “Unnamed” person to start. After a while, the system will try to start putting names on “Unnamed” people. If you have a Named person named “John Doe” and are presented with an image that is “John Doe?” you can click the check box to confirm it and the X box to remove the suggestion (clicking again removes the detected face zone, such as if the system mistook a 1970s stereo for someone’s face).

Once you have done that, you can drill down on a “Named” person to see what pictures are “Confirmed” and what pictures are “Similar.” Again, to move from Similar to Confirmed requires an affirmative call. Here, you only get a check box. There is no “Not John Doe” option, which means that every possible match is shown, ranked in what LrC thinks is similarity. This is actually problematic because as you confirm more pictures, the number of “Similar” pictures rises exponentially. This puts a huge computational drag on things.

Wherever it happens, confirmation of a face’s identity is an affirmative process that is repeated for each picture (you can select several). This prevents false IDs based on grouping disparate real people into one “face,” but it also makes tagging excruciatingly repetitive. And slow. Highlighting faces to group-confirm or identify can have the “highlight” lagging far after your click. And God help you if you click six pictures and then try to type a name into one to rename all six. It works about half the time. The other half, it auto-completes with a totally unintended name. If you accidentally confirm the wrong face for a given name, you can highlight the errant thumbnail and hit Delete (this is not well documented).

Critically, the top level of the Faces module (where you see all named people as thumbnails) is the only place where the system puts a “most likely name” on unnamed people. Otherwise, looking at any particular “Named Person,” the same person – Bob – might show up as a similar for John Doe. And when you switch to Richard Roe, Bob will show up as a “similar” for him as well. This is part of the reason why people for whom you have 10 actual pictures always show up with 20,000 “similars.”

A big advantage of LrC over other solutions is that you can see and tag faces within specific folders, collections, or filmstrips. This lets you make context-sensitive decisions about who is who. For example, I am pretty sure that my kids did not exist in the 1970s. Or I might know that only 6 people are represented on a single roll of film that constitutes a folder in my library.

When a name is confirmed on a picture, that name is written as a keyword to the metadata in the library. It appears that XMP files (if you chose that option for RAW files) are written with the actual coordinates of faces in the picture, which allows some recovery if you have to rebuild a library from scratch. The important thing is that a picture is keyworded with the right names. Face zones are nice but not quite as critical in the long run because in reality, you only really care whether a picture contains John Doe or Richard Roe, not which one is which in a picture of both.

Always save your metadata to files if working with TIFFs/JPEGs/scans (Command+S) or “always write XMP” with RAW camera files. This helps keep your options open if you want to get divorced from Adobe. Or if your Lightroom library goes wheels-up and you have to rebuild from zero. There is no explanation for why this program just doesn’t write an XMP for every file. It would make things easier.

Lightroom [CC or “cloud”]

What a hot mess. The only thing that really works about Lr CC is face recognition. The rest of it is a flashy, underpowered toy that despite being “cloud” based can still consume massive amounts of hard drive space and processing power. If your photos are in the Adobe cloud, or synced from LrC, the program works with smart previews.

Adobe’s Sensei technology is a frighteningly good face-recognition system. In the People view (mutually exclusive with the Folders view), it takes all of your photos and groups them according to what it thinks is the same face (like Apple Photos). Put a name on that face, and it might ask you if this other stack over here is the same face. It is extremely fast (because it runs in the cloud). Sensei can also identify objects, and to some degree, places in photos. Naturally, the most important people in your life have the highest counts, and you can sort unnamed faces by count and work your way down. Things break down when 400 people have 15 pictures apiece, though…

The system, though, has some amazing limitations that are pretty clearly engineered in by a company that is trying to move everyone to its walled garden. Two of these four bear directly on the issue of why a hard drive – and keeping your own metadata local – is your ladder out of that walled garden.

First, metadata transfers to Lr are one-way. The program can absorb keywords applied in LrC, but not recognized faces/zones, and nothing you input in Lr can ever rain down on LrC. There is no technical reason metadata could not flow the other way; it was engineered out so that you are eventually forced to store all your stuff in Adobe’s per-month-subscription storage space. Because paying a monthly fee to use programs that aren’t really being updated – like LrC – was not bad enough.

Second, you cannot force face recognition on arbitrary subsets of your library, at least not efficiently or intuitively. If you came at this program assuming that it would be like LrC, you would conclude that there is no way to do it at all. Instead, you have to select a group of pictures and hit Command/Control-K (for “keyword” – how intuitive…) to see the faces present in the picture or group. Lr then shows you the single picture with the face boxes – and the collection of faces in the picture on the right panel. This is great – but why is it so hard to find? You also get the impression that when you do this, the face boxes are generated on the fly. The other critical defect here is that the “named faces” thumbnails are even smaller than the other face thumbnails in Lr.

Third, when asked to “consolidate” two faces, there is no way to flip between the two collections. This is an oversight – you are not asked to name a person based on one photo, but for some reason you are asked to make a consolidation decision that could have catastrophic consequences — based on a single fuzzy thumbnail. If in doubt, sit it out.

Finally, you can’t push face recognition data back down to LrC. So if you use LrC, you basically end up with completely separate face-recognition data sets based on the same photos. This is a big-time fail.

Upshot

Well, in terms of applications you can access for a Mac right now, the options are ok – but not great. Stay tuned for Part 2, in which we look at a way to leverage LrC and LR CC against each other to speed things up.

Guerilla darkroom 2020: what to do with all that stuff

So three months went by in the blink of an eye, and I didn’t get around to Part Deux. Ok. Better late than never. Now that you have your unreasonably large arsenal of cheap darkroom hardware in place, let’s talk about some developing techniques.

The Box-Step. I had a professor in graduate school, a colorful character, ex-Marine, current pilot, and general hellion. He would write obscene puns into his own seating chart and then read them back and ask what other hellion wrote them. And then chuckle. There was a Jennifer day. There were pokes at city-slickers who didn’t know what a screw auger was (I’m pretty sure that he left Nebraska before he ever saw one in person). But I digress. His greatest line was that in school, they make you think that everything would be [a tango] but that when you get to the real world, it’s all a [box-step]. The bracketed words here stand in for obscene descriptions of something else. If you’re over twelve years old, you’ll get the joke. But Professor X did have a point: there is too much fanciness and not enough solid technique. And that goes for developing.

Developer. Get out of your head that you are going to do 1+200 standing Rodinal development. Put pyrocatechol-whatever in the back of your mind. Caffenol. Copex Spur-whatever. Buy a packet of D-76 (or equivalent) or a bottle of HC-110 (1+31!) and take it from there. Dig up your film manufacturer’s data sheet. Not “the Massive Dev Chart,” which I can tell you firsthand has some unusual and very obviously wrong information in some entries. Start with basics. Start with the book. The brave men of Kodak and Ilford killed themselves working on these meticulous tables. Do honor to their memory.

Mixing. Mix your developer well. Don’t be afraid to use very warm water with D-76. It’s actually shockingly difficult to break, cooling time is harmless, and solidified powder at the bottom of a bottle is unrecoverable. Let your developer sit overnight so that it returns to room temperature.

Temperature control. Here’s a life hack: if your darkroom is within 5 degrees C of any temperature listed on your data sheet, you temperature-control your developer (only) and leave the other chemicals at room temperature. This is part of the reason you let the developer sit overnight. Five degrees C is not enough to make a difference for stop bath, fixer, or anything else. Most basements seem to sit at almost exactly 20 degrees C, which is why that is a good temperature to pick. Most tap water is also easy to get close to 20º C because it travels through pipes in earth that is about 20º C.

To rapidly warm developer, put the bottle in a tub of warm water and monitor the temperature periodically. Do not let the thermometer touch the sidewall of the bottle, and agitate the bottle every few minutes.

To rapidly cool developer, pour it over a reusable “ice pack,” be it the kind that is like a foil sheet of ice cubes or a solid blue plastic block. This way the temperature goes down without dilution. Otherwise, you can lower a plastic bag full of ice cubes into your container of developer to cool it down.

Development time. Like I said, if your room temperature is within range, pick the time/temperature combo on the data sheet and run with it. If you don’t have a data sheet, a good starting point for normal-ish developers and normal-ish b/w film is 7 minutes at 20º C.

How do you calculate that time? The first question is “small tank” or “big tank.” Generally, for an eight-reel Paterson, you’ll use the big tank. Surprisingly, you will be fine using that for the 2-reel version. Small versus large tank in Kodak parlance is mainly a function of how easy a container that size is to agitate. You will not be rapidly flicking 2.5L of liquid in a tank with one hand.

Do you start the timer when you start pouring developer in or when the tank is full? It actually doesn’t matter, as long as you always do it exactly the same way. I generally start the timer when the tank starts to sound full (you will hear a gurgle) and take the first couple of seconds of the timer to fill the top.

Fill level. The tank should always be full enough that at least 1/3 of the light-trap cone (this is Paterson, remember?) is filled with developer. Do not do the bare minimum. Modern films have surfactants (like soap) in them that make them wet more evenly. This means bubbles. And your bubbles must have a place to go, above the film. Unless you want weird dark spots on your clear 35mm skies.

The burp. Get that lid on. Press hard in the middle to force the air out and make a tight vacuum seal. Hit the bottom of the tank on something reasonably firm (but not concrete!) so that any air bubbles release from the film. Do an initial agitation per the instructions. Then open the lid and let the bubbles bubble over the sides of the light trap cone. Reclose and start your cycle.

Development and agitation. Programmatically, this is how I would execute a 7-minute development with a 2.5L (8-reel) tank. This is based on “large tank” assumptions. The large tank format provides less streaking through 35mm film holes, and you can pretend it is more like standing development. In my exercise, these are the times shown on the timer (any waterproof digital kitchen timer will do, preferably one that counts up after it runs down to 0).

  • 7:00 (not running yet) – start filling tank from a container that can pour fast, like a wide-mouth bottle (see previous article).
  • 7:00 – start timer with tank almost full.
  • 6:50 – agitate and “burp” the tank.
  • 6:05-6:00 – end over end 5x
  • 5:05-5:00 – end over end 5x
  • 4:05-4:00 – end over end 5x
  • 3:05-3:00 – end over end 5x
  • 2:05-2:00 – end over end 5x
  • 1:20-1:15 – last real agitation
  • 0:15 – pour straight down into a wide-mouth container
  • +0:10 to +0:30 – fill with stop bath and rapidly agitate
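For the habit-forming among us, the countdown above generalizes to any total time. Here is a minimal Python sketch; the function name and the hard-coded pattern (burp at minus 0:10, five inversions at the top of each remaining minute, last agitation at 1:20, pour at 0:15) are just my reading of the list above, not any published standard.

```python
# Hypothetical sketch: regenerate the count-down schedule for any total time.
def agitation_schedule(total_minutes: int) -> list[str]:
    total = total_minutes * 60  # work in seconds, counting down
    events = [
        (total, "start timer with tank almost full"),
        (total - 10, "agitate and burp the tank"),
    ]
    # five inversions at the top of each remaining full minute, down to 2:00
    for m in range(total_minutes - 1, 1, -1):
        events.append((m * 60 + 5, "end over end 5x"))
    events.append((80, "last real agitation"))                 # 1:20 left
    events.append((15, "pour out into wide-mouth container"))  # 0:15 left
    return [f"{t // 60}:{t % 60:02d} - {label}" for t, label in events]

for line in agitation_schedule(7):
    print(line)
```

For a 10-minute run, call agitation_schedule(10) and tape the printout above the sink.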

You’ll note that this seems none-too-precise. The fact is that it takes about a 10% difference in developing time to make for an obvious difference in the end-negative (N+1 needs 25%, and N+2 generally 50% extra). 7 minutes is 420 seconds. So even if you have 15 seconds of “imprecision” in the process, it is not that impactful (example: how long is the stop bath taking to fill?).

If you can do the process consistently, then all you have to do after that is dial back your total time as needed to adjust the contrast of the negatives.
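To put numbers on that, here is a back-of-the-envelope sketch. The 10%/25%/50% figures come straight from the text above; everything else is arithmetic.

```python
# Worked numbers: ~10% is the smallest dev-time change you can see on film;
# N+1 needs ~25% extra time, N+2 ~50% extra.
base = 7 * 60                            # 7:00 of normal development, in seconds

visible_change = round(base * 0.10)      # smallest change that shows
n_plus_1 = round(base * 1.25)            # expansion by one zone
n_plus_2 = round(base * 1.50)            # expansion by two zones

print(visible_change)                            # 42 seconds of slop
print(f"{n_plus_1 // 60}:{n_plus_1 % 60:02d}")   # 8:45
print(f"{n_plus_2 // 60}:{n_plus_2 % 60:02d}")   # 10:30
```

So 15 seconds of sloppiness lives well inside the 42-second margin.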

Push/pull processing. Shooting Tri-X 400 at EI 320 is pointless. It’s not significant for most purposes. Shooting Tri-X at 1600, though (see top picture here) can be helpful. Push processing generally brightens the highlights by making them more dense on negatives. It does not, repeat, does not really change the speed of the film, which is defined at midtones and below. So you tend to get normalish pictures from mid to high but a lot more blackness below middle grey. Pushing is good for overcast days or flat light; it is not very helpful if you generally lack light. Pulling supposedly improves shadow tones, but modern, straight-line films just need more exposure.

Standing processing. This is mainly for when you have an emergency or can’t identify what film is in that bulk canister. Standing processing tends to compensate all over the negative so you have a moderate tonal range. The downside is that it is a moderate tonal range that tends to defeat the “curve” built into the film and is miserable to print on RC paper. Standing processing takes a long time. Standing processing can lead to streaking. Standing processing sucks if you don’t actually need it. As a good friend of mine told me, standing development is good for taking pictures of lit filaments in lightbulbs and outside of that, covering screwups. Like communism, everyone thinks this would be a good idea if someone could just execute it correctly.

Pyrocatechol. Isn’t it amazing that a chemical that causes cancer can’t cure people’s poor photographic technique?

Caffenol/urinol. I’m not sure if the latter is real (I read about it in a lab book), but if you’re too cheap for HC-110 or Rodinal, you probably shouldn’t be using film.

Exhaustion. If you stick to 20 rolls of film per gallon of developer, it’s generally unnecessary to adjust the development times for successive batches. You pour the 2.5L of used developer back into the big container (1 gallon, 5L, etc.) and then pour from there for the next batch. Why does this work? Because 2.5L of developer is almost double what you actually need to develop 5 rolls of 120 or 8 rolls of 135. Exhaustion of developer is a function of film area (expressed by Kodak in square inches; it’s about 80 for a roll of 120 or a 36-exposure roll of 135). It’s not how many rolls. It’s how much surface.
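The surface-area arithmetic works out like this. A sketch: the 80-square-inch figure is Kodak’s standard roll equivalence quoted above, and 20 rolls per gallon is the capacity guideline.

```python
# Developer capacity is area, not roll count.
SQ_IN_PER_ROLL = 80        # one roll of 120, or one 36-exposure roll of 135
ROLLS_PER_GALLON = 20      # conservative capacity guideline

capacity_sq_in = SQ_IN_PER_ROLL * ROLLS_PER_GALLON  # a gallon's working life

# one 2.5L (8-reel) tank load, either way you fill it:
load_120 = 5 * SQ_IN_PER_ROLL   # five rolls of 120
load_135 = 8 * SQ_IN_PER_ROLL   # eight rolls of 135

print(capacity_sq_in)        # 1600 sq in per gallon
print(load_120, load_135)    # 400 640
```

Either tank load is a small bite out of the gallon’s 1600 square inches, which is why topping the jug back up works for a while.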

Stop bath. The only thing that stop bath does is change the pH of the film to arrest the development. Indicator is best. Ilford odorless is the best of those. You could probably use vinegar or even water to do this, but stop bath is cheap, and there is no reason to take chances.

Fixer. Fixer usually takes the film back to the acid side (a couple of fixers are actually alkaline), which is why residual fixer is an archiving problem. Start with the fixing time on the bottle, but you can also take the cut (and undeveloped) end of a piece of film, drop it in the top of the tank, and monitor until it goes clear. Double that time, and your film is generally fixed.

Fixer does not take the purple stain out of film. It removes the unexposed, undeveloped silver halide and helps clear the anti-halation backing, which is the milky opaque stuff on the back of the film. Anti-halo dye is generally removed by the developer and the fixer remover. And failing that, just put your b/w negatives in the sun for a little while.

Fixer remover and rinse. This process neutralizes the acid fixer and finishes off the dye. Take the light-trap cone out of the tank. Fill your tank with plain water and let it sit for a minute. Dump and refill with water plus a capful of Heico Perma-Wash. Let that sit for five minutes. Dump it out and see all that purple dye go down the drain. Your final rinse is 5 minutes or eight changes of water. That’s it.

Wetting agent. Photo-Flo 200 is designed to be used at 1:200. Try to understand what that means. Generally not more than half a cap to a tank. Too little, and it doesn’t work. Too much, and it gets gummy and nasty. May I recommend this? If your arm-span is long enough, hold the film in a U over a vat of water and Photo-Flo. Run it back and forth in the U, dipping the “vertex” into the solution. This technique uses far less solution and also prevents Photo-Flo from getting all over your tank and reels. This U technique – which I cadged from an old Kodak instruction manual on developing orthochromatic film – helps make sure that the solution sheets off quickly, especially when you finish the cycle (I recommend 10-15 cycles of the U). For this solution, I would recommend distilled water with the Photo-Flo, although you can still get occasional water spots no matter how pure the water.
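What “1:200” means, spelled out as a sketch. The assumption that a bottle cap holds roughly 5 ml is mine, for illustration only.

```python
# 1:200 means one part Photo-Flo concentrate to 200 parts water.
def photo_flo_ml(water_ml: float, dilution: int = 200) -> float:
    """Milliliters of Photo-Flo concentrate for a given volume of water."""
    return water_ml / dilution

print(photo_flo_ml(1000))   # 5.0 ml per liter - roughly one ~5 ml capful
```

Which is why “half a cap to a tank” is plenty, and a full glug is a gummy mess.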

That wetting-agent contamination is not a big deal (note as above that “bubbling” when you add developer is actually coming from a coating on the film, not some insignificant amount of Photo-Flo residue), but it doesn’t take much to hang up the little ball bearings in plastic reels.

Hang dry. Hang up your film in a reasonably humid area (basement or bathroom). This allows slower drying (less violent curling) as well as helps cut down on dust. Never, ever, never let drying negatives be so close to each other that they can kiss. If the emulsions get stuck together, it’s game-over.

See how you did. If your negatives are too dense overall, cut back on exposure. If they are thin but have blown-out highlights, you need more exposure. If they lack contrast, extend the development slightly. If they look bulletproof, cut the development slightly. This is a learning process. Note that in an era of scanning, overexposure is not your friend because scanners struggle with dense silver highlights on negatives. For optical printing, you want normal if not beefier negatives, since there is a ceiling for improving contrast (5+ on Ilford papers).

Guerilla darkroom 2020: hardware selection

Well, it’s been almost 20 years since I did any updates on the original Guerilla Darkroom on the old site, so let’s bring things forward to this year. I’ll assume that the purpose of your darkroom work is getting to negatives for scanning, though almost all of this applies to regular printing.

Goal: get finished negatives. Do not scratch. Don’t go broke. Use what you have on hand. This part will deal with the equipment side. The next installment will cover chemicals and some finer points of (or really, cheats at) technique.

Special hardware

The three critical pieces of infrastructure that you do not have at home are (1) a developing tank and reels; (2) a changing bag; and (3) a thermometer. Let’s take these in turn.

First, get a Paterson Super System 4 tank. A new one (old ones tend to get chipped around the base, and their locking lugs may be loose). A Paterson Super System 4 developing set (tank, agitator, 2 reels) is $34 on Amazon. It’s hard to beat that. Consider that you may want to develop more than one roll of 120 at a time; realistically, this calls for a Multi-Reel 5 or larger.

Don’t screw around with Samigon/AP/Arista clones of older Paterson System 4 stuff.

  • Old-style tanks are not much cheaper.
  • Old-style tanks share the vices of the older System 4 tanks they copy: they use a gasket to seal, are really easy to cross-thread, and therefore leak all the time. Super System 4 uses a rubber cap over the whole top, and its funnel/light trap bayonets in.
  • Super System 4 can be agitated using a key that fits through the hole in the “funnel.” This is like having a vertical Jobo.

Do not complain about how much tanks cost. Film photography is expensive. It is a luxury good. You picked this path. Tanks are a critical piece of the developing puzzle.

Steel tanks are functional and use less liquid, but they require a lot more skill in loading film onto their reels. The big argument for steel has been that plastic reels degrade over time. That’s not borne out by my experience; I have some plastic reels that are 20 years old now – and still reliably load 120 film. It all boils down to keeping the ball-bearings clean and not warping the reels through hot water or abuse. Steel reels also are single-size: so you have reels for 35mm and reels for 120, and never do the twain meet.

As to reels, there is little to recommend actual Paterson-brand reels (except that they are basically free with the Paterson kit pictured above). Any compatible type will work, with Samigon/AP/Arista reels being slightly less nice but having a slight edge for newbies because they have loading ramps. Note that with these ramps, you will have to separate the two halves of the reel to safely remove the developed film. With no ramps, you can flex it out if careful.

Second, get a big changing bag. You will use this in lieu of a darkroom for film work. Some bags at Adorama, for example, can hold a Paterson 8-reel tank. To be frank, there is nothing to recommend finding an actual dark room. The inevitable result is that you notice little pinhole light leaks and freak out. Or you get disoriented and misplace things. With a changing bag, you are no worse off for not being able to see what you are doing, plus you can watch television while you load reels. Just don’t wear your Apple Watch or your tritium-lumed vintage watch. Actually, you shouldn’t do that in any circumstance where you are loading film into tanks.

Do not waste time trying to improvise a changing bag. Yes, there are Depression-era guides that tell you that they can be fashioned from sweatshirts, etc., but film had a much lower speed back then, and if you get light-struck film, you waste all of the efforts you made shooting pictures in the first place.

Finally, get a good glass thermometer that can go several degrees above or below 20C and has fractional gradations (recommended: Paterson PTP381, 15C to 65C). Metal thermometers are sometimes hard to read, can fog up, and never seem to be as accurate. You won’t break the glass thermo as long as you keep it in its square-profile tube. This is $25-30 well spent, since an accurate thermometer can mean the difference between usable and unusable negatives. Overly dense negatives are not fun for printing and really not fun for scanning.

Other hardware (not so specialized)

Timer. Could be anything that can be set for a time between 1 and 7 minutes. LCD kitchen timers are great. Anything that disappears when not stimulated (like the iPhone clock app) is not. Try getting that phone unlocked with wet hands. The Massive Dev Chart app has timers built in. And noises. And klaxons. We’re easily amused.

Film leader retriever. This can be used for two different operations. One, you can retrieve and trim the leader square at the end (if you bulk load film, and your camera has a rubberized takeup spool, you may have just left it square). Bonus points for rounding the corners to make the film load smoother into the reel. Two, you can pull all the film out of the cartridge, which obviates opening the cartridge (generally something you would do with a bottle opener – caps are crimped on really, really hard). Many people reload commercial cartridges by leaving a little film out and attaching the new film to that. Here is the Ars Imago (B&H house brand?) version ($10), which is the latest knockoff of the classic:

Scissors. You can use any household scissors. I would recommend something sharp that cuts straight. So not pinking shears.

Measuring vessel. A 1000ml graduated cylinder is customary. If you use HC-110, graduation in ounces may be more practical (since you mix 4 oz of developer into 124 oz of water to get 1+31, i.e., dilution B). If you want to see a real artifact of the past, some British grads have imperial ounces as well as US ounces and mL.

If you want to get really lazy, you can measure exactly 1 gallon of water into your storage bottle (or 4L, etc.) and mark with a line where the water level is. Dump out the water. From then on, you only measure the concentrated developer and simply fill with water to the line. Surprisingly, or maybe not, the width of a chisel-tip marker line is precise enough. Make sure you use this special bottle on a level surface.
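If you would rather compute than eyeball, the dilution arithmetic is trivial. A minimal sketch (the function name is mine, not anything standard); "1+31" means 1 part concentrate plus 31 parts water:

```python
# Minimal dilution helper: "1+31" = 1 part concentrate + 31 parts water.
def dilution(total_ml, parts_water, parts_dev=1):
    """Return (concentrate_ml, water_ml) for a given total volume."""
    parts = parts_dev + parts_water
    conc = total_ml * parts_dev / parts
    return conc, total_ml - conc

# One litre of HC-110 dilution B (1+31):
conc, water = dilution(1000, 31)
print(f"{conc:.2f} ml concentrate + {water:.2f} ml water")

# The 4 oz + 124 oz figure above checks out for a 128 oz (one US gallon) batch:
print(dilution(128, 31))  # -> (4.0, 124.0)
```

The same function works in ounces, litres, or hogsheads, since only the ratio matters.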

Storage bottles. Bad news here: the thin 1-gallon bottles used for distilled water make really poor darkroom storage bottles. They do not seal well, and the thin plastic is permeable to oxygen. That said, if you are not storing chemicals for more than a month, no problem. Eventually, you will want to save any 1 gallon or 5 liter bottle from store-bought photo chemicals and repurpose it for storage of diluted chemicals. For example, I have an old Photographers Formulary TF-4 concentrate bottle that I use to store diluted Ilford fixer.

Dump bottles. Your life will be a lot more fun if you can quickly dump chemicals when you change stages of developing. The dollar store had some cylindrical 1-gallon cereal containers marked off in liters and fractions of a gallon. With a 20cm opening, these can catch your dumped chemicals. Key qualities of a dump bottle:

  • Has a wide mouth so that a tank inverted above it will dump straight down.
  • Holds at least 2.5L of liquid – the capacity of the biggest developing tank – and preferably a gallon – 3.8L – because you can also use it to mix chemicals. Try stirring chemicals through the opening on a milk jug.
  • Has straight sides.
  • Has something to hold onto (like indentations) and is not slippery. Developer is basic (not acidic), and you will find that like soap, it makes everything it touches slick.

These do not need elaborate seals, or even to be truly airtight, because you are not using them to store chemicals. Having lids is still preferable.

Kitchen-type funnel. You already have this, though I don’t recommend using it for food or drink thereafter. If you have a spare Paterson “cone” for a developing tank, that also makes a good funnel.

Drying rack. For rolls of 24 frames or 120 film, you might find that a rolling laundry rack with a “grid” style top shelf is very practical (if you already have one). You can clothespin the film to the grid and use more clothespins to weight the film ends. Film does not curl as aggressively as it used to, so dedicated weighted clips are unnecessary.

If you don’t have a rack like this, your “top” can be made from a two-clip trouser hanger that you already have in your closet – and hung off whatever is convenient (overhead pipes, usually).

A 36-frame roll of film requires a lot of vertical space (and long arms); it pretty much has to hang from the ceiling.

Not critical

There are several things you can dispense with:

  1. Squeegees. These come packed with some older developing sets. They can be used to dry film faster. They are also good at scratching film if you don’t keep them clean. With the right wetting agent and a not-too-dry environment, film dries on its own in about an hour anyway.
  2. Weighted film clips. Not really needed.
  3. Hose-type rinsing attachments. If you use hypo-clear, the wash time for 35mm is not very long anyway. Plus these attachments tend not to fit any modern faucet. The longer you run water, the more likely you will have a temperature transient that can ruin your film.
  4. Forced-air dryers. If you are a photojournalist in 1965, and you have to rush out that print for the rotogravure section, yes. Otherwise, they are space- and energy-intensive. And actually frustratingly slow.
  5. Sous vide heaters. Much the rage for color; if your bent is black-and-white, you don’t need any artificial temperature control. I’m as much a fan as anyone of using kitchen tools, but you can leave this one alone.

DX labels: you’ll thank me on your wedding night!

Every man with a hobby or particular skill likes to publish a self-serving, single-criteria test of manhood: whittling, hunting, tiling a bathroom, fishing, purifying rain water, rebuilding a Cleveland V8, growing hydrangeas, surviving a Turkish prison after a bad rap for hashish, brewing beer, operating a sailboat, bedding a strumpet, making an adequate gin & tonic, constructing your own lightsaber, &c.

Now I say unto you that you will not truly be a mature adult unless you can generate your own DX decals so that you can use offbeat slow-speed film in your way-too-expensive compact camera. Or get your camera to read your Tri-X as 320 because your technique is that good, your meter is that accurate, and that 1/3 stop makes a huge difference. And because you’re too lazy to turn that ISO dial!

I was actually doing the former – trying to use 50-speed film in a Canon Sure Shot (Prima) 120 Caption, a phenomenal camera that oddly defaults to ISO 25 when it can’t read a DX code (the reliable plastic bulk loading cassettes are uncoded…). You just can’t overexpose Pan F Plus… and try using a P/S zoom at EI 25… and what better excuse to trash my home office with bits of paper and foil? And naturally, a child in the household had stolen the only X-acto knife with a good blade, so I wasn’t going to do it by hand.

Commercially-available DX labels are limited in ISO choices, and they are also surprisingly expensive. Also, film photography these days is about reinventing the wheel. You can make decals, in a completely overwrought and overly-technological way, using a machine that might already be in your household: the pattern cutter (Cricut, Brother Scan ‘n’ Cut, etc.).* We have the Brother,** so you may need to adjust your technique slightly for the Cricut. A Brother has two functions: drawing with a marker and cutting with a blade. We will use both techniques.

*I am fully aware that this is most likely to be in your household if you already have a spouse, and that the only way to get a spouse might be to perfect your DX decal skills, which is hard to do without a pattern cutter. Such a conundrum! Better brush up on your beer-brewing.

** The Brother is way more goth than the Cricut.

You will need: your cutter, its pen and knife attachments, a roll of commercial film for reference, a DX decoding chart (available online), some half-page (Ebay) labels, and a roll of self-adhesive metal foil (0.05mm / 0.002 inches or thicker). It can be any metal you want (aluminum, stainless, brass, copper), as long as it is conductive.

The drawn outer box. On your design software, make a box that is 33x15mm. Designate that “draw.” This will contain two rows of six boxes, each 5.5mm wide and 7.5mm high. Make these 12 boxes and position them in a grid. Looking at your DX chart, color the boxes you want to be insulators (i.e., black and not silver). Fill color doesn’t matter. These should be “draw” shapes.

Your DX code. Look at the decoder and figure out what film speed you want. That’s the first row. For the second row (exposure count), I would recommend 36 (so the 2nd and 3rd spots insulated). If your camera reads exposure count, it will then rewind neatly so you have 6 strips of 6.

Negative space (conductors). Now change all of the little white boxes (the ones you did not color in) to “cut.” Where they are touching, merge them. In the ISO 50 example in the pictures up top, these will result in one L shape and one T shape.

Optionally, you can also delete the color-filled boxes because they were only there for reference. Your finished label can use white paper as an insulator. But it also looks cool if you leave the solid boxes. That’s what I did for the pictures.

You can also add something to the top or bottom of your big box to remind you which direction the decal points. I make an extra 3mm box that I point at the 35mm cartridge opening. I suppose you could make a really long one if you wanted to.

Clone your decals. Now draw a selection box around your DX decal design and “group” it using the design software. This will allow you to clone and arrange copies without having any of the elements get out of place. I made two rows of 5, spaced 30mm top of one to top of the next, 50mm from left edge to left edge.
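If you would rather script the layout than click through design software, the geometry above (33×15mm outer box, twelve 5.5×7.5mm cells, copies on 50mm/30mm centers) can be sketched as SVG and imported into the cutter software. The insulator pattern below is a placeholder, not a real ISO code – read the right cells off your own DX chart:

```python
# Sketch: generate a sheet of DX label outlines as SVG. Geometry is from the
# text above; the insulator pattern passed in is a PLACEHOLDER, not a real code.
BOX_W, BOX_H = 33.0, 15.0    # outer "draw" box, mm
CELL_W, CELL_H = 5.5, 7.5    # 2 rows x 6 columns of contact cells

def dx_label_svg(insulators, x0=0.0, y0=0.0):
    """Return SVG elements for one label; `insulators` is a set of (row, col)."""
    parts = [f'<rect x="{x0}" y="{y0}" width="{BOX_W}" height="{BOX_H}" '
             'fill="none" stroke="black"/>']
    for row in range(2):
        for col in range(6):
            if (row, col) in insulators:   # filled cell = insulator ("draw")
                parts.append(
                    f'<rect x="{x0 + col * CELL_W}" y="{y0 + row * CELL_H}" '
                    f'width="{CELL_W}" height="{CELL_H}" fill="black"/>')
    return "\n".join(parts)

def sheet(insulators, cols=5, rows=2):
    """Two rows of five labels: 50mm left-to-left, 30mm top-to-top."""
    body = "\n".join(dx_label_svg(insulators, x0=c * 50.0, y0=r * 30.0)
                     for r in range(rows) for c in range(cols))
    return ('<svg xmlns="http://www.w3.org/2000/svg" width="250mm" '
            f'height="60mm" viewBox="0 0 250 60">\n{body}\n</svg>')

print(sheet({(0, 2), (1, 1), (1, 2)}))  # placeholder pattern only
```

The unfilled cells are your conductors; you would still merge and designate them as “cut” shapes in the cutter software, exactly as described in the next steps.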

Draw the decals. Move the design file to your cutter. Insert a sheet of label paper. Run a “draw” pass. This will sketch the outline of the DX decal, and if you left them in place, draw in and fill the insulator squares. If not, you will just see the outer 33 x 15mm rectangles.

Cut the codes. Now run the “cut” pass. This is where the magic happens. Do it with a “kiss cut,” or the type that does not cut through the lining of adhesive material. When the cut pass is done, you can pull out (I think they call it “weeding”) the shapes corresponding to the “conductors” – so I pulled a T and an L. You will see the shiny label backing through the holes.

Cut out all the decals as a group. Now cut around all of your labels as a group (I recommend scissors, but you could automate this). This will give things structural integrity because you will next peel them all off in one piece and set them on the top side of your metal foil (your “insulators” should all be attached at a minimum of one edge to the “frame”). From there, you can cut your individual labels as closely as you want.

Trim and apply. Now your metal foil holds everything together. Peel off its backing, position the decals on your cassettes using a commercial cassette for reference, and validate using a DX camera, preferably one that shows you the selected ISO. On a Nikon, for example, you can put the cassette in, close the back door, and if your ISO is on DX, all you need to do to read the cartridge is hold down the ISO button. Do this for each cassette.

You can obviously re-use your design file to make more – and it’s pretty easy to change ISOs in your design file. Just keep a master file in which all 12 of the little boxes are still separate.

You’ve made it! Years from now, when you have 2.5 children, a happy domestic situation, a great job, and a really cool electric car or carbon fiber bike, you’ll know that all this work paid off. If we don’t get to talk then, you’re welcome.

Fadeout: Ilford Pan F Plus

If you’ve never wondered what it’s like to be at a stage of your life where you feel like you are just waiting to die, I recommend bulk-loading Ilford Pan F Plus and not using all of it before the end of summer. When the light gets poor, using up a roll of film this slow can be as excruciating as watching your grandmother shooting a single roll of 110 film over three Christmases.

Pan F Plus is described as “35mm, ISO 50, high contrast, super sharp black & white film with very fine grain. Ideal for studio photography and bright, natural light.” It has considerable charm and makes great pictures:

  • It has fine grain and a ton of contrast, no matter what you use to develop it (HC-110 dilution B, however, has a very, very short development time).
  • It also makes it easy to shoot outdoor pictures with phenomenally shallow depth of field (witness above, a 50/1.4D AF Nikkor).
  • It holds overcast skies reasonably well.

It’s a classic b/w film, with a classic film speed. It is not a specialist film, as some might claim. It’s actually what a normal film would have been 50 to 70 years ago. It’s no Tech Pan. As a historical note, the closest Kodak product would have been Panatomic-X at a blistering 32 ASA, discontinued in 1987. Panatomic-X was also a general purpose film.

If you shoot medium format, an ISO 50 film can be something of a hair shirt, since it is difficult to get hand-holdable exposure with lenses that often have f/3.5, 4.5, or smaller apertures unless it’s a bright, sunny day. And sadly, most medium- and large-format lenses perform poorly wide-open. Shooting this with a medium-format SLR? Hope you have a sturdy tripod. Thirty-five millimeter, though, gives you fast lenses – which makes things more fun.

That said, the most curious – and soul-crushing – feature of Pan F Plus is its tendency to disappear. The impact of this image fragility is that you pretty much have to develop what you shoot, as soon as possible after you shoot it.

Although this keeps your photos current (by force!), you also find that it’s just as much work to develop one roll of film as eight. I asked Ilford for an explanation of why latent images fade so much faster than with any other film. My smartarse best-guess hypotheses were:

  • Somebody made a bad bet with the panchromatic doping back in 1992, and nobody bothered to change the formula to keep the image longer.
  • Kodak fans like to joke that Ilford makes the second-best product for any application, and Panatomic-X has left the room. Of course, the same Kodak fans like to needle poor old Tri-X, too.
  • Being owned by a pension fund (or venture capital company) means never having to say you’re sorry. Unfortunately, the income-generating pressures on both Kodak and Ilford have borne this out: some product has disappeared, and everything has become more expensive. Because shareholders.

The actual answer is (direct from Ilford staff – hooray for answering!):

a compromise with some other desirable characteristics. The basic formulation is probably the closest to the original of all our film emulsions even though it was updated several years ago. We have customers who are very attached to its particular curve shape and any emulsion redesign would inevitably change that so we are reluctant to touch it at the moment. However, we do review all our products and it is likely that at sometime in the future we will probably either update Pan F+ or replace it.

The note went on to explain that you should refrigerate the film after exposure to forestall this. Some of these points are expected (people liked the look… refrigeration slows down chemical activity), and some are puzzling (it sounds like some Ilford formulas changed a lot). I like this answer. It means that one day, forgetting a roll or two of shot film will not spell disaster.

But you have to wonder: if I waited long enough, could I keep shooting the same roll of film over and over and over again, and only develop it when I had shot 36 frames I liked?

Of course, during a quarantine, anything passes the time.

Sony GPS-CS3KA: we’re all seekers

Sometimes you see a photo accessory and wonder, “where the hell were you all this time?” And the answer is, “it was too easy, so Sony canned it.” The GPS-CS3KA (“GPSman?”) is a smallish box, maybe two-thirds the size of a Metz 26AF flash. It only really does two things: (1) keep a track log from GPS signals it receives and (2) write those coordinates to the JPGs on your SD card.

Note: Flashair – which has a built-in 802.11 transmitter – has much too high a current draw for the 1.5v battery powering the Sony GPS unit.

A reasonable solution to a stupidly common problem?

Wait? What? Most GPS solutions for cameras have been pretty terrible. For reasons that are unclear (perhaps metal covers), high-end cameras have not had built-in GPS. In fact, few cameras period have it – aside from the ubiquitous iPhone or Android. This leaves you with some suboptimal options:

  • Keep a tracklog with a separate device (GPS watch, tracklogger, battery-intensive phone app) and marry the coordinates to the files in Lightroom or Exiftool.
  • Use a separate device with Bluetooth to feed coordinates into your camera’s remote port (a la Red Hen).
  • Use a clunky GPS add-on that takes up both your remote terminal and hot shoe (looking at you, Canon and Nikon).
  • Try to graft an NMEA cable to your DSLR’s accessory port.
  • Use a clunky grip with GPS built-in (Leica Multifunction Grip M)
  • Stick a GPS in some other accessory, like an EVF that you might otherwise not use (Leica EVF-3).
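For context, the first option above – marrying a tracklog to your files – is simple in principle: for each photo timestamp, find the nearest GPS fix. Real tools (exiftool’s -geotag feature, Lightroom) also interpolate between fixes; this sketch just snaps to the nearest fix within a tolerance:

```python
# Sketch of timestamp-based geotag matching: snap each photo time to the
# nearest GPS fix. Real tools interpolate; this illustrates the principle.
from bisect import bisect_left

def nearest_fix(track, photo_time, tolerance_s=60):
    """track: sorted list of (unix_time, lat, lon). Returns (lat, lon) or None."""
    times = [t for t, _, _ in track]
    i = bisect_left(times, photo_time)
    # The nearest fix is either the one just before or just after photo_time.
    candidates = [track[j] for j in (i - 1, i) if 0 <= j < len(track)]
    if not candidates:
        return None
    t, lat, lon = min(candidates, key=lambda fix: abs(fix[0] - photo_time))
    return (lat, lon) if abs(t - photo_time) <= tolerance_s else None

# Toy tracklog with fixes 15 seconds apart, like the Sony's:
track = [(1000, 40.0, -74.0), (1015, 40.001, -74.001), (1030, 40.002, -74.002)]
print(nearest_fix(track, 1012))  # snaps to the fix at t=1015
```

This is also why the camera clock matters so much: a clock that is five minutes off puts every frame five minutes down the track.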

Sony quite possibly solved this problem by accident with the GPS-CS3KA, which takes a reading every 15 seconds into 128mb of memory – and when you insert an SD card, will look for the closest matches and tag your JPGs in batches of 60. I say “by accident” because operation is far too simple for a Sony (at least compared to a Bravia TV). There are only three options:

  1. GPS: display GPS screen – hitting enter gives you different permutations of time and GPS coordinates.
  2. Match: automatically counts the number of files to be tagged and only lets you start or cancel. Matching stops the GPS reception.
  3. Tools: set the time zone, undo-ability, and erase internal memory.
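That 60-file batch limit means a full card gets matched in several passes. The bookkeeping, if you wanted to script a checklist of sessions (pure illustration, nothing Sony-specific):

```python
# Illustration of the 60-file batch limit: split a card's worth of JPG
# filenames into the matching sessions the device will require.
def batches(files, size=60):
    return [files[i:i + size] for i in range(0, len(files), size)]

passes = batches([f"DSC{n:05d}.JPG" for n in range(150)])
print(len(passes))  # 150 files -> 3 matching sessions
```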

How does it work?

  • Stick a single AA battery in one slot.
  • Set your correct GPS time-zone offset (as I write this, -4:00 for Eastern).
  • Turn on the machine.
  • Shoot a bunch of pictures.
  • Put your SD card in the slot.
  • Use the “matching” function to assign locations (use “undo” to clear all of the data you just wrote).
  • Repeat as many times as necessary in batches of 60 files.
  • Done.

Note that when you initiate a card matching session, you may lose the GPS signal – but then again, you won’t be shooting pictures while your card is in the device.

Performance

GPS performance is actually quite good. A cold start will grab coordinates within about a minute; a warm start, about 10 seconds. Your very first startup will take minutes as the device downloads the satellite almanac. The device apparently can read a signal in many indoor settings, which is neat. Or scary.

My accuracy tests landed this within about 15 feet of where I was standing. The readout includes minutes and seconds, but for most purposes, seeing degrees at all is enough to know that it’s locked on.

Observed battery life with alkaline was about 12 hours. Not terrible, considering how much power this probably draws.

I did not test the Sony software, but I did note that connecting the USB cable does not bring this up as a drive with an easy-to-access GPX log.

Where does it work and not work?

I tried this with a Sony A7rii and with cards up to 64gb. The results were better than expected for a device this old.

Cards that work: up to 32gb only; the faster the card, the better (realistically, that’s a Sandisk 95mb/sec card).

To be safe, I would recommend using SDFormat and not opening cards with files on a Mac before encoding. Macs tend to throw indexing files on disks that are invisible to the user but can hang up particularly primitive embedded devices (of which you should assume this is one).

Cards that don’t: 64gb and up; WiFi-enabled cards. Cards of 64gb and up are SDXC, which come formatted as exFAT rather than FAT32 – likely beyond what the device can mount (and even devices that read FAT32 sometimes cannot address an entire large card). You get “matching error” as your only clue. As to WiFi, my best guess – since it works for a couple of frames and then blanks – is that the card sees that x files have been read and decides it’s time to turn on the WiFi. The problem is that one AA battery doesn’t have enough power to allow that. In my testing, there has been no way to shut off the FlashAir’s desire to start transmitting (unlike EyeFi, which could be set to transmit only images that were write-protected).

Files that get encoded: the spot of bad news is that the current ARW raw format doesn’t get location data with the Sony GPS. But since the device will record location data onto almost any JPG, it will work equally well (or poorly) with many types of cameras.

Assessment

Within the limits of a certain card size, and therefore speed, the Sony GPS does allow a relatively automated geotagging process for JPGs. Like Lex Luthor’s henchmen, it has “one job.” But unlike those people, who never succeeded at killing Superman, the Sony performs that job well.

Notably, you can generate tracking data usable with multiple cameras, since you can insert SD card after SD card and use the same body of GPS data to code files shot in the same time period. This is a bit more flexible than solutions that would have to be transferred from camera to camera (or just duplicated with good old cash). It does require that your cameras’ clocks be synchronized reasonably closely.

It does not solve the problem of writing geolocation data to RAW files (Lightroom, for example, simply ignores this data if you import both the RAW and the JPG), and no one will likely ever solve the mystery of why cameras don’t have inbuilt GPS. But it’s a lot better than trying to marry track logs and files by manual labor.

Archivism: immortalitas vel non

Everyone in this picture is dead. The man on the left could not beat actuarial tables. The next man over, in the yellow, had a stroke. The teenage girl died of breast cancer. The boy met an industrial accident. The lady in blue was hit by a car. And the guy on the right was killed when his girlfriend’s husband came home unexpectedly.

One. Ok, so I made that all up. What I do know is that this picture is from Rio de Janeiro in the spring of 1979. I know my grandfather took it. I know it’s on Ektachrome, in a Bell & Howell slide cube, in a tray of slide cubes, in a box, in my basement. And that is all I know about it.

Two. For fun, I put to a Facebook film group the question of how to deal with this — and thousands of other slides that contained no people that I (or any other living person) could identify, with little artistic or editorial merit (I could easily pull out the ones with family members, which is a small fraction). This was partly laziness; I could have just fed these into a Nikon LS film scanner over a few weeks. I asked what lab could scan pictures like these so that I would be “done” with them, throw them out, and free up some physical space. The reaction was as expected. What? Discard originals? They are more archival than digital, so why downgrade? The reactions ranged from puzzlement to indignation.

Three. Part of the difficulty in dealing with modern photographers is the idea that every sperm is sacred (apologies to Monty Python…) and that you can never, ever dispose of a physical piece of media, no matter how worthless. I chalk this up to being an artifact of digital – people don’t edit their digital work because storage is cheap. That carries over into a feeling that one can’t dispose of any piece of film, ever, never, not ever. Also, when film is expensive, you’re throwing money away, right?!

Do these guys know that in ye olden days (meaning just 25 years ago), people tossed slides all the time? I mean, there is no rotary slide magazine that is a whole number multiple of any length of film, unless you were shooting old rolls of 20 and hit 100% of the time… and not even the Almighty shoots that many keepers. Before matrix metering, it was hard as hell to shoot slides. Ok, shoot them well.

Do they know that when you’d pick up prints from a minilab, you would put the rejects right in the trash? How about leaving those neatly scalloped four-frame strips of badly stabilized C-41 negative in an acidic paper envelope for fifteen or so years?

Do they know that when you only get one frame to come out on a roll of film, you don’t have to save all six strips of negatives? Or, if you don’t like that one frame, any of them?

Do they know people threw away test rolls all the time? Today, I was adding up some numbers and figured out that I had shot about 1,900 rolls of film in 25 years – and that I had probably pitched fifty whole rolls of test pictures.

Four. The archival film protection business had a boom in the 2000s. Granted, old vinyl photo pages were a train wreck. “Try our new polyethylene ones. They last for centuries!” There was always something new: non-acidic fixer, paper, binders, sleeves, chemicals. Your pictures will live forever. Forever, of course, was a lot shorter time when everyone smoked.

With digital imaging came “archival” inkjet paper and the thousand-year, erm, hundred-year archival, pigment-based inks. Pushed partly as a way to justify charging big money for inkjet prints perceived as less valuable than chemical prints, these new materials turned out to be a way to perpetuate prints of bad pay-to-play nudes, early Photoshop compositing abominations, and anodyne and provincial landscapes. Had this work faded faster, it would have been immolated in trash-to-energy plants before that method of waste disposal was outlawed. Now they just stuff landfills, visual interest improved occasionally by the overturned bottle of Palmolive thrown in on top of them.

Today, we worry about the longevity of digital. You could record things on Mitsui gold DVDs. Or M-Discs. Or asynchronous offsite backups. Or in the cloud. Or in a holographic data storage array in a quartz crystal when that day comes. The possibilities are endless because we are constantly coming up with new ways to hoard and new ways to pack bits into smaller spaces using more permanent materials.

Five. As John Chrysostom would have said in the 400s (or actually did say…) “all is vanity.” Somebody once said that you don’t die until the last person forgets you. Many cultures and people have taken credit for this line (I first heard it on Westworld), but like all good retransmissions (or appropriations) of someone’s culture, it gets recycled because it actually is useful.

When we think about photography and archivism, we might be solving for the wrong variable. We try to make everything last forever using blunt force. The actual problem is motivating preservation in others, not in achieving it ourselves. You might think that color film will fade in 20 years. Or black and white in 100. Or that your prints will discolor and fade. Or that JPGs will somehow be obsolete in the future and unreadable.

The real danger is not time, or technology, or the elements, or phlogiston. The real danger is that the work will fall into the hands of someone with no interest in it – or for whom the effort of understanding the work is overwhelming compared to any potential benefit. When you’re at a secondhand store looking in that shoebox at the counter (or were, in the Before Times), you always wonder what kind of philistine gets rid of family pictures. Well, it could be you. Or me (see above). Or our children. All it takes is for someone to be looking at a collection of random pictures of strangers and to give a shrug of the shoulders. Someone to decide that there is no room for one more photo album. Or no point in renewing a cloud storage subscription. Or that they need that 12tb hard drive for something else. Or they lack the decryption key to open the drive with the files (nota bene: this is coming).

Six. Things become valuable for a couple of reasons: intrinsic value and attrition. An Ansel Adams print would be valuable even if the supply were less limited. By the same token, we preserve a lot of historic buildings and cars that were poorly designed or poorly made — but are the last exponents of their age. The average person has no ability to influence this aspect of his or her photography except (a) to be brilliantly good (bonus points for the back story that includes dying young of consumption) or (b) have his or her output survive some extinction event that wipes out trillions of other images. Let’s all shoot for “brilliantly good.” Dum spiro spero.

Seven. Maybe what we should do is not fixate so much on the hoarding as on encouraging future preservation. Is it an uncomfortable subject because it’s not something you can buy?

  • Things that are accessible are more likely to be enjoyed. That might be a printed photo album. It might be one that is shared online.
  • Label, organize, and give people a reason to save your stuff, long enough for it to become valuable (enough) to strangers. Why does this picture matter? Even banalities of everyday life can matter later. What may be an unimpressive picture of a hotel today might be the only visual representation in a future in which it has been knocked down.
  • Follow directions when processing your materials. You might be surprised at how long “non-archival” material lasts. In fact, the pictures in that shoebox in the antique store – printed on acid-containing paper and probably not properly fixed by today’s standards – are a hundred years old and have outlived the use anyone had for them.

You might find in the end that your time and money are better spent on life experiences than on making the record of them last a couple of years longer. If you do good work and give it meaning, people will find a way to preserve it.

Lomo LC-A 120: same disillusion, bigger package

When I was a second-year high school student, my English teacher came in, opened his copy of Adventures in American Literature to a poem, and (purported to) read the following:

I think I shall never see /
A poem as lovely as a tree /
Blah blah blah. Bullshit /
I hate Robert Frost /

It obviously was Joyce Kilmer and not Robert Frost whom he was skewering, but he was making a point. Although teaching methods like this might not seem as radical today, it’s hard not to have that Robert Frost feeling about “Lomography.” Some talent. But mostly boring pictures that are made interesting by lens defects, art defined by intentional and random flaws in raw materials, and a semiotic that has become so routine as to disappear into the noise of Flickr.

The Lomo LC-A 120 fails of its one essential purpose. Its lens is actually excellent. When you think about wide-angle lenses for 6×6 and up, the 38mm f/4.5 Minigon XL is quite wide. I use a 35 APO-Grandagon on a Horseman SW612, so I have some pretty developed ideas both about what is wide and what is good.

The spoiler alert here is that the LC-A 120 is a combination of a phenomenal lens with what might qualify as the worst $450 camera. In the history of ever. Not the G.O.A.T. but an actual goat.

Lens. Let’s start with the 38/4.5 XL. It is not a real XL like a Schneider 38mm; this barely covers 6×6 at anything but the smallest apertures. But it does have a couple of principal virtues when you shoot it with TMY: it has virtually no barrel distortion and is sharp from edge to edge when stopped way down. You almost have to wonder if this is an Arsat PC lens repurposed into a medium format one.

With black-and-white film, I can’t comment on lateral color shift, which seems to be what gives Lomo pictures their signature “color.” That and film that is way past its color prime.

Click on the picture below and then scan from side to side. Yes, it’s scanned on a Flextight and straightened slightly. But holy frijoles, it looks a lot like a $2k lens on a pano camera (granted, such a lens would cover a frame a lot larger than 55×55).

Focus. Focus is a bit more problematic, having steps of 0.6m, 1m, 2.5m, and ∞. The focusing lever snaps from position to position with a non-reassuring plastic “pop,” does not exactly match the marks, and stays put(!) when you slide the lens cover (and focusing scale!) upward to close the camera. The difficulty with zone focusing when you don’t know the shooting aperture is that your margin of error is also unknown. A 38mm lens on medium format does not exhibit pan focus except at very small apertures. I did test operation with a Contameter external rangefinder (the late plastic one that actually goes to infinity), but if you drop four hundred and fifty on a camera and another hundred on a rangefinder, you might as well buy a Fuji GA645w.
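To make that margin-of-error problem concrete, here is standard hyperfocal-distance arithmetic applied to the 2.5m focus stop. The 0.05mm circle of confusion is my own assumption for 6×6; pick your own and the shape of the problem stays the same:

```python
def dof_limits(focal_mm, f_number, subject_m, coc_mm=0.05):
    """Near/far depth-of-field limits from the hyperfocal approximation."""
    s = subject_m * 1000.0
    H = focal_mm ** 2 / (f_number * coc_mm) + focal_mm  # hyperfocal, mm
    near = H * s / (H + (s - focal_mm))
    far = H * s / (H - (s - focal_mm)) if s < H else float("inf")
    return near / 1000.0, far / 1000.0

# The LC-A 120's 2.5 m stop, wide open vs. stopped down:
print(dof_limits(38, 4.5, 2.5))   # roughly 1.8 m to 4.0 m
print(dof_limits(38, 16, 2.5))    # roughly 1.1 m to infinity
```

At f/16 the 2.5m stop covers about a meter to infinity; at f/4.5 you get a band of barely two meters. Since the camera never tells you which aperture it chose, you never know which of those worlds you are living in.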

Exposure controls. The original LC-A was zone-focused and aperture priority. With that setup, at least you know what will be in focus. The LC-A 120 has fixed program exposure that only has one combination of shutter speed and aperture for any EV. The nominal spec is “unlimited” time to 1/500 second, but it’s unclear whether the stopping-down tracks the light level linearly. You would think that on a camera like this, you might want to keep the shutter speed low to keep the aperture small. Sometimes the unintentional shallow depth of field works:

You can effectively apply exposure compensation (important when using Diafine) by changing the star-shaped ISO dial on the front.
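The arithmetic behind that trick is just the log of the ISO ratio; set the dial below the film’s true speed and the meter overexposes by that many stops. A minimal sketch of the relationship:

```python
import math

def compensation_stops(true_iso, dial_iso):
    """Stops of compensation from lying to the ISO dial.
    Positive = overexposure (dial set below the film's true speed)."""
    return math.log2(true_iso / dial_iso)

print(compensation_stops(400, 200))  # 1.0  (+1 stop, e.g. for Diafine pulls)
print(compensation_stops(400, 800))  # -1.0 (one stop of underexposure)
```

So 400 film with the dial at 200 buys you a stop of overexposure, which is the whole game on a camera with no other exposure control.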

Viewfinder. The viewfinder is clean and clear. And plastic. And lacking any horizontal or vertical reference marks that would tell you if the camera is level (or square to objects in the picture). This would make architectural photography difficult absent either a tripod and level – or a shoe-mount electronic leveling device. On a half-press of the shutter button, one light means the camera is reading and two means underexposure. Coverage looks like it is about 90%.

Shutter. The shutter operation in the camera is like a press shutter – pressing the button cocks and fires. If you engage the MX switch, you can repeatedly make exposures onto the same piece of film. You can even do it by accident, like this:

You will actually need the MX switch for those situations where you mostly press the shutter (releasing the wind and locking the button) but don’t actually take the shot.

Flash. Flash is actually a place where aperture control is important. Lomo has no explanation for how you should use flash except that you should set your automatic flash for f/4.5 (as if any automatic flash doesn’t just jump from f/4 to f/5.6). Shooting with flash does not trigger a short sync speed; everything is essentially rear-curtain.

Build quality. Burying the lede, or not. It is terrible. Horrible. The camera body is plastic. It’s not flexible, but it has all the charm of the pebbled plastic around the back seat of a family sedan. The camera back compensates for its lack of sophistication with wide foam seals.

The film tensioning leaf springs (note to Lomo: thank you for including these, unlike the foam blocks in the Belair) are attached to the film gate, which popped out of the camera the first time I tried to load it. The film gate has two significant (and apparently intentional) light leaks at its upper corners. Oddly, these were not plugged with foam seals. They should be.

Loading is not easy. You need to release the hubs with little switches. Pull the hubs down to release the spools. When you install a spool, at least theoretically, as long as the ramped portion of the hub is facing you, it should be possible to snap the film in. It’s not that easy. This seems like another place where a simpler mechanism (like a metal hub on a leaf spring) would work better and make people happier.

The frame counter does not depend on the movement of the film, just the movement of the takeup spool. Many LC-A 120 users seem to get fewer than 12 pictures on a roll. Presumably this is the product of fat-rolling the film, worsened by the imprecise frame counting that does not compensate for thicker films and backing papers.

I was able to nail it by putting the start mark of TMY right at the right “edge” of the lower-left film guide (i.e., halfway to the camera’s own start mark). I was lucky. Twelve frames take you to within 1cm of either end of a roll of 120 film. Frame counting would have been better left to a red window here. At least the framing would be consistent.
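The reason a counter driven by the take-up spool is so touchy comes down to geometry: as the roll builds up, each turn of the spool pulls through more film, so a counter geared to spool rotation has to assume a film-plus-backing thickness. A back-of-envelope sketch (the core radius and 0.20mm thickness are my assumptions, not Lomo’s numbers):

```python
import math

def advance_per_turn(core_radius_mm, layers, film_thickness_mm=0.20):
    """Film advanced by one full take-up spool turn after `layers`
    wraps of film and backing paper have built up on the spool."""
    r = core_radius_mm + layers * film_thickness_mm
    return 2 * math.pi * r

# Advance per spool turn grows as the roll thickens:
for layer in (0, 5, 10):
    print(f"layer {layer}: {advance_per_turn(6.0, layer):.1f} mm per turn")
```

With these numbers the advance grows by roughly a third from the first frame to the last, which is why thicker backing paper or a fat-rolled start throws the spacing off enough to lose a frame.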

But where from here? The heartbreak of this camera (if you can call a feeling about an inanimate object such) is that like the Lomo Belair 6×12, the camera started with some good bones and a great concept and was executed terribly. The Belair had bad light leaks and poor focus but decent lenses and an automatic shutter. Looks like Lomo landed in the same place here: great lens, functional autoexposure system, rickety everything else.

Maybe the fault is that the lens suggests the camera is better than it is. Maybe I just received an unusually good copy. Maybe my expectations were unrealistic.

You might think for a hot minute about remounting the lens, but when you add up the cost of a (controllable) Copal shutter and a focusing mechanism, plus whatever you are attaching it to, it’s far too much money. It’s also unclear how this lens is mounted in the camera – you might have to replicate a fair amount of the physical setup of the Lomo to make it work. Two of these lenses in a twin-lens setup? That would be neat, but you’d probably be close to the price of a bargain bin Rollei when you finished with it. Well, it was a nice thought, anyway.

Cameras like this are bought by fools like me /
But only F&H can make a Rollei.