
Venus Laowa 15mm f/4.5 Zero-D Shift W-Dreamer FFS Review

Months after it came out, there is nothing really out there on this lens except for some stock factory pictures and writeups that are predominantly plagiarisms of promotional materials, “why don’t you get to the point” videos, and vapid clickbait reposts of said videos. There are a couple of decent reviews, but I don’t feel like they were really pushing the lens.

So in the Machine Planet tradition of going off half-cocked, I will give you the dirt on this after spending a day shooting the Nikon F version of this in -3° C weather with a Leica Monochrom Typ 246. No need to start simple, or even with the camera body on which this lens was (ostensibly) intended to be mounted.

A Typ 246 is an all-monochrome, FX, 24mp Leica mirrorless body that can shoot to 50,000 ISO without looking even as grainy as Tri-X. It has a short flange distance, which means that virtually any SLR lens can be adapted to it. It has pattern, off-the-sensor metering, so there is no messing around with exposure compensation or trying to figure out why shift lenses underexpose on Nikon F100s and overexpose somewhat on the F4 (yes, this is true). It also has an inbuilt 2-axis level that you can see in its EVF, a welcome aid when it is cold outside. These features mean that you can use a shift lens handheld. This lens is a ~22mm equivalent on APS-C (DX) and I believe a ~30mm equivalent on Micro Four-Thirds. This probably is not a lens for MFT, since it is absolutely massive on any MFT body. In fact, it seems really big for a Sony Alpha body…
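If you want to sanity-check that equivalence arithmetic, it is just focal length times crop factor. A throwaway Python snippet (the crop factors are the usual published ones):

    # Equivalent focal length = actual focal length x crop factor.
    FOCAL_MM = 15

    for fmt, crop in [("APS-C / DX", 1.5), ("Micro Four-Thirds", 2.0)]:
        print(f"{fmt}: {FOCAL_MM}mm x {crop} = {FOCAL_MM * crop:.1f}mm equivalent")
    # APS-C / DX: 15mm x 1.5 = 22.5mm equivalent
    # Micro Four-Thirds: 15mm x 2.0 = 30.0mm equivalent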

The physical plant

The first thing you ask yourself about this lens is, “how could a lens out of China possibly cost $1,199?” But this is a shallow (if not also culturally chauvinistic) observation. Your iPhone is made in China, and there is nothing wrong with its lenses. Or, apparently, your iPhone’s price. Venus is something of a newcomer in the camera lens market, and it uses the designator “Laowa,” which is a reference to frogs in a well (not kidding… check out the Facebook page). The idea, they say, is to look up at the sky and keep dreaming. That, of course, is possible where the cost of manufacturing a zillion-element, double-aspherical lens is relatively low. The front ring reads “FF S 15mm F4.5 W-Dreamer No. xxxx.” FFS of course stands for “Full-Frame Shift.”

The 15/4.5 lens is available in a variety of mounts. Word to the wise: get the Nikon F or Canon EF version. Nikon has the longest flange-to-focal distance at 46.5mm, meaning that it has the shortest rear barrel, meaning the maximum compatibility with mount adapters (with simple adapters you can go from Nikon to any mirrorless camera, including Fuji GFX). Canon EF is a close second at 44mm. If you have an existing Canon or Nikon system, just take your pick. Your worst choice is buying this lens in a mirrorless version (Canon RF, Nikon Z, or Sony FE), since you will end up locked into one platform exclusively. Remember that this lens has no electronics or couplings, so adapting it is just a matter of tubes.

The lens comes packed in a very workmanlike white box, just like $50 Neewer wide-aperture lenses for Sony E cameras. This is a mild surprise, but nobody maintains an interest in packaging for very long after a lens comes in. Nikon lenses, after all, come in pulpboard packaging that strongly resembles the egg cartons your kids might give to their hamsters as chew toys. The instructions end with the wisdom, “New Idea. New Fun.” And that is very on-point: for most people, photography is about fun.

The lens is a monster, and it’s not lightweight. It feels at home on something at least the size of a Nikon F4 (and balances well on one, btw). On an M camera, you need to employ the Leica Multifunction Grip (or something similar) to effectively hold onto the camera (this combo can still break your wrist…). Weight as ready-to-mount on a Leica is 740g. For comparison, a Summilux 75 (the original gangster heavyweight for M bodies) is 634g. An 18/3.5 Zeiss ZM Distagon is 351g.

The front element is bulbous. And you must remember that you cannot simply set the camera nose down, since (1) the glass sticks out, (2) there is no filter protecting it from damage from the surface the lens rests on, and (3) this is a really expensive lens. This is also a lens whose lens cap you cannot, must not, ever, lose. It is solid, pretty, bayonets on, and probably can’t be replaced. It is not clear why – if you can mount a 100mm filter holder to the front of this lens – such a holder is not simply built into the lens, if for no other reason than protecting the front element.

I mounted mine with a Novoflex LEM/NIK adapter, which is pretty much the only dimensionally accurate anything-to-M adapter. Proper registration is a big deal because a 15mm lens cell has very little travel from zero to infinity.

The Novoflex’s stepped interior suggests a place to stick a filter — since the lens has no front filter threads — but for reasons discussed below, this is not a big deal. And in the back of the lens, it’s gel filters – or nothing.

Controls/handling

First, this lens is easy to handle wearing gloves. Which, given the temperature yesterday, was fortunate.

This is a little bit different from a traditional PC lens, on which turning a knob would make the shift. The Venus has a third lens ring – behind the focus (front) and aperture (middle). This is different from a Nikon PC lens, for example, where the aperture is front and focus is rear.

The shift ring cams the lens back and forth along the direction of shift, 11mm in either direction. You would think this would interfere with focusing or using the aperture ring, but in reality, it’s likely the only ring you would be moving on a shot-to-shot basis. This lens has such staggering depth of field that you will put this roughly on ∞ and forget about the rest, and you will probably turn it to f/8 and leave it there. Shift is locked with a knob that looks like the knob Nikon uses to shift the lens.

There is a small tab that locks the rotation of the shift mechanism, which can be set to 0 for horizontal pictures, 90 or 270 for verticals, and 180 if you are strange. It moves in 15-degree increments. A 28/3.5 PC-Nikkor does not have a lock, which occasionally can make things exciting if you start framing and realize that your shift is now 45 degrees from vertical (or horizontal).

The aperture ring has light clicks and is logarithmic (unfortunately) – each stop at the wide end gets roughly the same amount of ring travel, but things do bunch up at f/11, f/16, and f/22. It’s puzzling in this price range.

The focus ring has a short throw, infinity to 1m being about 1cm of travel. Set it and forget it. If you’re looking at pictures on the net and wondering why the focusing scale makes it look like the lens focuses “past” infinity, it’s a mystery.

  • At the hard stop and no shift, the lens is indeed focused at infinity. But the scale is off.
  • At the hard stop and shifted, the focus is still correct at infinity.

I verified optical focus at the stop on a Nikon F4 with an adapted red-dot R screen (grid/split prism/f<3.5), with the F4’s phase-detection AF sensor, and with the Leica.

To understand the strangeness of the Venus focusing ring, consider that in an old-school, manual focus lens, you typically have three things in synch for “infinity.”

i. The lens is at its physical stop, meaning you can’t turn the focusing ring to make the optical unit get closer to the imaging surface. This is normally an inbuilt limitation. It is not typically a critical tolerance on a lens due to the two adjustments below.

ii. The lens is optically focused at infinity, meaning that an infinitely distant object is in-focus on the imaging surface. This is usually a matter of shimming the optical unit or, in some lenses, using a similar adjustment for the forward/backward position of the optical unit.

iii. The focusing scale reads ∞. In the old days, this was simply a matter of undoing three setscrews, lining up the ∞ mark with the focus pointer, and then tightening the screws. If you are a super-precise operator like Leica, your lens stop/focusing ring/scale are made as one piece and so precisely that no separately applied focusing scale is required.

When a manufacturer of modern autofocus lenses (or even high-performance manual telephotos) is confronted with design constraints, it generally omits the relationship between (i), the physical stop, and (iii), the infinity mark. It will do this on telephotos (like the 300/4.5 ED-IF Nikkor) because heat-related expansion might otherwise prevent a telephoto from actually focusing on a distant object. With AF lenses, hard stops are not the best for the fallback “hunting” mode — and with the user relying heavily on AF anyway, there is no need to inject another thing to check in QC. By the way, on a lot of AF lenses, the focus scale is basically just taped on – eliminating the setscrews.

Cheaper lenses, like the Neewer I-got-drunk-and-bought-it-on-Ebay specials, don’t really couple any of these things precisely. The stop is set so that you can optically focus past infinity and yet when the lens is optically focused at infinity, the focus scale might read somewhere between 10m and the left lobe of ∞.

For reasons that are frankly baffling, Venus uses a different idea entirely, which is to match the collimation and the stop – the hard part – and yet to omit matching the focusing scale. This provides no ascertainable benefit unless the focusing ring is not just a ring but an integral part of the focusing mechanism. I don’t see any setscrews, so maybe this is the explanation. And really, something in this price range should have things line up, even if it means adding one more cosmetic part to make the focusing scale adjustable.

On the surface, this design choice is frustrating to perfectionists and degrades the value of the focusing scale. That said, in 99% of pictures you take with this lens, you’re going to set it to the hard stop and get more than sufficient depth of field for close objects just by virtue of stopping the lens down.

If you are reading this, Venus, the focus scale design needs to be fixed.

Shooting

There was nothing remarkable about shooting this lens, which is a good thing. As long as you realize it has no electronic connections or mechanical control linkages to the camera, it… works like any Leica lens.

They used to advise that PC lenses had to be used on tripods. That was true when (1) cameras did not have inbuilt electronic levels and most did not have grid focusing screens, (2) viewfinders blacked out at small apertures and with shift, and (3) through the lens meters freaked out at the vignetting.

None of those conditions exist with mirrorless cameras, where viewing is off the sensor, focusing is by peaking, and signal amplification makes it possible to frame a picture even closed down to f/16. On the Leica Monochrom, for example, it is very easy to use this lens – no different from using any other with the EVF. The M typ 240 series cameras have inbuilt levels that are visible through the EVF; the later M10s do too. A visible level is absolutely essential if you are going to shoot this (or any shift lens) handheld.

Speaking of the sky, the sweep of this lens, its vignetting, and its self-polarization mean that in many pictures, the sky will be darker than you expect. Most people will not mind. I suppose you could mount a 100mm filter to the front or a gel in the back, but this is highly dependent on what you are trying to do, your tolerance for the expense, and the light response of your camera.

One thing you begin to realize is that if you switch from a 28mm PC lens to a monster 15mm PC lens, you go from shifting exclusively up to avoid converging parallels – to also shifting down to cut down on excessive sky. You might think of the shift as the “horizon control” adjustment. The challenge is, at the end of the day, that this is still a 15mm lens with a super-wide field. Unlike a 28 or 35, you need to think about both the top and the bottom of the picture.

One other thing you will see in a couple of the pictures in the article is that a slight forward tilt of the camera can make things look slightly bigger than they should at the top. This is user error and the unintended opposite of converging parallels.

With wide lenses, you need to watch 3 axes of alignment – left/right tilt, front/back tilt, and critically, parallelism to the subject. This last point can be a major irritation with this lens since cameras don’t typically have live indications of whether you are square with the subject.

Sharpness

Note: WordPress scales pictures down and not in a flattering way; if you want pixel-level sharpness comparisons to other lenses, there are other reviews out there that do that.

The jury is still out here – at least until I get a sunny day and hook this up to an A7r II, which is more representative of cameras most people would use with this lens. But the foreman is asking some of the right questions for the verdict we want. Field curvature is also something that needs more exploration. As it stands, though, the lens seems to be more than sharp enough for its intended purpose.

All wide-angle lenses have degradation toward the edges of the frame. Many cameras don’t have the resolution to make it obvious, but this is a well-known reality. Shift lenses have a bigger image circle, which gives them comparable (not stellar, but comparable) performance to normal lenses over a wide area. They are “average,” but average in the sense that they are reasonably sharp over the whole frame, not super-sharp in the center and falling apart at the edges.

Put another way, a shift lens for 35mm is essentially a medium-format lens. Medium format lenses do not have the highest resolution – because they don’t have to. But they do deliver their performance over a wider field. But by shifting the lens, you are bringing lower-performing edges of the field into the 35mm frame.

But… you protest… my AIA book has all of these perfect architectural pictures of xyz buildings.

No, it does not. First, they are tiny, and with the halftone screens, they give off an impression of being much sharper than they are in reality. Second, if you look at an original print close up – pixel-peeping on prints was never normal when people made prints – you’ll see that the pointy top of that building is fuzzy because someone used a 4×5 or 8×10 camera and shifted it to accommodate the tall object in the picture. But seeing it in a gallery or an exhibit, you would (1) be standing back from it and (2) be paying attention to the center of the frame, which is where most pictorial interest is. That pointy top is in your peripheral, not central, vision. The central part still has adequate performance for the purpose.

For this reason, the sharpness of a shift lens can only really be understood in terms of shift lenses or shifted medium- or large-format lenses: if you leave a little sky above a tall building, you don’t have to confront so much the inevitable performance falloff in those last couple of mm of the frame. All shift lenses have this issue, and it goes both to illumination and sharpness. Go to maximum shift on anything, and you can expect image degradation at a pixel-peeping level in the top third of the image.

So what? This is the same thing that people with shift-capable cameras have faced since… forever.

And why do shift lenses exist? The answer is pretty simple: it’s easier to get to a good result than with many types of post-correction. If you plan to do post-correction, you have to use a much wider lens than you normally would, you have to crop (because tilting an image in post makes the field a trapezoid that must be rectified), and you have to have an accurate measurement of the scale of the original object. On this last point, if you don’t know the XY proportions of a building’s windows, perspective-correcting it in post-processing will result in awkward proportions. So if you have a 42mp image that needs serious correction to make a tall building upright and correctly proportioned, you may end up with less than 20mp of image by the time the process is over. And since tilting magnifies the top edge of the image, you are magnifying lens aberrations in the process.
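To make the cropping cost concrete, here is a minimal sketch of the kind of warp a post-correction tool applies, done with OpenCV in Python. The frame size matches a 42mp camera, but the amount of convergence and the file names are invented for illustration; the point is that the top of the frame gets stretched (interpolated) and the result must then be cropped back to a rectangle.

    import cv2
    import numpy as np

    # Hypothetical 42mp frame of a tall building whose verticals
    # converge toward the top by about 12% (made-up number).
    w, h = 7952, 5304
    inset = int(0.12 * w / 2)  # how far each top corner leans inward

    # Where the building's top corners actually sit in the capture...
    src = np.float32([[inset, 0], [w - inset, 0], [0, h], [w, h]])
    # ...and where they must go for the verticals to come out parallel.
    dst = np.float32([[0, 0], [w, 0], [0, h], [w, h]])

    img = cv2.imread("building.jpg")  # stand-in file name
    M = cv2.getPerspectiveTransform(src, dst)
    corrected = cv2.warpPerspective(img, M, (w, h))
    cv2.imwrite("building_corrected.jpg", corrected)

    # The top edge is now magnified: every pixel up there is
    # interpolated, and any lens aberration is magnified with it.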

Post-correcting does have one advantage, though, which is that you can use a lens that performs highly across the frame. I do it a bit with the Fujinon SWS 50mm f/5.6 on a 6×9 camera: when you are working with a wide lens, from a 96mp scan, you have plenty of resolution to burn in fixing one degree of inclination. This is not so much the case with a 35mm lens on a 35mm body.

As of this writing, Leica just announced in-camera tilt correction for its 40mp M series cameras. This is an idea long overdue, since the camera knows what lens is mounted (or can be told) and the inclinations at the time of the shot.

You don’t escape post-processing with shift lenses, particularly when you have to fix skew between the image plane and the subject (rotation around the vertical axis of your body). PC lenses also have distortion to contend with, and simple spherical distortion sometimes seems less simple when the “sphere” is in the top half of the frame. But the corrective action is far, far milder.

The complication with digital and shift lenses is diffraction. With a shift lens, you need a small aperture to even out the illumination and sharpness, but that small aperture cannot be smaller than the diffraction limit without degrading sharpness overall. That’s f/11 on a Leica M246 and roughly f/8 on a Leica M10 or a Sony A7r II or A7r III series camera.
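The arithmetic behind those numbers is the Airy disk: its diameter is 2.44 × wavelength × f-number, and softness becomes visible somewhere around the point where the disk spans a couple of pixels. A rough calculator – the 2.5-pixel threshold is my own assumption, and reasonable people use anything from 2 to 3:

    WAVELENGTH_UM = 0.55  # green light, in microns

    def limit_fstop(sensor_width_mm, horizontal_px, airy_px=2.5):
        pitch_um = sensor_width_mm * 1000 / horizontal_px
        # Airy disk diameter = 2.44 * wavelength * N; solve for N.
        return airy_px * pitch_um / (2.44 * WAVELENGTH_UM)

    print(f"M246 (24mp FF):   ~f/{limit_fstop(36, 5952):.0f}")  # ~f/11
    print(f"A7r II (42mp FF): ~f/{limit_fstop(36, 7952):.0f}")  # ~f/8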

A further complication with all shift optics is dust. Small apertures, smaller than f/5.6, tend to show dust on the sensor. Shift optics have at least one extra place for dust to get into the camera body (the interface where the shift mechanism slides the two halves of the lens).

Sharpness seems to peak at around f/8 on the Venus, which is not surprising. The sharpness itself is good as well as consistent until the very margins of a shifted frame; I did not need to turn on sharpening in Lightroom. As with all lenses, apparent sharpness is higher on closer objects – because their details are bigger in the image, pixel-level aberrations are not as apparent.

Distortion

The goal is “Zero-D(istortion).” The lens gets close – and better than most SLR lenses in this range, and certainly better than a lot of SLR PC lenses – but not completely distortion-free. Unshifted, it looks like a relatively mild +2 in Lightroom (the shot above is uncorrected except for slight horizon tilt). Shifted might be a little tougher to correct, but you can either create a preset for Lightroom or use some of the more advanced tools in Photoshop.

Flare

Yes. It has flare when light hits it wrong. Check out the picture above. Sometimes it works. Sometimes it is an irritation. Luckily, it does not seem to happen very often.

Value Proposition

There is a real tendency to abuse superwides in photography today, usually to disastrous effect due to the inability of photographers to properly compose pictures. Companies like Cosina/Voigtlander have fed into this, as has Venus, with about a dozen high-performing superwide lenses that would have seemed impossible just a few years ago. “Wide” used to mean 35mm; now “wide” tends to mean 24mm, and “superwide” is below 15mm. The Venus has all of the vices of a wide-angle lens, notably posing the question, “what do I do with all this foreground?”

By the same token, shift lenses are very specialized tools. Old-school shift lenses were the least automated lenses in their respective SLR lines; new ones are marginally more automated (mainly having automatic apertures), but they are staggeringly expensive.

The Venus somehow manages to combine the best and worst of all of this. You cannot argue with the optical performance as a shift lens, but the lack of automation (and frankly, ease of use) makes it just as miserable to use on a native SLR body as any old-school shift lens was. You’ll note that where people complain about this lens in reviews, that’s what they complain about. I’m not sure that merits much sympathy; you know what you signed up for. What makes the Venus more fun is that it connects to mirrorless bodies that, by virtue of their EVFs, remove a lot of the irritation that would occur using the lens on a traditional SLR body.

Whether you will always be shooting 30-story buildings from 200m away is a matter of your own predilections, and that might be the deciding factor. Unless you are really good with wide-angle shots – or are a real-estate photographer in Hong Kong – you may not have a very solid (or at least somewhat economically viable) use case. But in reality, the market is not driven by professional needs. If it were, the only things that would ever be sold would be full-frame DSLRs, superfast 50mms, and the “most unique wedding I’ve ever seen” presets package for Lightroom.

Bottom Line

Pros: solid build quality, clever shift mechanism, wide angle of view,* reasonably low distortion, actually collimated correctly for its native mount.

Cons: non-linear aperture control,** odd (incorrect?) focus scale calibration,** facilitation of compositional errors you never previously imagined possible,* bulbous front element, no inbuilt filter capability, and a lens cap that only mounts one way.

*Qualities that would be inherent to any lens this wide with shift capability.

**Qualities that do not typically belong on lenses in this price range.

Adobe face recognition: beat the system?

The Kobayashi Maru test is not a test of character unless you see the world in terms of “go down in dignity with the [star]ship” or “be a coward.” Or whatever Nick Meyer thought the outcomes would be. Captain Kirk won the test by not accepting a binary decision tree. This is exactly how you should approach any problem that looks like it is unwinnable. Rewrite the simulation. Use a screwdriver as a chisel.

One of the ways you can do this is to ignore the process as presented completely, decide your goal state, and then selectively use whatever is available to get there. Face recognition is exactly such an exercise. Adobe would have you select one of two suboptimal tools (Lightroom Classic or Lightroom) and have you build out the recognition process and leave it in the platform where it started.

Not believing in the no-win scen-ah-ri-o (sorry, Shatner), I started with the first principle:

What is the purpose of face recognition in photos?

This is actually a really good question. The way the process proceeds in Lightroom (either version), you would think the purpose is to name every person in every photo and know what precise face goes with every name. This view assumes that you are a photojournalist who needs to capture stuff. You will go bat crazy trying to achieve this goal if your back catalog is hundreds of thousands of pictures and you use Lightroom Classic (“Classic”) as your primary tool.

Let’s face it – you are (at least this year) a work-at-home salary man, not Gene Capa. The real utility of face recognition is to pull up all pictures of someone you actually care about. You need it for a funeral. For a birthday party. For blackmail.

That does not actually require you to identify precise faces, just to know that one face in the picture is the one you want. You already know this person’s name and how the person looks. And even if you didn’t remember, a collection of pictures of that person – no matter who else was in or out of the shot – would have one subject in common. You would know within a few pictures who John Smith was.

Taking this view, a face identification is just another keyword.

It’s not even 100% clear that you would ever need it done in advance, on spec, or before you had a real need to use it.

What do we know about face recognition in LrC vs LR?

Our problem statement: 250,000 images of various people, some memorable and some not. I want to be able to pull up all pictures of John, Joe, Jane, or Bill. And I want this capability to last longer than my patience with Lightroom cloud. I want to be able to ditch Lightroom, even Classic, one day and change platforms without losing my work.

When you are figuring out a workflow, or trying to, it’s helpful to consider what your tools can and cannot do; hence, with Classic and Cloud, start breaking down the capabilities.

  • Both recognize faces with rudimentary training.
  • Cloud is much faster than Classic and tends to have fewer false hits (due to Sensei).
  • Both can do face recognition within a subset of photos.
  • Classic can apply keywords to images that Cloud can see.
  • Cloud cannot create keywords that Classic can see.
  • LrC has better keyword capabilities, period.
  • You can make an album in Cloud and have it (and its contents) show up as a collection in Classic.
  • You can put things in one of these items in either program and have it show up in the other.

Do these suggest anything? No? Let’s step through.

Step-by-Step

Let’s talk about some preliminaries that no one ever seems to address.

Order of operations. If you are starting from zero, you should identify faces in the import every time you import something. Not only are names of near-strangers fresher in your mind, it also prevents the kind of effort we are about to explore.

What’s my name? You must have a naming convention and a normalized list of names. It doesn’t matter whether you pick someone’s nickname, real name, married name, whatever. Whatever you decide for a person must be treated consistently. Is my name Machine Planet? Planet Machine? PlanetMachine? This has implications for Classic, where you can’t simply type a two-word name (Bill Jones) into the text search box without getting everyone named Bill and everyone named Jones. For that you might want to concatenate both names together (unless you want to use keywords in the hierarchical filters). In Cloud, the program can sort by first and last name, so there is value in leaving these separate.
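If you want to be mechanical about it, the convention can be a tiny function rather than a resolution you abandon in a week. A sketch – the names and the “People” hierarchy are my own invention; the pipe-separated form mirrors how Classic expresses hierarchical keywords in XMP:

    def keyword_forms(first, last):
        display = f"{first} {last}"         # Cloud can sort on first/last
        concat = f"{first}{last}"           # one text-searchable token for Classic
        hierarchical = f"People|{display}"  # Classic-style hierarchical keyword
        return display, concat, hierarchical

    print(keyword_forms("Bill", "Jones"))
    # ('Bill Jones', 'BillJones', 'People|Bill Jones')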

Stay in the moment. Although you might be tempted to run learning against every single picture you have at once, this leads to a congested Faces view (or People view), slow recalculation in Classic, and a lot of frustration. Do a day or a week at a time. Or an event. This will give you far fewer faces from which to choose, and fewer faces to identify. Likewise, if there is a large group picture in the set, focus your effort on tagging everyone in it. This will set up any additional Identified People in Classic and will kickstart Cloud.

Who’s your friend? You next need to decide who is worth doing a lot of work to ID. You are not going to do iterative identification (especially on Classic) with people you don’t care about. Leave their faces unidentified. Or better yet, delete the face zones. This is a very small amount of effort in a 200-shot session or a 36-shot roll of scanned pictures.

Start in Cloud. This part is not intuitive at all. Go ahead and sync (do not migrate!) all your pictures to Lightroom mobile. This consumes no storage space on the Adobe plan. If there are a lot that have no humans, use a program like Excire Search to detect pictures with at least one face pointed at the camera. This is a reasonable cut, since there are few pictures you would bother tagging that have one face, solely in profile.

The synch process will take forever. I don’t think there is a lot of point in preserving the Classic folder structure when you do this; I would just make a collection like “Color 2000-2010” in the Classic synched collections and dump your targets into that (n.b. a collection in Classic is just an alias to your pictures; making a collection does not change the folder arrangement on your computer). We are only using Cloud for face recognition; its foldering is too rudimentary and inflexible to be useful – although right-clicking in Classic to make folders (or groups of folders) into synched folders will let you adopt the Classic organization in Cloud, albeit flattened, without re-synching. Again, not very useful. Also, for reasons described further on, you want to have a relatively clean folder panel in Cloud because you will be making some albums, and you don’t need extra clutter.

Ok. Let the synch run its course, or start your identification work on Cloud as it goes. Cloud will start aggregating what it thinks is the same face into face groups, which you then must name. Start naming these according to the convention you chose. I would put the People view to sort by “count,” which naturally puts the most important people at the top (you have the most pictures of them). Let’s say you name one face group “John Smith.”

Crossing over

The process so far is pretty generic. To start crossing things over to Classic, you need to make folders (“albums”) in Cloud. Start with one per important person (“___ John Smith”). Search for that person. Dump the search results into the album. You can always add more later.

Now flip back to Classic. You will see collections under “From Lightroom.” Voilà! One of them is “John Smith.”

Now you can do one of two things.

You can simply make a quick check to make sure there are no pictures included that obviously are not John Smith. But after you do that, or not, you can mass-keyword everything in that collection “John Smith.” If you named John Smith consistently with any pre-existing Classic face identification of John Smith (i.e., not two different variations of the name), your searches will now have the benefit of both tools. Save those keywords down to the JPG/TIFF files (Control-S/Command-S) or XMP sidecar files (same), and you will forever have them, regardless of whether you leave the Adobe infrastructure. In fact, many computer-level file indexes can find JPGs and TIFFs by embedded keywords (which the index sees as text).
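To prove to yourself that the keywords really live in the files rather than in Adobe’s database, read one back with exiftool (wrapped in Python here; the file name is a stand-in):

    import subprocess

    out = subprocess.run(
        ["exiftool", "-Keywords", "-XMP:Subject", "smith-family-001.jpg"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout)
    # Keywords : John Smith
    # Subject  : John Smith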

Congratulations. Now you’ve hijacked Sensei into doing the dirty work on Classic.

With a small but not overwhelming amount of creativity, you could use a technique like this to cross-check your past Classic calls.

STOP HERE AND GO TO “CALIBRATING YOUR EFFORTS” UNLESS YOU ARE A MASOCHIST

Second, if you’ve missed your OCD meds, you can also use the results of this to inform your Classic face-recognition process.

a. Select this “From Lightroom–>John Smith” collection and flip to Faces view in Classic.

You are now seeing all “Named People” and all “Unnamed People.” Unnamed people are shown by who Classic thinks they are most likely to be. You can sort Unnamed people in various ways, but however you do it, you want to get the “John Smith?” suggestions in a contiguous section where you can then confirm or X out. By going into this in the From Lightroom–>John Smith collection, you are not waiting for recalculations against every photo you have – just the ones that Sensei thought should have John Smith.

So the cool trick is this: if you see 106 pictures in From Lightroom–>John Smith, then you know you are probably going to be done when you have 106 confirmed pictures of him. Or done enough. John can only appear in a picture once. There will be a margin of error due to how closely Classic can approximate Sensei, but you can get to about 90% of the Sensei results without a lot of trouble. This is a bit better than Classic on its own, where more pictures of John Smith at an earlier age might be really buried down in the near matches. Further, Classic is something of a black hole for similar pictures because unlike Cloud with Sensei, there is no minimum required similarity score to be a suspected match.

b. You can, of course, drill down on John Smith as a Named Person. You don’t have much control over how “Similar” pictures are ordered (I believe it is degree of match for the face), but here, you can confirm a much more concentrated set (after you decide how to deal with the “fliers” who are not John Smith).

One other technique I have developed while in the “confirming” stage is that it may be easier to confirm en masse (even if some are wrong) down to the point where the “not John Smiths” are about a third of the results in a row of Similar faces. A small number of “fliers” can be removed by going up to the Confirmed pictures, selecting them, and hitting delete. Trying to select huge swaths of unconfirmed faces in Similar and then unselecting scattered fliers tends to really slow things down. As in, Classic really slows down as it tries to read metadata from everything you selected.

Incidentally including and then manually removing a few fliers from Confirmed does not seem to affect accuracy (because every recomputation of similarity is on the then-current set of Confirmed faces – changing that set changes the computation). If you have 99 pictures that are right and one that is wrong, it won’t even change the accuracy appreciably. If in Confirmed, you have 995 pictures that are John Smith and 5 that are not, again, the bigger set of correct ones will predominate future calculations.

Next, at some point, especially with siblings, Classic is going to reach a point where Jane Smith (John’s Sister) is going to show up as a lot of the “Similars” with John Smith. When this happens, go back to Faces (top level, always within From Lightroom–>John Smith), click on her, and confirm a bunch of her pictures. When you go back to Named Person John Smith, a lot of the noise will be gone, and hopefully more John Smiths will be visible in a concentrated set you can bulk-confirm.

Crossing back (optional)

I did write “iterate,” right? You might want to keep your Cloud face IDs as complete as possible, since there is not 100% correspondence between results from the methods used by the two platforms. This is relevant if you have already trained Classic on John Smith.

  1. In Classic, note the count in your From Lightroom–>John Smith collection. Say it’s 106 pictures.
  2. Do a search from your Classic Library for all pictures of John Smith. If you used a space in the name, add Keywords to the field chooser menus (via preferences) and select that line.
  3. Drag all of those results to From Lightroom–>John Smith.
  4. Flip to Cloud. They are now in that “John Smith” album. Or they will be when it synchs.
  5. Select all the pictures in the “John Smith” album.
  6. Hit Control-K (or Command-K) to bring up keywords and detected/recognized faces in the “John Smith” album.
  7. Now name any faces that are blanks – but should be John Smith.
  8. Now from the All Pictures view, search for John Smith and drag all his pictures to the John Smith album.
  9. In Classic, check your count. If it’s, say, 128 pictures now, that means that Cloud took your examples and found more John Smiths. And now they are ID’ed in Classic as well.
  10. Switch to Faces and confirm the 22 additional faces as John Smith. Now both systems have identical results.

Calibrating your efforts

For searches for random people, Cloud is still the best because it requires very little training. That said, for randos, you are using a tool that does not give you any permanent results. That’s probably ok for people who you don’t really care about. Or if you plan to be on Cloud forever.

For close friends and family, you may just run the “Crossing Over” exercise. I would do it in groups: do a bunch of albums on Cloud (say seven people), then do a bunch of naming on Classic (their collections), etc.

If you are really a neat-freak or compulsive, you could use the “Crossing back” step. But Sensei is reasonably good at what it does, so the marginal effect of adding Classic results to Sensei may not be much. If you have Excire, you might use it to find pictures that look like a picture of John Smith, which will give you a third means of concurrence.

The thing to remember about face recognition is that it is miraculous but also imperfect. It has to detect a face and then it has to identify a face. It doesn’t see how you see. Efficiency works at cross-purposes to accuracy.
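Detection and identification really are separate problems. Detection – is there a face at all? – is close to a commodity; a few lines of OpenCV will do it (the file name is a stand-in). Identification – whose face is it? – is the hard half that Sensei and friends layer on top.

    import cv2

    img = cv2.imread("group_shot.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"{len(faces)} face(s) detected -- none of them identified")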

But it is still vastly better than trying all of this on your own.

Face-off: Apple vs. Adobe face recognition

So here is a question: what’s the best way to catalogue and tag your pictures? Is it Lightroom Classic? Lightroom Cloud? Is it Apple Photos? Is it something else? Maybe it’s a lot of things. If you are a high-volume imaging-type person, you’ve probably wondered how to deal with things like tagging people. The most macabre application, of course, is the funeral collage. But say you have tens of thousands of pictures of family members and want to print a chronological photo album. Then what? Face recognition features in software may be your best bet. From a time standpoint, they may be your only choice. The problem is that different software has different competencies.

Apple Photos

Something like Photos is designed to group pictures, more or less automatically, around people, events, dates, or geography. Think of it as your iPhone application on steroids. Photos is not big on user control. It is not even engineered to do anything with folders except display them if that’s how photos were imported.

Face recognition in Photos is incremental and behind the scenes: it only finds faces when you are not actively using the program, and over time, it batches up groups of pictures which you confirm or deny as a named person in your Faces collection. To establish your Faces collection, you have to put names on faces in a frame where faces have been detected. This tends to mean that face recognition proceeds by which faces the user thinks are most important. As it should be.

Unlike Lightroom, Photos does not presume that detected faces are unique. It applies a threshold such that if it detects Faces A, B, C, and D, and they are close enough, they are treated as the same (unnamed) person. As such, naming one person can have the unintended effect of tagging a bunch of false matches. Either way, you can error correct by right-clicking the ones you see that are wrong.

My assessment of Photos is that it is not suitable as a face-recognition tool if you have hundreds of thousands of images, for several reasons:

  • Its catalogs are gigantic, even if you use “referenced” images. Photos loves it some big previews, no matter what you do. For scale, my referenced Photos library is 250gb where my entire Lightroom Classic library folder is 40gb (both excluding original image files – so Photos sucks up 6x the space).
  • The face recognition process appears to be mostly (if not completely) local, it runs in spare processor cycles, and in my experience, can cause kernel panics. Hand-in-hand with this is the fact that you can never actually turn Photos off. It’s part of MacOS.
  • There does not appear to be any indication that Photos actually writes metadata to files. So when you move to a new application, you’re starting from zero.
  • You can’t really use it in conjunction with a grown-up asset management system like Lightroom.

Photos is, however, good for generating hilariously off-base collections of photos (memories) with weird auto-generated titles (“Celebrate good times” with a crying baby as the cover photo). Or collections based on the date a bunch of pictures taken over decades were scanned (such as my 42,600 pictures apparently taken on December 12, 2008). I actually have no idea how these are generated. But they are funny.

I’m sure Photos is really good for those funeral collages, though.

Lightroom Classic (LrC)

Something like Lightroom Classic (LrC) is designed around manipulating, filtering, and outputting large numbers of pictures at once. This is, indeed, the killer app for handling large volumes of photos, and becomes a single interface for everything. It’s OK, but not great, for face recognition.

To put it mildly, LrC’s face-recognition is processor- and disk-intensive. The best way to use it is to use it on a few hundred photos at a time so that your identifications don’t swamp everything in your collection in a recalculation. LrC is good at showing you different faces all at once, as single images, so you can get cracking on identifying as many new “people” as you have patience for in one sitting.

The top level of the Faces module shows you (i) “Named People” and (ii) “Unnamed People.” You need to name at least one “Unnamed” person to start. After a while, the system will try to start putting names on “Unnamed” people. If you have a Named person named “John Doe” and are presented with an image that is “John Doe?” you can click the check box to confirm it and the X box to remove the suggestion (clicking again removes the detected face zone, such as if the system mistook a 1970s stereo for someone’s face).

Once you have done that, you can drill down on a “Named” person to see what pictures are “Confirmed” and what pictures are “Similar.” Again, to move from Similar to Confirmed requires an affirmative call. Here, you only get a check box. There is no “Not John Doe” option, which means that every possible match is shown, ranked in what LrC thinks is similarity. This is actually problematic because as you confirm more pictures, the number of “Similar” pictures rises exponentially. This puts a huge computational drag on things.

Wherever it happens, confirmation of a face’s identity is an affirmative process that is repeated for each picture (you can select several). This prevents false IDs based on grouping disparate real people into one “face,” but it also makes tagging excruciatingly repetitive. And slow. Highlighting faces to group-confirm or identify can have the “highlight” lagging far after your click. And God help you if you click six pictures and then try to type a name into one to rename all six. It works about half the time. The other half, it auto-completes with a totally unintended name. If you accidentally confirm the wrong face for a given name, you can highlight the errant thumbnail and hit Delete (this is not well documented).

Critically, the top level of the Faces module (where you see all named people as thumbnails) is the only place where the system puts a “most likely name” on unnamed people. Otherwise, looking at any particular “Named Person,” the same person – Bob – might show up as a similar for John Doe. And when you switch to Richard Roe, Bob will show up as a “similar” for him as well. This is part of the reason why people for whom you have 10 actual pictures always show up with 20,000 “similars.”

A big advantage of LrC over other solutions is that you can see and tag faces within specific folders, collections, or filmstrips. This lets you make context-sensitive decisions about who is who. For example, I am pretty sure that my kids did not exist in the 1970s. Or I might know that only 6 people are represented on a single roll of film that constitutes a folder in my library.

When a name is confirmed on a picture, that name is written as a keyword to the metadata in the library. It appears that XMP files (if you chose that option for RAW files) are written with the actual coordinates of faces in the picture, which allows some recovery if you have to rebuild a library from scratch. The important thing is that a picture is keyworded with the right names. Face zones are nice but not quite as critical in the long run because in reality, you only really care whether a picture contains John Doe or Richard Roe, not which one is which in a picture of both.
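You can see those coordinates for yourself: they are written as MWG region metadata, which exiftool can pull back out (again wrapped in Python; the file name is a stand-in):

    import subprocess

    out = subprocess.run(
        ["exiftool", "-RegionName", "-RegionType",
         "-RegionAreaX", "-RegionAreaY", "-RegionAreaW", "-RegionAreaH",
         "scan-0042.tif"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout)
    # Region Name : John Doe
    # Region Type : Face
    # ...followed by the normalized x/y/w/h of each face rectangle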

Always save your metadata to files if working with TIFFs/JPEGs/scans (Command+S) or “always write XMP” with RAW camera files. This helps keep your options open if you want to get divorced from Adobe. Or if your Lightroom library goes wheels-up and you have to rebuild from zero. There is no explanation for why this program just doesn’t write an XMP for every file. It would make things easier.

Lightroom [CC or “cloud”]

What a hot mess. The only thing that really works about Lr CC is face recognition. The rest of it is a flashy, underpowered toy that despite being “cloud” based can still consume massive amounts of hard drive space and processing power. If your photos are in the Adobe cloud, or synched from LrC, the program works with smart previews.

Adobe’s Sensei technology is a frighteningly good face-recognition system. In the People view (mutually exclusive with the Folders view), it takes all of your photos and groups them according to what it thinks is the same face (like Apple Photos). Put a name on that face, and it might ask you if this other stack over here is the same face. It is extremely fast (because it runs in the cloud). Sensei can also identify objects, and to some degree, places in photos. Naturally, the most important people in your life have the highest counts, and you can sort unnamed faces by count and work your way down. Things break down when 400 people have 15 pictures apiece, though…

The system, though, has some amazing limitations that are pretty clearly engineered in by a company that is trying to move everyone to its walled garden. Two of these four bear directly on the issue of why a hard drive – and keeping your own metadata local – is your ladder out of that walled garden.

First, metadata transfers to Lr are one-way. The program can absorb keywords applied in LrC, but not recognized faces/zones, and nothing you input in Lr can ever rain down on LrC. There is no programming-related reason that prevents metadata from flowing the other way, aside from intentionally engineering this out of being possible — so that you are eventually forced to store all your stuff in Adobe’s per-month-subscription storage space. Because paying a monthly fee to use programs that aren’t really being updated – like LrC – was not bad enough.

Second, you cannot force face recognition on arbitrary subsets of your library, at least not very efficiently or intuitively. If you came at this program assuming that it would be like LrC, you would conclude that there is no way to do this. Instead, you have to select a group of pictures and hit Command/Control-K (for “keyword” – how intuitive…) to see the faces present in the picture or group. Lr then shows you the single picture with the face boxes – and the collection of faces in the picture on the right panel. This is great – but why is it so hard to find? You also get the impression that when you do this, the face boxes are generated on the fly. But the critical defect here is that the “named faces” that are shown as thumbnails are even smaller than the other face thumbnails in Lr.

Third, when asked to “consolidate” two faces, there is no way to flip between the two collections. This is an oversight – you are not asked to name a person based on one photo, but for some reason you are asked to make a consolidation decision that could have catastrophic consequences — based on a single fuzzy thumbnail. If in doubt, sit it out.

Finally, you can’t push face recognition data back down to LrC. So if you use LrC, you basically end up with completely separate face-recognition data sets based on the same photos. This is a big-time fail.

Upshot

Well, in terms of applications you can access for a Mac right now, the options are ok – but not great. Stay tuned for Part 2, in which we look at a way to leverage LrC and LR CC against each other to speed things up.

Configuring an iMac Retina 5K for photo editing: tips


If you have been clinging to an older Mac Pro and are looking at potential upgrades, here are some notes on the iMac Retina 5K that might help you understand what to expect and what to order.

Processors. If you have been sitting on an older Mac Pro, you will simply want to go for the 4-core i7 at its maximum speed. The speed of photo editing software is much more dependent on simple clock speed than multithreading, and for this reason, the 4-core i7 iMac is probably going to be a better deal than the 2013+ Mac Pro. Let’s cut to the chase: the clock speed helps with Lightroom and Photoshop, and Adobe’s fear of multithreading means that you will want to go for the highest gigahertz figure. In addition, you cannot upgrade the processor later, so it is better to spend the extra $300 now.

Memory. The best configuration out of the box is 16Gb. This uses two slots and gives you the dual-channel speed you are paying for. Then buy two more 8Gb modules for $150. Then you are done forever. Hint: a child’s suction rattle is an excellent tool for removing the memory hatch, which is held in place by many spring clips.

Graphics unit. Don’t screw around on this part. A 5K screen requires a lot of capacity. Get 4 Gb of video RAM. This is another feature that cannot be upgraded later.

Screen: the most compelling feature about the iMac for photo editing is the 27″, 5,120-pixel-wide screen. It is like the Retina screen on an iPhone – just radically larger. The glossy finish helps blacken blacks (though it does sometimes show reflections). There are two effects of using a screen with this resolution. First, image files viewed 1:1 have an amazing clarity that makes it look like you are looking at the scene live – or looking at a good print. Second, you will need to look at many files at 2:1 to see what is actually going on in terms of sharpness, noise, etc. The screen on the 5K cannot be used as a secondary display for another Macintosh (and for good reason – they just don’t have the muscle to drive it). The system scales programs that are not optimized for 5K and manages to make everything work quite well.

Storage: unlike your Mac Pro, which could stash 12Tb on four internal drives (or a startup drive plus 3 drives making up a RAID 5), the iMac basically has two slots for storage. One takes PCIe flash memory; the other takes a 3.5″ desktop hard drive (or 2.5″ SSD with adapter). If you order the Fusion drive, you get a 128Gb card in the first and a 1-3Tb drive in the second. The two drives are linked as one logical volume via MacOS. If you order straight flash memory, you get a flash drive that is 2x-4x the size (512Gb or 1Tb) and nothing in the HD slot (in fact, you don’t even get the connection cables). The problem with both of these arrangements is that PCIe memory wears out faster than hard drives, and the Fusion drive presents two independent paths to drive failure. Further, you are not really supposed to store your documents on the same flash/SSD drive as the startup disk and applications. All of this points to some kind of external storage solution. Consider using three drives:

  • Startup drive – this is the one in the machine. This should be an SSD, no question. Startup is 10 seconds; applications load and run immediately. This should contain a skeletal admin account so that you can start up the machine without any external drives if something goes awry.
  • User directories (really, documents). For reasons related to SSD wear and tear and general contention for resources, your user files should be an external drive – and preferably a bus-powered SSD. The bus-powered part is so that it can piggyback on a UPS serving the computer; the SSD part is so that it runs really, really fast (this makes a big difference with Lightroom’s Camera Raw cache and Mac mail). For backups, plan to clone the principal, mostly-static parts of your user account to the startup drive (or even documents other than space-intensive photo, video, and media files). The Library is the big thing you need, and you can exclude from the cloning files like web caches that change. Your main user directory should also be backed up via NAS to another device. The solid choice for this is the LaCie Rugged Thunderbolt SSD. If you look on Ebay, you can grab the 512Gb SSD unit for about $300, which is a steal, since it hits 400Mb/second through its Thunderbolt interface. You can get a Pegasus J2, but they are not nearly as fast.
  • Mass Storage Option 1: main storage on RAID 5. If you are doing a ton of photo work, you are going to need some large, fast drives. You will also want them to be reliable. The conventional solution is to use a RAID 5 system, which stripes data across a number of drives and records sufficient parity information to reconstruct a missing drive (parity is just XOR – see the toy sketch after this list). Although this is more reliable, it is no substitute for a backup. When a drive fails, it can take many hours (or even days) for the missing data to be reconstructed. A second drive failure in the meantime generally means that you’re toast. And the total failure of the file system will wipe out everything on there. Consider instead the two-drive LaCie 2Big Thunderbolt 2 in the 6Tb size – in the striped mode, it runs in the 300Mb/second range for reads and writes. There are some even faster hard-drive-based units, like the Pegasus and the LaCie 5Big Thunderbolt 2, but these are much larger units that are designed for real-time video editing. They are also 4- and 5-drive budget-breakers, at $1,000 and up.
  • Mass Storage Option 2: main storage on RAID 0; backup on a NAS. Currently, Thunderbolt runs much faster than the fastest hard drive, so RAID 0 (pure striping) solutions are generally the best way to take advantage of some of the speed. The difficulty is that in a simple striped set, the failure of one drive takes everything down – and there is no way to upgrade capacity. The failure mode can be addressed by keeping cloned (or Time Machine) backups. In terms of capacity, you have to offload everything and then put it back on – but if you do that, you will already have fresh copies of your data on the off-loaded drives and backup of the machine on NAS. For the backup, I went with the LaCie 5Big NAS Pro diskless, which like the Synology and Drobo competitors has an intelligent RAID selector (SimplyRaid, a rebranded Seagate system) that allows you to incrementally expand the system by replacing one drive at a time. This is a big deal, since to expand a straight RAID 5 system, you have to offload all the data and then reload it onto the new array. This is why you should not buy a unit like the 5Big Network 2 – which in addition to being much slower, does not have the same expansion possibilities. The 5Big NAS Pro can also crank 60-90Mb/second on a gigabit ethernet line, which is an important thing to consider when you are running big backups over a LAN.
  • Incidentals – for dead storage or using up spare desktop HDs, check out the Sabio DM4LH Smart Raid 4-bay USB 2.0/eSATA enclosures (RAID 0, 1, 0+1, 5, JBOD, Span). If you are sticking your Mac Pro in storage, you can yank out its 3.5″ drives, drop them in these well-designed enclosures, and access them in JBOD mode. While the discontinued USB 2.0 version of this unit is not blazingly fast for massive transfers, you can get it for about $50 on Ebay and Amazon. Or you could plug its much faster eSATA connector into something like an Akitio Thunderdock. For more regular access, the USB 3.0 or Thunderbolt versions will be better. For single drives, MacAlly makes an enclosure that looks like a mini Mac Pro (as if it’s a canopic jar…) and sports USB 3.0, Firewire 800, and eSATA; it runs about $100. The version that has USB 3.0 and eSATA only runs $50.
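The parity trick behind RAID 5 is worth seeing once, because it explains both the magic and the rebuild time: parity is just XOR, and recovering a dead drive means XOR-ing everything that survived, block by block, across the whole array. A toy version in Python:

    # Three data "drives" and one parity value, one block each.
    d1, d2, d3 = 0b1010, 0b0110, 0b1100
    parity = d1 ^ d2 ^ d3

    # Drive 2 dies. XOR of everything left recovers its contents --
    # but a real rebuild does this for every block on a multi-Tb
    # array, which is why it takes hours or days.
    recovered = d1 ^ d3 ^ parity
    assert recovered == d2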

Expansion: the typical silver Mac Pro has vast expandability, typically with (5) built-in USB 2.0 ports, (2) Firewire 800 ports, and (1) Firewire 400 port. It also has 3 open PCIe slots each of which can accommodate a card with up to four additional USB or Firewire ports or an eSATA bus. When you consider that the box itself holds 4 hard drives and 2 optical drives, the number of storage devices that can be connected to a Mac Pro without a hub is simply staggering. Some things to keep in mind:

  • Thunderbolt is far, far faster than anything hooked up to an old Mac Pro. Consider consolidating on larger devices with more storage. Yes, you could stick 15 Firewire drives on a single bus, but with drive sizes and RAID devices of today, you don’t need to.
  • Thunderbolt has a smaller device total limit than Firewire, and any device connected to the chain, however adapted (USB/eSATA/FW800) counts toward the total.
  • Some devices only fit the end of a chain (or chain plus adapter) – such as small external drives and some scanners (like the Nikon LS-9000).
  • You will eventually convert everything to SSDs and more modern devices. You might do this earlier than you anticipate.
  • Not all Thunderbolt interfaces are made equally. Some that have dual Thunderbolt and USB 3.0 connections run much closer to USB 3.0 speed.

The USB 3.0 ports will be exhausted faster than you think – an external DVD burner, a CompactFlash card reader, your iPhone cord, and the connection from your uninterruptible power supply (UPS) will suck up all four ports in a heartbeat. One bonus of the iMac is an SDXC card slot in the back of the screen/main unit, and it is plugged right into the PCIe bus – making transfers to the computer much faster than any USB 3.0 card reader. That said, its location is extremely clumsy.

If you need more storage and you’re willing to live with lower speeds, you can always plug USB drives into your NAS or your wireless router.

Keyboard and mice: the Apple wireless keyboard is compact, cord-free (important given the USB port issue above), far more reliable than the old, full-size Apple Bluetooth unit, and very hard to adjust to if your right little finger is used to finding the right edge of an Apple Extended Keyboard. Consider whether you want to keep your old keyboard. The Magic Mouse is brilliant for photo editing: the gesture-based scrolling makes it easier to drag through huge Lightroom libraries, and the square edges make it easier to feel where to right-click. Beyond that, the gestures do not help with Lightroom 5 or Photoshop CS6, which do not support them. None of Apple’s current input devices will displace your Wacom.

Networking: for moving big pictures from networked storage devices, use the Ethernet port. Wireless is nice, but experience now demonstrates that not even AC1900 runs consistently as fast as gigabit Ethernet. One day, maybe. The raw connection speed is one issue – but the bigger one is that, these days, your computer is not the only thing competing for bandwidth on the router. Ethernet also makes the detection, connection, and configuration of printers with their own IP addresses much, much better.

Must-have software: aside from your usual image editing programs, here are three.

  1. The current version of Carbon Copy Cloner, which can be an important backup tool. If you have huge volumes of photos and use a nondestructive editor, Time Machine is dead wrong as a backup method. The problem lies in a few things: (1) Time Machine is really designed for reasonable quantities of files that change from day to day (the largest thing you should trust to it is your Lightroom catalog); (2) a Time Machine backup that contains terabytes of photographs will take days to build – and your main drive might fail in the meantime; and (3) Time Machine backups get screwy every so often and have to be redone from zero, which re-runs the risk in (2). With Carbon Copy Cloner, you simply clone your image file directories to another drive, either as directories and files or as sparse disk images (a minimal sketch of what such a mirror amounts to appears after this list). And if and when disaster strikes, you don’t have to attempt a selective restore from a Time Machine disk – you simply copy the files onto a new main drive and go on your merry way (actually, you could simply point your Lightroom library at the clone and keep working while you set up a new main drive).
  2. Mac Product Key Finder Pro. Migration Assistant notwithstanding, many programs need to be re-initialized, re-installed, or re-registered when they are moved from one machine to another. It is also likely that you will not want to track down every single box, sticker, and serial number for your software (an especially acute problem when your most recent Adobe product was an upgrade and you can’t readily find the box for the original). This program scans your computer and shows you all registration codes and serial numbers.
  3. Contacts Cleaner. This is not imaging-related, but as you are getting your computing life in order in other ways, this will help rationalize, de-duplicate, and generally improve the situation with your address book as stored on your iPhone and computer.

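Here is the promised sketch – a minimal Python one-way mirror that copies new and changed files. The paths are hypothetical, and a real tool like Carbon Copy Cloner does far more (deletions, snapshots, bootable clones); this shows only the core idea:

```python
#!/usr/bin/env python3
"""Minimal one-way mirror of an image directory – a sketch, not a CCC replacement."""
import shutil
from pathlib import Path

SRC = Path("/Volumes/Photos/2015")         # hypothetical source directory
DST = Path("/Volumes/Backup/Photos/2015")  # hypothetical clone destination

def mirror(src: Path, dst: Path) -> None:
    for item in src.rglob("*"):
        target = dst / item.relative_to(src)
        if item.is_dir():
            target.mkdir(parents=True, exist_ok=True)
        elif (not target.exists()
              or item.stat().st_mtime > target.stat().st_mtime):
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(item, target)  # copies file data and timestamps

if __name__ == "__main__":
    mirror(SRC, DST)
    print("Clone refreshed:", DST)
```

The point of the clone approach is visible right in the code: the backup is nothing but your files, readable by anything, with no proprietary restore step in the way.
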
Migration advice: one advantage of using a dual USB/Thunderbolt device for your main storage is that you can consolidate all of your photos on that device. Your various SATA and Firewire drives’ data flows through your Mac Pro into the new box, which you then unplug from USB and plug into the new machine via Thunderbolt. Use Lightroom to effect the consolidation, and when you boot up your new machine, all you have to do (at most) is point Lightroom at the drive’s new mount point.

As for the rest, expect some issues with Apple’s Migration Assistant. As noted above, losing product registrations is the big one. But also watch your permissions. The major reason to use Migration Assistant for your user directories is that it copies the unique user identifiers to the new machine; it is a bit trickier just to establish a user account on the new machine using your old user name. It is also a very, very slow program.

In terms of how you move the data, it seems best to use the $29 Thunderbolt to Firewire 800 adapter, with your old machine in target disk mode. Note that you may not be able to mount all drives in target mode, so think hard about other ways to migrate the big data collections on drives 3 and 4. If anyone tells you that gigabit Ethernet is faster for these transfers, it is highly likely that he has not looked at the actual speeds each protocol delivers: Ethernet is nominally faster, but network file sharing imposes protocol overhead that target mode’s direct disk access does not. Firewire 800 on its worst day is better than gigE on its better days.

The bottom line: let’s not be indirect here – if you are replacing a pre-2013 Mac Pro, you can reasonably expect that making a meaningful improvement on its capabilities will hit around $5K total: about $3,200 for the machine; $300 for a secondary SSD; $150 for the extra RAM; $600 for primary storage; and $500 (plus drives) for a NAS. It is still far less than buying a new Mac Pro with similar equipment, but wow. Once the credit card bills are paid, though, the Retina 5K is a great machine.

Fix it now or fix it later?

For every photographic problem that might be addressed at the time of shooting, there always seems to be someone’s glib response that you “can fix it in post.” It is indeed possible to do many things with Lightroom, Photoshop, or GIMP – but is that the best or easiest way to do it? Let’s examine ten common correction operations, how they play out when shooting or in post, and which seems to be the better (or at least most efficient) option.

1. Perspective correction and leveling. Using a wideangle lens (<35mm) at anything but a dead-level position causes converging (or diverging) verticals. In the dark days before Photoshop, converging verticals were mitigated with PC lenses that shifted the lens relative to the film. This shifted the horizon and the effective viewpoint of the camera (10mm of shift compared to a 24mm frame height can move the horizon line more than 40% up or down – see the arithmetic below). Older shift lenses had larger image circles to accommodate this, but they also show chromatic aberration on digital sensors – and they required inconvenient stopped-down operation for viewing and then metering. Newer lenses have electronically controlled apertures that compensate for some of this. Correcting converging verticals in post-processing avoids the optical compromises and difficult metering, though the “warp” to the frame (which goes from rectangular to trapezoidal) cuts down the frame size, changes the effective aspect ratio of the picture, and compromises fine details if you’re starting with a low-res file. But the bigger problem is that most programs are not really capable of correcting perspective issues without distorting the vertical/horizontal proportions of the picture – generally making things look too tall. DxO Viewpoint has a ratio corrector, but it still requires visual estimation of a viewing angle that you never saw in real life. In terms of misery level, the easiest option is to get a wider lens, get as close as you can while keeping the camera level, and simply crop as necessary. Time of shooting.

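The 40% figure is just the ratio of shift to frame height:

\[
\frac{\text{shift}}{\text{frame height}} = \frac{10\ \text{mm}}{24\ \text{mm}} \approx 0.42
\]

so 10mm of shift relocates the horizon by roughly 42% of the frame height.
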
2. Vignetting control. Older lenses, especially symmetrical ones, often exhibit darker corners on digital sensors (they did on slide film as well, but on the negative film most people used, this was less visible). Vignetting is a limitation imposed by physics. It also occurs with lenses designed for digital, but in many cases the camera can automatically compensate for a known lens when generating a JPG. If you shoot RAW, your only remedy at the time of shooting is a center filter. These very expensive filters impose big losses in film speed (typically requiring 1.5x the exposure) and work best at smaller apertures. Even where there is no Lightroom profile for your lens, solutions such as CornerFix and Adobe Flat Field allow you to shoot control pictures for repeatable corrections in the future – and to shoot with no exposure increase. Post.

3. Fill light. There are those who profess never to use flash and only whatever light is available. No one knows what they do with pictures that exhibit dark eye sockets, awkward shadows, and dominant light sources that point the wrong way. You can fix some of this in post, but simply raising the exposure in parts of the image makes it difficult to maintain a natural-looking result. The major solutions here are to compose to face the dominant light source, use a reflector, or (heaven forbid) use fill flash. Time of shooting.

4. Light balancing (cooling). Low incandescent light presents unique challenges for digital sensors, almost all of which have noisy blue channels. Room light is typically pretty dim, and the ISO setting on the camera typically ends up being pretty high, which means more noise across all channels. Using white balance to compensate for reddish incandescent light exacerbates the problem in the blue channel by amplifying it even more. If you have a steady enough hand to do it, using an 80A (KB-15) filter drops the red and green channels so that the noisy blue channel is not unduly amplified. You lose about 2/3 of the light doing this, but it cuts down on chroma noise. Time of shooting.

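Taking the two-thirds light loss at face value, the exposure cost works out to:

\[
\Delta EV = \log_2\!\left(\frac{1}{1 - 2/3}\right) = \log_2 3 \approx 1.6\ \text{stops}
\]

which is why this trick demands a steady hand (or a fast lens) in rooms that are already dim.
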
5. Light balancing (warming). The red channel does not suffer from the noise issues that the blue one does, so it is fine to amplify it later. This in itself is not too compelling, but consider how, at the time of shooting, warmer always seems better – and yet in editing, things often look too warm. So consider limiting your filter use to an 81A (or KR3) and do any additional warming later. Post.

6. Red enhancement. The didymium red-enhancing filter has largely gone out of production (possibly due to low demand and possibly due to RoHS considerations). Its effect, which is to suppress “every other color” in the red-yellow range and then everything else past it, is extremely difficult to reproduce in post, if only because the peaks and valleys, occurring every 25nm or so, do not correspond with the color adjustments available in Lightroom (many of them actually fall between colors). Although it might ultimately be possible to reverse-engineer the effect, it would be a pain… Time of shooting.

7. Graduated neutral-density filtration. In color work, at the time of shooting, your only real way to make the sky darker without a polarizer is a graduated neutral-density filter. The best versions are rectangular and allow you to rotate and move the horizon line. That said, they are much more unwieldy and flare-prone than circular grad filters, which are compact and easy to use but completely inflexible in horizon-line (gradient midpoint) placement. And with either, the hardness of the gradient needed is dictated by the lens in use (oddly, only the rectangular versions offer a choice of hardness); longer lenses require a harder cut. Provided that the dynamic range of your scene permits it, the better solution is gradient filters in Lightroom. These are variable for center position, rotation angle, and steepness of the gradient, and they can be combined with other adjustments (the sketch below shows how simple the underlying operation is). The quality loss is minimal for simply darkening part of a scene, which is usually a relatively detail-free area like the sky. Post.

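Here is that sketch – a minimal numpy version of a linear graduated darkening applied to a rendered image. The file name, the one-stop strength, and the 30–60% transition band are illustrative assumptions; Lightroom’s gradient tool does the equivalent nondestructively on the raw data:

```python
"""Apply a linear graduated ND (about one stop) to the top of an image – a sketch."""
import numpy as np
from PIL import Image

img = np.asarray(Image.open("scene.jpg"), dtype=np.float32) / 255.0  # hypothetical file

h, w, _ = img.shape
top, bottom = int(0.30 * h), int(0.60 * h)  # assumed transition band
strength = 0.5                              # 0.5x light = one stop of darkening

# Per-row multiplier: full effect above the band, no effect below, linear ramp between.
ramp = np.clip((np.arange(h) - top) / (bottom - top), 0.0, 1.0)
gain = strength + (1.0 - strength) * ramp
out = img * gain[:, None, None]             # broadcast over columns and channels

Image.fromarray((out * 255).astype(np.uint8)).save("scene_grad.jpg")
```
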
8. Specialty filtration. Softeners, diffusers, cross-screens, diffractors, and the like are filters for which there is no good Photoshop equivalent (assuming, of course, you are into the looks these filters create). Time of shooting.

9. Black and white tone adjustment. If you are into the effects of colored contrast filters on black-and-white film, you cannot very easily bolt such a filter onto a camera with a Bayer array, because some filters (particularly red) play havoc with demosaicking. The Channel Mixer function in Photoshop (and Lightroom) lets you selectively raise or drop colors (at least within -20/+20) without too deleterious an effect on the image – at bottom it is just a weighted sum of the channels, as sketched below. The sole exception is the Leica M Monochrom, which, having no color data to work with, must be filtered at the time of shooting. Post.

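The promised sketch: a channel-mixer black-and-white conversion that mimics a red-filter look by overweighting red and underweighting blue. The weights are illustrative, not anyone’s canonical recipe:

```python
"""Black-and-white conversion with channel-mixer weights – a red-filter-ish sketch."""
import numpy as np
from PIL import Image

rgb = np.asarray(Image.open("color.jpg"), dtype=np.float32)  # hypothetical file

# Illustrative weights; they sum to 1.0 so overall brightness is preserved.
w_red, w_green, w_blue = 0.75, 0.35, -0.10

mono = w_red * rgb[..., 0] + w_green * rgb[..., 1] + w_blue * rgb[..., 2]
mono = np.clip(mono, 0, 255).astype(np.uint8)

Image.fromarray(mono, mode="L").save("mono_redfilter.jpg")
```
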
10. Correcting mixed lighting. Balanced fill flash falls apart any time that a flash is being balanced against something with a different color temperature. The most common problem is in room light, where at base ISO, flash essentially becomes the only light source, making the subject bright but the rest of the frame relatively dark. Raising the ISO tends to even out brightness, but it leads to pictures where the background is yellowish and the flash-lit subject looks normal. Although this can be corrected with a lot of work later, the easiest thing to do is to gel the flash with an 85A filter to make its light the same color as the room light. Time of shooting.

None of this is to say that there is anything wrong with post-processing digital images, and in fact, some things can only really be done digitally (fine-tuned and synchronized white balance, distortion removal, sharpening, etc.). But it is to say that a little more care in shooting can cut down on the time and frustration involved in post-processing.

# # # # #

Zeiss C Biogon T* 4,5/21 ZM – and removing the reds

[image: 2145]

This lens is perfectly usable on the M240. It doesn’t even take that much work.

The Leica M Typ 240 presents some unpleasant choices in terms of 21mm lenses: you can spend $3,000 on a Super-Elmar 21mm f/3.4 and get the sharpest 21mm ever made for Leica – but suffer complex distortion and red edges. The 21-35mm M-Hexanon Dual (which is not a lot cheaper these days than a used Super-Elmar) gives you two focal lengths, awesome sharpness, and no color shifts – but also a touch of geometric distortion. Everything else presents varying combinations of bulk, color vignetting, low resolution, and general misery. Here at the Machine Planet, we have a certain inbuilt arrogance about trying things that conventional wisdom says should not work. The 21mm f/4.5 Biogon is a case in point. And yes, we made it work with a couple of off-the-shelf tools and less than a couple of hours of trial and error learning the ropes.

The good. If this were the film era, the 21mm f/4.5 would be the champ. It is small (barely bigger than a 40mm M-Rokkor), sharp (testing in some reviews to 3000 lines per picture height), well-made, and has about as close to zero distortion as any wideangle lens ever made (for example, it’s lower than the 35/1.4 Summilux ASPH). It also takes normal-depth 46mm filters common to the rest of your lens collection. Here is basically everything you need to know about its stunning performance:

[image: 2145 – performance chart]

The bad. In terms of conventional performance, the lens is relatively slow in maximum aperture and has the usual light falloff from the center, often exaggerated by digital sensors. You can see from the chart above that the falloff does not get much better as you stop down.

The ugly. The worst thing is that the lens has color shift at the edges. It’s quite severe at first glance. These are the particulars:

  • The red edge extends a couple of mm into the frame, from top to bottom: green on the left and red on the right. In the days of the Kodak DCS Pro 14n, this was called the “Italian Flag” effect.
  • The intensity and intrusion of the edges is dependent on selected lens aperture and focused distance. Closer focus and wider apertures mean that the edges are far less obtrusive.
  • There is an overlay of standard brightness vignetting that is characteristic of any symmetrical 21mm lens.

The variable nature of the color shading – why has no one else noticed this? – may well be the cause of claims that the problem “can’t” be corrected or that conventional tools result in under- or over-correction. Once you understand this, it’s easy to solve the problem. Never declare defeat prematurely!

Fixing things up. All solutions to this problem involve some kind of reference image, which is a test shot you make of a uniform white field. You can shoot a white wall, shoot a ceiling with a flash, or shoot through a diffuser. If you shoot through a diffuser, you need one that lacks texture (at small apertures, the ZM 21 can pick up the texture of the paper, even with it pressed right up against the lens). Your resulting references will look roughly like this:

[image: L1005380 – white-field reference]

One very good diffusion material is Yupo polypropylene watercolor “paper,” which, being plastic, has no grain. You can find this in most art stores.

  1. Layer Masks. Some, like Lloyd Chambers, advocate the use of Photoshop adjustment layers and masks to cancel out color and brightness shading. Although this demonstrably works, its shortcoming is that it needs a separate template and action for every permutation of shading (you can, most of the time, get away with four settings: f/4.5 and f/8, each at 1m and ∞). It also presents a clumsy workflow that involves leaving Lightroom, going to Photoshop, and coming back to Lightroom (and at that point, with a TIFF and not a DNG). For your most OCD applications this is workable; it’s just not the most batch-friendly or space-efficient solution.
  2. Cornerfix. Long the go-to solution for Leica M8 and M9 users, Cornerfix was originally designed to address the green shift that occurred when you put a UV/IR filter on an M8 – a shift that was generally uniform and radial. Cornerfix takes the reference image and computes a mathematical mask from it. It works with DNGs and exports DNGs (suffixed “_cf”), and it has a tremendous range of settings for addressing color shift, brightness vignetting, and the artifacts that result from correction. Cornerfix also shows you the effect of the selected mask on the current image, and it supports batch processing. Its shortcoming, though, is that because it does correction via equation, there are some kinds of color shading it struggles with.
  3. Adobe Flat Field plug-in. The strangely named Flat Field plug-in is available on the Adobe Labs site. This plugin has virtually no controls and seems to be an automated variant of the layer-mask technique (the sketch after this list shows the core operation all of these tools share). You select the image you want to correct, activate the plug-in, and then give it the reference image. The only controls are “Color” and “Color and falloff,” which let you leave in brightness vignetting if you want. The plugin is slow and kicks out another DNG, stacked with the first one and suffixed “_ff.” It does work very well – much better with the 21mm than Cornerfix – and it does not require you to exit Lightroom, but it is a black-box solution that requires you to select your reference image carefully (because you can over- or under-correct by choosing the wrong one).

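Here is the promised sketch of the core operation all three approaches share: divide the image by a blurred, normalized version of the white-field reference. The file names are hypothetical, the reference is assumed to be the same size as the image, and the real plug-ins work per channel in the raw domain, before demosaicking – this is only the idea:

```python
"""Core of a flat-field correction: image / normalized reference – a sketch."""
import numpy as np
from PIL import Image, ImageFilter

def flat_field(image_path: str, reference_path: str, out_path: str) -> None:
    img = np.asarray(Image.open(image_path), dtype=np.float32)

    # Blur the white-field reference so dust and paper texture drop out.
    ref = Image.open(reference_path).filter(ImageFilter.GaussianBlur(radius=25))
    ref = np.asarray(ref, dtype=np.float32)

    # Normalize per channel so the frame center is unity gain (no net exposure change).
    h, w = ref.shape[:2]
    center = ref[h // 2 - 16 : h // 2 + 16, w // 2 - 16 : w // 2 + 16].mean(axis=(0, 1))
    gain = center / np.maximum(ref, 1.0)  # gain > 1 toward the shaded corners

    out = np.clip(img * gain, 0, 255).astype(np.uint8)
    Image.fromarray(out).save(out_path)

# Hypothetical usage: the reference was shot at the same aperture and distance.
flat_field("L1005374.jpg", "reference_f8_inf.jpg", "L1005374_ff.jpg")
```
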
The winner: Flat Field. As the only solution that (a) works and (b) does not require shifting from program to program, Adobe’s free Flat Field plug-in for Lightroom is the best solution. Here is precisely how to use it:

  1. Shoot your profiles. Take your sheet of Yupo paper and hold it right in front of the lens (the easiest way is to sandwich the paper between your lens and the glass of a window). Pick your reference distances. We used 1m, 2m, 5m, and ∞, but you could also pick your favorite hyperfocal distance. Shoot a test at one f/stop, all distances; then switch to the next f/stop, all distances.
  2. When you are done, import the files into Lightroom. Immediately rename them with a designator that shows lens, aperture, and distance. This will result in a name like “2145-80-inf” for the 21/4.5 at f/8 and infinity (a trivial naming helper is sketched after these steps). Export all of these as original DNGs to a folder that is easy to find (think “profiles” in your “Documents” folder).
  3. Install the Flat Field plugin.
  4. When you want to do a correction, select the picture(s) you want to fix. All of the ones you do together should have the same shooting aperture and distance (the M240 records a computed aperture value, and you should be able to tell by the composition where the lens was focused).
  5. Go to File –> Plug-in Extras –> DNG Flat Field –> Apply External Correction. This will pop up a Finder or Explorer window in which to select the profile from step 2 (Lightroom does not let you choose it from the catalog).
  6. Choose “color and falloff.” Although vignetting may seem cool in theory, symmetrical lenses need all the help they can get.
  7. Run it.
  8. You will then get a new file adjacent to the original with the “_ff” suffix. You can now manipulate this as if it were the original.
  9. If you get too much correction, try a reference photo shot at a closer distance. If you get under-correction, go for a farther distance.

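If you shoot a lot of reference frames, a trivial helper keeps the step-2 naming scheme consistent. The “2145-80-inf” pattern comes from the steps above; the function and the loop are hypothetical conveniences:

```python
"""Build profile names like "2145-80-inf" (lens-aperture-distance) – a sketch."""

def profile_name(lens: str, aperture: float, distance: str) -> str:
    # f/8 -> "80", f/4.5 -> "45": format to one decimal place, then drop the point.
    ap = f"{aperture:.1f}".replace(".", "")
    return f"{lens}-{ap}-{distance}"

# Hypothetical usage for the ZM 21/4.5 profile set:
for ap in (4.5, 8):
    for dist in ("1m", "2m", "5m", "inf"):
        print(profile_name("2145", ap, dist) + ".dng")
```
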
Upshot. It is tragic that so many people started unloading these lenses based on a red-shift issue that is so simple to correct with modern tools. The ZM 21/4.5 is a fantastic optic that can now make the jump to modern digital Ms. And there is no reason why the same techniques could not be used to adapt other wideangle lenses to Ms or wideangle M lenses to things like the Sony A7 series.

[images: L1005374 (uncorrected) and L1005374_ff (corrected); L1005358 / L1005358_ff; L1005340 / L1005340_ff – before-and-after examples]