Who says Lego bricks are only good for causing foot and back injuries?
Welcome to the world’s crudest 3D camera: four Duplo bricks, two DxO One cameras, and about half a meter of packing tape. With a stereo separation of about 120mm, forget about taking pictures of anything closer than 15 feet. But oh, the scary places you will go.
Surprisingly, with the OLED-frame-assist function on, the cameras don’t have much trouble focusing on exactly the same subject, which solves one weird technical hangup.
This is just a quick note on a technical problem that plagues digital Leica cameras when used with older Nikkors: back focus. It is gratifying to know that Leica has finally recognized that many of its lenses don’t work so well on digital Ms due to “focus errors” that allegedly compound over the years. The real reason is probably that film planes are actually (and unintentionally) curved, so a lens that makes the grade at the center of a film plane back-focuses elsewhere.
I was struggling a bit with a 10.5cm f/2.5 Nikkor, which though absolutely lovely aesthetically is one of the worst-engineered Leica-mount lenses ever from a mechanical standpoint. And it back-focused. It back-focused more with some Leica M adapters than others, but still.
Strike one with this lens is that the aperture unit rotates along with the entire optical unit. This means that if you adjust the collimation washer (for reasons I don’t fully understand, it’s always 0.05mm needed with any lens – just about the same thickness as Scotch tape), you then have to reset the aperture ring to read properly. I’m also not 100% sure that infinity optical focus was really the problem.
Strike two is that the amount of front cell movement needed to compensate for back focus is absurdly great. So here, you’re messing around with focal length, but this is the same way the MS-Optical Sonnetar gets calibrated…
Strike three is that the RF cam is not adjustable at all, with the tab pushed by a plunger running on a wheel that fits in a spiral track in the helicoid. Guess how this tab was adjusted for infinity at the factory? With a file. It makes sense, in a way. Calibrate the fixed infinity point on the focal plane by shimming the optical unit, calibrate focus at infinity by grinding the RF tab, and fix close focus by shimming the front cell. But it utterly sucks when you find out, 60 years later, that the tolerances that looked good on film with a Leica IIIc look like holy hell on digital.
So when you are dealing with focus errors, you have to imagine that the standard is a 51.6mm lens. At that focal length, if the RF matches the film-plane focus, the focus will always be correct, even if the infinity stop of the lens is beyond “infinity” on the scale.
For a telephoto lens, the RF cam still pretends it is moving like a 51.6mm lens, but the actual optical unit moves much further. Hence, in a lot of cases, you can simply use a thinner LTM adapter (I think I’ve written about this before… somewhere). Most cheapo ones are thinner than the 1.0mm they are supposed to be.
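To put rough numbers on the cam-versus-optic mismatch (my own thin-lens arithmetic, not figures from the original post), the Newtonian relation says the extension a lens needs past its infinity position is about f²/x for a subject at distance x:

```python
# Sketch of the geometry (my own model and numbers, not the author's):
# image-side extension past infinity focus is roughly f^2 / x, where x is
# the subject distance measured from the front focal point.

def extension_mm(f_mm, subject_m):
    """Extension past the infinity position, in mm (Newtonian approximation)."""
    x = subject_m * 1000.0 - f_mm  # subject distance from the front focal point
    return f_mm ** 2 / x

# At 3 m, the RF cam moves as if it were driving a 51.6 mm lens (~0.9 mm
# of travel), while a 105 mm optical unit must travel roughly four times
# as far -- which is why the lens helicoid gears the motion up internally.
print(round(extension_mm(51.6, 3.0), 2))
print(round(extension_mm(105.0, 3.0), 2))
```

This is also why a slightly thin adapter shifts focus more visibly on a long lens: the same flange error is a bigger fraction of the cam-equivalent travel.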
But there is a different way to hack this with the 135mm, 105mm, and 85mm Nikkors: simply apply a thin and even coat of clear nail polish to the RF tab on the lens. This is a trick that you could theoretically do with lenses that have a rotating RF coupling ring (not tab), but it works exceptionally well with the Nikkors because the camera’s RF roller simply rests on the tab and doesn’t roll along it. This means that you only need to get the coating thickness right over a very short distance. Materials needed:
- Sally Hansen clear top coat (not “nail nourishing,” just the hard kind).
- CVS Beauty360 brand Nail Polish Corrector Pen (essentially a marker full of acetone that you can use to thin or remove extra nail polish).
- LensAlign focusing target (if you own a Leica, you really want one of these anyway, just to figure out what the devil all your lenses are doing as you stop down).
- Reading glasses.
So basically all you need to do is put a very thin coat of polish on the polished surface of the tab. Let it dry for 20 minutes. Here is the goal:
- At f/2.5, your focus should be such that the 0 point is barely focused, with most of the DOF in front.
- At f/2.8, your focus should be dead-centered around 0. The lens is actually way sharper here than at f/2.5. Doesn’t seem like much of an aperture change, but it is.
- At f/4, your focus will be such that 0 will barely be in focus, with most of the DOF to the rear.
- From f/5.6 down, the DOF will grow so that 0 is always in focus.
If it works, you’re done. The focusing errors this might induce at longer distances are subsumed by increasing depth of field. If you need another coat, add one. If you are now front-focusing too much, use the Corrector Pen to thin out the extra (or buff some off with a very fine nail buffer).
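For a sense of how sensitive this is (a back-of-envelope model of my own, with an assumed coat thickness – the post doesn’t quantify it), the M cam is calibrated as if every lens were 51.6mm, so a thickness change δ on the tab moves the ranged distance by roughly (s/51.6)²·δ:

```python
# Back-of-envelope (my own model and assumed numbers): the rangefinder cam
# is calibrated to a 51.6 mm standard, so a tab-thickness change delta maps
# to a subject-distance shift of about (s / 51.6)^2 * delta.

def ranging_shift_mm(subject_m, coat_mm, f_std_mm=51.6):
    """Approximate change in the ranged distance for a coat of coat_mm."""
    s = subject_m * 1000.0
    return (s / f_std_mm) ** 2 * coat_mm

# A hypothetical ~0.02 mm coat of polish moves the ranged point by several
# centimeters at 3 m -- the right order of magnitude for trimming back focus.
print(round(ranging_shift_mm(3.0, 0.02), 1))
```

The quadratic in s is why the error you dial out at the test-target distance grows further out, and why the author leans on depth of field to absorb it.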
Never file or try to grind down the tab if your lens is front-focusing. Unless you can do it totally square, your lens will behave differently on different cameras. Leave this situation to a pro.
The word Columbusing has become a thing for describing the phenomenon by which a person believes that he is discovering something that in reality had always existed. It certainly seems possible that this is happening when people write reviews of cameras or films. I have now read hundreds of film reviews in particular, and as an old-time Gen Xer, I realize that these writers are in a position to do one thing: demonstrate whether they as photographers can get a good image out of the material. The rest is of limited use.
Cachet qua cachet
Often, but not always, a film-review article will take this rough agenda. I think if you go back on my old site via the Wayback Machine, you may even find me doing this (though at the time I was writing about film, the cachet step wasn’t there, since almost all of today’s discontinued films were still sold then; in the early 2000s, when most of those pages were being written, film was just starting its tailspin).
Cachet signaling. This is the prelude. It usually consists of a description discussing how “those in the know” understand Film X (likely discontinued before the author ever picked up a camera or, in some cases, before the author was born), some information cobbled together from Google searches, and how the author came into possession of the now-expired film of unknown history, storage conditions, etc.
The low-sample test. Film X is frequently shot with a camera of significant vintage and unknown meter accuracy, sometimes used in conjunction with a meter of a certain age. Film is either commercially processed or developed once, whether by the book, by guess, or by the Massive Dev Chart (which can also be a crapshoot). Bonus points are awarded for random-guess compensations for the film’s age. Double secret bonus points if a restrainer is involved.
Abstraction to what the film is “about.” Author concludes that Film X is magical for xyz reason and that you should pay some scalper (or re-labeler) big time to get it.
Just stop here for a second. I am impressed at how good some of these writers are at photography. They have an eye. They can take a good picture and make a pleasing output. But nothing else they are doing is very instructive because their experience is not accurate or repeatable.
Call it a generational (or maybe half-generational) thing. As a group, Baby Boomers walked away from film photography and neither preserved nor transmitted decades of institutional knowledge on the subject. Most Gen X people know film as something you would shoot and take in to be processed. Even for them, unless they made pictures professionally or for a hobby, film photography became disposable as soon as digital became cheap. Which brings us to the millennial children of boomers: a knowledge discontinuity leads to satisfying feelings of discovery. But just as Columbus’ setting foot on Hispaniola did not mean a “new world” for peoples who were already there, superficial film reviews provide little (and really no) novel information.
Do b/w films really have looks?
But let’s back up to something in the cold light of day: with a few exceptions that came very late in the game, film was never designed to have an aesthetic “look.” It was designed to have a function. That drove aesthetics. To a point.
Almost 20 years into the 21st century, conventional black-and-white film has no real mysteries. For most of recorded history, film followed a pretty regimented set of tradeoffs: slower film had finer grain and finer tonal rendition, and film got grainier and lost dynamic range as it increased in speed. Although tabular-grain b/w films helped increase performance, most of what you see in black-and-white films is the product of design tradeoffs rather than some deliberate aesthetic proposition.
Recall that the basis of film photography was science. I would suggest that, after a lot of time developing film, the differences between films of a given type and speed are actually relatively minor compared to the effects of varying developer, time, temperature, and agitation. Let’s take an example: Tri-X and TMY are different films, right, Tri-X with an S curve and TMY straight? Here is that classic Tri-X characteristic curve.
Ok, and here is your philistine, “robot,” “soulless” TMY, also developed in D-76:
Now develop both in T-Max developer and overlay the curves (black is TX, red is TMY). Don’t have a heart attack, but there are far more similarities than differences in response. Maybe a minute’s difference in developing time. Oh no…
But wow, this was like the holy of holies in differences in “look,” right? Nothing should be very surprising here; tabular grains aside, the reaction of silver halide crystals to photons has not changed at all in 150 years of film photography.
So today, some films are grainier than others, some are contrastier than others, some are faster than others, normalized for a developer. But the choice and deployment of developer (if not also every other step of the output chain) can hugely influence or obliterate the “curve” which is the seat of the “look.” In other words, film is just a variable, and from a tone and grain standpoint, perhaps it’s far less of one than we thought.
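To make the point concrete (with a toy model I made up for illustration – this is not real sensitometry or actual Tri-X/TMY data), you can sketch an H&D curve as a soft-clipped line and watch a change in development contrast swamp a small film-to-film toe difference:

```python
import math

# Toy H&D curve (invented for illustration; not measured film data):
# density rises from base fog toward a shoulder as a logistic function of
# log exposure, with "gamma" standing in for development contrast.

def density(log_e, gamma, d_min=0.2, d_max=3.0):
    return d_min + (d_max - d_min) / (1.0 + math.exp(-gamma * log_e))

exposures = [-2, -1, 0, 1, 2]
film_a = [density(le, 1.4) for le in exposures]              # "film A"
film_b = [density(le, 1.4, d_min=0.25) for le in exposures]  # slightly different toe
pushed = [density(le, 1.8) for le in exposures]              # film A, longer development

gap_films = max(abs(a - b) for a, b in zip(film_a, film_b))
gap_dev = max(abs(a - p) for a, p in zip(film_a, pushed))
print(gap_films < gap_dev)  # development moves the curve more than the film swap
```

Under these made-up numbers, swapping the “film” shifts density by a few hundredths, while changing development gamma shifts it by several times that – which is the article’s point about developer choice owning the “curve.”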
Did consumers ever actually understand color film?
When you get to color film, things get more complicated, because color films start with silver halide that is bleached out and functionally replaced with organic dyes. Color dyes are fickle.
When it was still made in a bunch of varieties, color negative film itself was somewhat inscrutable to anyone but pros and the very serious amateur. Moderately skilled (or more accurately, moderately informed) photographers knew that some types of film were better at skin tones than others (such as Kodak Vericolor III). But for Joe Average – who had a skill level equivalent to most people writing about film today – pretty much every C-41 negative film went through a minilab/printer, which was a highly automated way for drugstore personnel to make magic from your little canister and hopefully not destroy the negatives in the process. If you were a pro, you would send your film out to a pro lab, where professionals would make magic from your little canisters of film and hopefully not destroy the negatives in the process.
Although competing brands of film within a certain type (color negative, color slide) used different methods of getting to the “right” color, skin tones were the pivot. Color, oddly, never really got more differentiated than high-contrast/saturation (Velvia, Portra VC, etc.) and regular (Provia, Ektachrome, Portra NC…).
Did you ever notice how much people hate on Kodak ProImage 100 for being excessively grainy and undersaturated? Aside from slight desaturation, it’s essentially where 100-speed film was when people stopped putting money into developing 100-speed consumer color film. The point-and-shoot camera – typically with a slow lens – put a high premium on 400-speed performance, and that’s where manufacturers went. Faster film got to the point where Kodak HD200 and HD400 were far smoother than good old GA-135. Here is an easy conversion from consumer to prosumer to pro:
- Gold 100 gen 4 » Extinct » ProImage 100 (rebalanced)
- Gold 200 gen X » ColorPlus
- Gold 400 gen 6 » some other steps » Gold Max 400
- Ektar 125 » Ektar 100 » Royal Gold 100 » Extinct » Ektar 100
- Royal Gold 200 » Kodak HD200 » Extinct
- Ektar 400 » Royal Gold 400 » Kodak HD400 » Extinct
- Vericolor III » Portra 160NC » New Portra 160
- Portra 160VC » Replaced by New Portra 160
- Portra 400NC » New Portra 400
- Portra 400VC » Replaced by New Portra 400
- Portra 400UC » Extinct
Slide film might have been even more mysterious — and represented a medium that spanned the absolute best professional photography and the worst amateur work feared by man. And nothing in between. You either had it or you didn’t. Transparency film was sold in large quantities to tourists and people wanting to shoot color in the really old days, which made a lot of sense when a goddamn color photograph was a big deal, even if it took 6/12/36 exposures to get one good one. Kodachrome was a tri-layer black-and-white film that got an infusion of dye during processing: slow, sharp, permanent, and capable of delivering a nice-looking picture, assuming the constellations were lined up. And if they weren’t: blown highlights, blocked shadows, and blue. Slides were the ultimate measure-twice, cut-once medium — but few people bothered to measure. Ektachrome and Fujichrome made it cheaper and easier to generate huge boxes of vacation slides that no one wanted to see — and ultimately, faded-out transparencies that no one could see.
Today, unless you plan to look at tiny positives backlit by homemade ground glass after the Zombie Apocalypse, or have brought some friends over, Buffalo Bill style, to watch vacation pictures projected on a screen (“it puts the slides in the carousel”), digital photography does everything slide film did – but better. When you can vary the ISO, get more dynamic range, infinitely adjust contrast and saturation, and crop at will, it’s hard to argue that Ektachrome came back for anything but nostalgia and motion pictures. Which is a worthy reason. Let’s just not pretend it’s scientific.
In addition to allowing things that could never happen with a filter-based minilab, the rise of the Fuji Frontier in the late 1990s was really the nail in the coffin of film-awareness. With hyper-sharpening, dynamic-range compression, and ultimately smart automatic operation, the Frontier made every photo look perfect. The workflow is not unlike how people deal with negatives today: develop, scan, print (in the case of the Frontier, onto photo paper, using a laser). Today, the Frontier’s weirdly regimented view of the world lives on in the hackneyed wedding presets used in Lightroom by an army of semiprofessional shooters with Canon 5Ds.
And if you remember old film packaging, there is the warning that “color dyes in time may fade” (Gospel of Eastman Kodak, K41:1). Everything on earth is capable of influencing the colors and balance of color films: lot, storage temperature, age, exposure, environmental radiation, magnetic fluids, and phlogiston. The same goes for the output media, which if you’ve seen old Fujichrome slides, can be interesting.
That’s part of why the support infrastructure was so complicated, whether it was a minilab computer or CC10, 20, and 30 filters in cyan, magenta, and yellow. And why pros – once they had a particular lot of film dialed in, like a batch of Ektachrome – stayed with it as much as possible. And even pros sometimes had to lean on color-correction experts at labs to make every one of those Glamour Shots® perfect.
Hopefully you have not found this discussion offensive, but as an almost old person, I am not at all hesitant to tell you that everyone in their 20s has a Dunning-Kruger delusion when it comes to the technical aspects of photography. As someone who was there for the twilight of mainstream film photography, I would mostly observe that until the bitter end, film R&D was aimed at making the medium a neutral one that could be manipulated via development, printing, or even scanning – and that today, you can easily mistake random errors for some intentional aesthetic balance.
This is an article originally written in 2001, with a lot of updates.
How did these things get started?
The former Fujisawa-Shoukai had quite a bit of pull over Konica. Recall that by 1992, Konica had made what was seen as its last serious film camera, the Hexar AF, with its legendary 35mm f/2 lens. F-S, as we will call it here, commissioned in 1996 a run of Hexar lenses in Leica thread mount (LTM). This was long before what people in the U.S. called a “rangefinder renaissance”; in fact, at the time, very little in LTM was being produced in Japan, with the exception of the Avenon/Kobalux 21mm and 28mm lenses.
The first product of this program was the 35mm f/2L Hexanon, which looked like this:
This lens is simply a clone of the Hexar AF lens, right down to having the same filter size. The coatings look identical, which is not a surprise. Consistent with some other contemporaneous LTM products, it did not have a focusing tab. On close inspection, the scalloped focusing ring looks like that on a Canon 35mm f/2 rangefinder lens or, more contemporaneously, the 21mm Avenon/Kobalux lens. The chrome finishing on an alloy body is reminiscent of modern-day ZM lenses. None of this, of course, will disabuse you of the notion that the Japanese lens production industry revolves around common suppliers. This lens shipped with a black flared lens hood (no vents) and a bright sandblasted chrome “Hexanon” lens cap that fit over the hood.
F-S would then go on to commission the 50/2.4L (collapsible) and 60/1.2L Hexanon lenses. The latter is famously expensive now; I have an email from F-S where it was 178,000 yen (about $1,400). The 50/2.4 will get its own article here.
In 2000, around the time that Avenon was re-releasing its 21mm and 28mm lenses as “millennium” models, F-S had another run of the 35/2 made. These were at least superficially different from the silver ones:
- At the time, black paint was all the rage, so the lens was executed in gloss black enamel and brass. The enamel in the engravings is almost exactly the Leica color scheme.
- The filter size decreased to 43mm, the aperture ring moved back, and the focusing ring thinned out to give the impression of “compactness” and justify the “ultra-compact” (UC) designation that was historic to some Konica SLR lenses.
- The focusing mechanism changed to a tab (which helped justify the thinner focusing ring and lighter action).
- The coatings changed to a purplish red to help support the notion of “ultra-coating.” As you might know, multicoating can be customized for color.
The close-focus distance (what would be the third leg of a UC designation), the focusing rate of the helicoid (0.9m to ∞ in about 1/4 turn), and the overall length did not change. The new lens was priced at 144,000 yen, which in dollars would have put it at just under the cost of a clean used 35/2 Summicron v4 (at the time, these ran from about $700-1,200) and about half of what a Leica 35mm Summicron-M ASPH would cost.
Handling versus Leica lenses
Since both of these are optically identical, it might make more sense to discuss the ways in which these are similar to, or different from, the vaunted Summicron v4 King of Bokeh License to Print Money®. They are both like the Leica version but in different ways.
The UC has the same smooth tab-based focusing as the Summicron. It is very smooth and fluid. That said, the aperture ring is very “frictiony.”
The original L has a focusing feel a lot like a Canon RF lens, owing to the similar focusing ring, which has more drag and no tab. The aperture ring, however, has the same “ball-bearing-detent” feel as the Leica.
The overall length of all three lenses is similar, though as noted above, there is something of an illusion that the Leica and UC are smaller than the L.
The Konica lens, like the Hexar lens it was based on, is a clone of the 3.5cm f/1.8 Nikkor rangefinder lens, but for all practical purposes, the Hexanon is the same lens as the Summicron v4. As you can see, there is a very smooth falloff from center to edge wide open and pretty much eye-burning sharpness at f/5.6.
Whoa. That looks familiar! Below is the Leica 35/2 v4 as shown in Puts, Leica M-Lenses: Their Soul and Secrets (an official Leica publication). Except the Summicron’s optimum aperture is a stop slower.
On interchangeable-lens bodies, all three lenses have the same focus shift behavior, requiring a slight intentional back-focus at f/2 and front focus up to f/5.6. It’s not like on a 50 Sonnar, but it’s there.
The original chrome version is a lovely lens and a nice match for chrome Leicas, at about 1/3 the price of a chrome Summicron v4 (yes, they exist…). If you like Canon lenses, you’ll be right at home with it. On the other hand, the UC version is smooth and sexy but getting to be as expensive as a 35/2 Summicron ASPH, which is actually a better lens.
People understand why tilt lenses exist – making super-expensive Canon DSLRs produce pictures that look like they were taken with a toy camera (or making the subjects themselves look like toys). No one knows, though, why shift lenses were once a thing. It’s all a matter of perspective.
The truth, from a certain point of view
Photography always has presented (and always will present) this problem: fitting a large object into a frame that is constrained by lens focal length. Conceivably, with a superwide lens you could get it all in, but then you end up with a lot of extra dead space in the frame. Which defeats the purpose of using large film or sensors.
If you want to get the whole thing in frame with the minimum number of steps or expenditure of time and money, your choices are to use a really wide-angle lens, to tilt a camera with a more moderate wide-angle up, or to learn to fly. All of these are sub-optimal. First, the really wide-angle lens is great in that you can capture the top of the object without tilting the camera. The problem is that making an engaging photo with a wideangle is actually extremely difficult – because it tends to shrink everything. Depending on where the sun is, it also stands a better chance of capturing the photographer’s shadow. Second, tilting a camera with a more moderate wide-angle lens up turns rectangular buildings into trapezoids, which works for some pictures but definitely not others. Finally, learning to fly is difficult. But watch enough Pink Floyd concert films, toke up with the ghost of Tom Petty, or study Keith Moon’s hotel swims, and you might.
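The trapezoid effect is easy to put numbers on (a pinhole-camera sketch of my own, with made-up building dimensions): tilt the camera up, and points higher on the facade sit deeper along the optical axis, so horizontal edges project narrower the higher they are.

```python
import math

# Pinhole sketch (my own geometry; the dimensions are invented): a camera
# dist_m from a facade, tilted up by tilt_deg. After rotating into the
# camera frame, a point at height h sits at depth h*sin(t) + d*cos(t),
# and its projected width scales inversely with that depth.

def projected_width_mm(width_m, height_m, dist_m, tilt_deg, f_mm=28.0):
    t = math.radians(tilt_deg)
    depth = height_m * math.sin(t) + dist_m * math.cos(t)
    return f_mm * width_m / depth

base = projected_width_mm(20.0, 0.0, 30.0, 25.0)   # edge at camera level
top = projected_width_mm(20.0, 40.0, 30.0, 25.0)   # same edge 40 m up
print(round(base, 1), round(top, 1))  # the rectangle renders as a trapezoid
```

With these numbers, the top edge of the building projects roughly 40% narrower than the base, which is exactly the converging-verticals look.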
Do you skew too?
Assuming you are reasonably competent, you can correct perspective using software, by skewing the canvas. This is a take on the old practice of tilting the paper easel under an enlarger. That was a limited-use technique, generally practiced by people who could not use view cameras and tripods but still had to come up with a presentable representation of a tall object. There were (and substantially still are) three issues here: crop, depth of focus, and disproportion. First, the crop came from the fact that tilting an easel meant that the projected image was trapezoidal and not rectangular, meaning that from the get-go, it had to be enlarged until the paper was filled. This still happens with digital. Second, the depth-of-focus issue is related to the fact that enlarging lenses are designed to project to a surface that is a uniform distance from the enlarger (i.e., projecting one flat field onto another). You would have to stop down the lens severely, or use a longer focal length, which in turn required a taller enlarger column to maintain the same magnification.
The digitization of perspective correction uses computation to project the flat image onto a skewed plane, using interpolation and unsharp masking. This solves the apparent sharpness issue, but it degrades quality. Finally, disproportion comes from the fact that straightening converging verticals starts from a place where certain details are already compressed by the original perspective. For example, looking up at a tall building from a short distance, the windows look shorter (top to bottom) than they would if you were looking straight at a window from its own level.
So even when you manage to re-skew the canvas/field/whatever, you now have an image that is too “fat.” On enlarging paper, you would be forced to make a cylindrical correction to the negative (which is not practical in real life). On digital, there are specific transformations that you can perform to correct this (for example, the adjustable ratios in DxO Perspective and Lightroom).
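A minimal sketch of what the software transform is doing (hypothetical numbers of my own; this is not DxO’s or Lightroom’s actual algorithm): each row is stretched horizontally by the inverse of its keystone squeeze, and then the whole image wants a compensating vertical stretch to undo the “fat” look.

```python
# Sketch with hypothetical numbers (not any real tool's algorithm):
# straightening converging verticals is a per-row horizontal stretch;
# the "too fat" result then needs a vertical ratio adjustment.

def unkeystone_x(x, row_frac, convergence=0.3):
    """Stretch x back out; the top row (row_frac=1) was squeezed by
    (1 - convergence), so multiply by the inverse of the squeeze."""
    return x / (1.0 - convergence * row_frac)

bottom = unkeystone_x(100.0, 0.0)   # bottom row untouched
top = unkeystone_x(100.0, 1.0)      # top row stretched back out
aspect_fix = 1.0 / (1.0 - 0.3 / 2)  # crude average vertical ratio to rebalance
print(round(bottom, 1), round(top, 1), round(aspect_fix, 2))
```

The per-row stretch is also why the corners degrade: pixels near the top are interpolated from fewer original samples.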
So skewing is a useful technique, but it’s still better to skew less.
Shifting your thinking: the mirror years
View cameras have used the concept of shift and tilt to adjust for situations where the viewpoint was wrong (shift) or depth of field was insufficient (tilt). Raising the front standard of a bellows-type plate camera was always standard practice to improve photographs of tall objects, especially in an era where wideangle lenses were not super-wide by today’s standards. Lens board movements were easy to achieve because there was always some distance between the lens mount and film plane in which to insert a mechanism to raise the lens relative to the film. And because there is no control linkage between the lens/shutter and the rest of the camera, you’re not losing automation. You never had any!
But these cameras were not small. The smallest bellows-type camera with lens-movement features was the Graflex Century Graphic, a delightful 6×9 press-style camera. On many bellows-type cameras, though, there was no real provision for a shifting viewfinder. The press-style cameras had wire-frame finders that provided a rough guide, but outside of a gridded ground glass, nothing could tell you whether the lens was actually level. Later in the game, the Silvestri H was arguably the first camera with automatic finder shift, as well as a visible bubble level. Linhof used a permanently-shifted lens assembly (and viewfinder) on the Technorama PC series, and Horseman provided shifted viewfinder masks for the SW612P, though these were available only as “all the way up/down” or “all the way left/right.”
The shift mechanism, though, could not be adapted to SLRs easily due to three constraints:
- Most SLR lenses are retrofocal – meaning that the nodal point of the lens is more than the stated focal length from the imaging plane. It takes a ton of retrofocus to insert a shift mechanism into an interchangeable lens that also has to focus past a mirror box. More retrofocus means bigger lenses. So when perspective-control lenses began to appear for SLRs (35mm and 6×6), they were huge. Maybe not huge by today’s standards, but a 72mm filter size is pretty big for a Nikon SLR whose normal filter size is 52mm.
- To achieve an image circle large enough to allow shift around what is normally a 24x36mm image circle, it is necessary to use a wide field lens and stop it down severely (illumination with almost any lens becomes more uniform as it is stopped down).
- Most cameras can only meter PC lenses correctly in their center position, wide open. Because shift mechanisms eliminate direct aperture linkages to the camera, you’re back to the 1950s: metering and focusing – then shifting – then manually stopping down to shoot (now corrected by the electronic aperture units in $2K-plus modern Nikon and Canon PC lenses).
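The image-circle constraint in the second point is simple arithmetic (my own numbers): shifting a 24×36mm frame by s mm along its short side means the lens must illuminate the diagonal of the swept area, √((24 + 2s)² + 36²).

```python
import math

# Quick arithmetic (mine, not the author's): the image circle needed to
# cover a 24x36 mm frame shifted s mm along the short side is the diagonal
# of the swept area, sqrt((24 + 2s)^2 + 36^2).

def circle_mm(shift_mm, h=24.0, w=36.0):
    return math.hypot(h + 2.0 * shift_mm, w)

print(round(circle_mm(0.0), 1))   # unshifted full-frame diagonal, ~43.3 mm
print(round(circle_mm(11.0), 1))  # with 11 mm of rise: a much larger circle
```

Eleven millimeters of rise pushes the required circle from about 43mm to about 58mm – medium-format territory, which is why SLR shift lenses are wide-field designs meant to be stopped well down.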
Viewing is not a lot of fun with 35mm SLRs; when stopped down, PC lenses black out focusing aids (like split prisms and microprisms) and still require careful framing to keep parallel lines parallel. So you need a bright screen – plus a grid or electronic level. Suffice it to say, a lot of people regard perspective control to be a deliberative, on-tripod exercise when it comes to SLRs and DSLRs. Maybe it’s not.
A new perspective: full frame mirrorless?
So here come mirrorless cameras (well, they came a while ago). Now you can fit any lens ever made to any mirrorless body. The optical results may vary, but at least physically, they fit.
— Getting the lens in place
So I grabbed the nearest available PC lens I could find, which was a 28/3.5 PC-Nikkor. Not AI, not even from this century. Released in 1980, it is a beast. I plugged this into a Nikon-lens-to-Konica-AR-body adapter, and from there into an Imagist Konica-lens-to-Leica-M-body adapter. Why all these kludgy adapters? The answer is actually pretty simple: the Imagist has the correct tolerance to make infinity actually infinity, and the Konica adapter does the same. This is not a small consideration when you might be zone-focusing a lens.
Then I plugged this kludgefest into a Leica M Typ 246 (the Monochrom). Because why not start with the OG of mirrorless camera platforms? Of course, you can’t use a rangefinder with a Nikon SLR lens, so I plugged in an Olympus EVF-2 (which is the “generic” version of the Leica EVF-2).
— Getting it to work
The Nikkor has two aperture rings. One is the preset, where you set your target aperture. The other is the open/close ring, which goes from wide-open to where the preset ring is set.
I turned on focus peaking and set the preset for f/22 and the open/close for f/3.5. I was able to establish that infinity was correct.
Next, I stopped down the lens (both rings to f/22), expecting that just as on an SLR, the finder would black out. It didn’t – the EVF gained up, and everything worked perfectly.
I hit the “info” button to get the digital level, and it was off to the races. The lens has a rotation and a shift.
— But how well does it actually work?
The functionality is actually surprisingly good. On a Leica, it’s just: stick the camera in A, stop the lens down to f/16 or f/22, and point and shoot.
The digital level obviates the need for a tripod or a grid focusing screen; you really just frame, turn the shift knob until the perspective looks right, and there you go. There are a couple of limits:
You can’t use maximum shift along the long side of the frame, but the only penalty is a tiny shadow in the corner. And that’s with a full-thickness 72mm B+W contrast filter. You get 11mm of shift up and down (i.e., along the short dimension of the frame) and 8mm left and right (nominally; as I stated, you can get away with more under some circumstances).
Aside from that, there are some minor annoyances like making sure you haven’t knocked the aperture ring off the shooting aperture. Or knocking the focus out of position (it’s a very short throw…).
BUT THE DUST! And here is the rub – shooting at f/16 and f/22 brings out every dust spot on your sensor. Normally, you would shoot a Leica M at f/5.6, f/8 max. But PC lenses – like their medium- and large-format cousins – are designed to max out their frame coverage at very small openings. So I had never cleaned the sensor on my M246 in four years, and I got to spend an evening working on a hateful task that included swabs and ethanol and bulbs and the Ricoh orange lollipop sensor cleaner.
— And how sharp?
Very. Diffraction is supposed to start becoming visible at f/11 on this combination at 1:1, with it showing up in prints at f/22.
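Those aperture figures track with the standard Airy-disk arithmetic (my own numbers: green light, and the roughly 6µm pixel pitch I’m assuming for a 24 MP full-frame sensor):

```python
# Standard diffraction arithmetic (my numbers, not the author's tests):
# Airy-disk first-minimum diameter = 2.44 * wavelength * f-number.
# Against a ~6 micron pixel pitch, the blur spot is a couple of pixels
# wide by f/11 and about five pixels wide by f/22.

def airy_diameter_um(f_number, wavelength_um=0.55):
    return 2.44 * wavelength_um * f_number

for n in (5.6, 11, 22):
    print(f"f/{n}: {airy_diameter_um(n):.1f} um")
```

At f/22 the blur spot is around 30µm, several times the pixel pitch, which is why diffraction that hides at normal Leica apertures is plainly visible at PC-lens working apertures.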
Pictures stand up to the old 1:1 test, except in the corners where you have over-shifted along the long side. Recall that in a lot of situations, the last bits of the corners are usually sky, where a tiny amount of blur is not going to be of any moment.
How well this will work on a color-capable camera is a question, especially since lateral color would come out. But right now, this is posing the most acute threat to 6×4.5 cameras loaded with TMY.
Well, you have that day where you feel like you want to step off the film train. Oddly enough, it was not because some digital sensor came along with massive resolution, or film hit $8 a roll, or the EU outlawed developing chemicals. Or you name the calamity.
Here, it was the product of well-meaning backward-compatibility. I had this thought as I was looking at a roll of TMY shot with a Silvestri H that probably cost $10,000 new. It uses standard-style roll backs made by Mamiya that are bulletproof and have nicely spaced frames. The pictures themselves were sharp, undistorted, and perspective-corrected. But they were ruined for optical printing because backing paper numbers – useful only to people with red-window cameras – transferred onto the emulsion. I felt like Constantine the Great, kinda. I looked in the sky, and the sign of “Kodak 14” was shining down on me. In this sign you will [be] conquere[d].
Browniegate (let’s give it a good name, at least) occurred because Kodak had an issue with backing paper on 120 film (this affected some lots made two to four years ago). Environmental conditions could cause backing paper frame numbers to transfer onto the emulsion of the film and show up in low-density areas, especially the sky. Lomographers probably loved this. Everyone else, not so much.
Kodak handled this reasonably well (but not optimally),* and it has been very good about replacing defective film. Given that they had few choices for backing paper (only one or two suppliers worldwide) and that they probably couldn’t anticipate the full range of environmental abuse film might experience in storage, I cut them some slack. We all accept that any time we use film, we could end up with no pictures. Grab the fix instead of the developer. Leave a rear lens cap on. We’ve all been there. But the backing paper thing is not within user control. And unlike the bad roll that turns up once in every hundred thousand, the frame-number problem hits far more often. It’s not like lightning. It’s more like a tornado ripping through farm country.
The what is one thing. But the why is another. Laying aside bad material choices by the backing paper manufacturer, the underlying issue is that frame numbers on paper backing were last needed for serious cameras in the 1950s (the Super Ikonta C may be the last one), and the ruby-window method of seeing what frame you are on persists mainly in (1) Brownie cameras whose design goes back to 1895; (2) Lomography-oriented products; and (3) current large-format roll holders that should know better. There is actually no excuse for this last category, since there is no patent for frame counters that is still valid, and roll backs are only made in LCCs now. It’s the support of these older and cheaper cameras that requires frame numbers past #1 – and in a weird way, the shadow of the 19th century is still causing problems in the 21st.
The bigger question this raises is this: if backward compatibility is a significant part of the business case for 120, does that mean that when the ruby-window market fizzles out, it will take serious medium-format photography with it? Best not to think about that.
*By “not optimally,” I mean it would have been nice to assign a new catalogue number to the new backing paper, so that people buying film from B&H for critical use would not get stuck with old product – as I did when I was going to Singapore, bought 20 rolls of TMY in March 2019, got 158xxx TMY, and had backing number transfers on every roll, with up to 75% of 6×4.5 frames affected on any given roll. Or maybe use a laminated paper that has punched-out numbers rather than printed ones.
Mark my words (as if they are that important): the future will not look kindly on the gimmick-bokeh that dominates the aesthetic of 2000s photography, just as we get a chuckle out of 1970s pictures with excessive sunsets, lens flare, and nipples. People yet to be born will wonder why photographers in the 2000s took insanely expensive lenses, better than any ever designed to date – and cheaper – and then used them to simulate astigmatism, near-sightedness, and macular degeneration. The most charitable explanation will be that photographers were trying to show solidarity with the visually impaired.
The buzzword (today) is subject isolation. But why are we isolating a subject from its context? What’s wrong with the context? Are we creating millions of pictures of the same people’s faces with nothing else in the shot? Are they people or products?
In the present, good composition can still be shot at f/16. Small apertures are also obligatory on larger-format film cameras because a lot of those lenses have serious light and sharpness falloff at the edges at their maximum apertures, especially with the focus at infinity. Nobody buys a $3,000+ 6×12 camera to get the types of pictures you could see from a $250 Lomo Belair.
There is a reason that early autoexposure SLRs used shutter priority: if you had to make a choice for what would be in focus, it would be your subject; if you had light to spare, you’d want to use as small an aperture as your lowest desired shutter speed would support. And that thinking underpins historic picture-making. Intentionally shallow depth of field is not a feature of most of the world’s most iconic images. Arnold Newman did not need shallow depth of field to shoot Stravinsky. Eugene Smith did not shoot Spanish policemen as an exercise in subject isolation. And David Douglas Duncan captured every crease in the face of an exasperated Marine captain. How about Richard Avedon with his Rollei and every celebrity on earth? There are exceptions, but throughout history, wide apertures were primarily driven by a need to keep shutter speeds high enough to avoid blur. Light constraints are not such a consideration when ISO 6400 is a thing on digital cameras.
The worst part about bokeh, and the one no one talks about, is that it can actually be unpleasant by causing eyestrain (or maybe brain-strain). In many ways, a human eye – if you looked at the whole image projected on the retina at once – resembles a cheap Lomo-type lens: sharp in the middle (the fovea) and blurry at the edges. It even has a complete blind spot (the punctum caecum). The eye has a slow aperture, estimated by some to be f/2.8. But, dammit, everything looks like it is in focus. That’s because your eyes are continuously focusing on whatever you are looking at. Your brain is continuously piecing together fragmentary information (the blind spot thing is incredible – vertebrate biology beat Adobe to content-aware fill by about 500 million years). The end result is what looks (perceptually) like a scene where everywhere you look, things are in focus. It’s actually pretty amazing that this works.
In every photo, there is a compression of three dimensions into two. More depth of field allows your eyes to wander and allows you to process the scene fairly normally. When you look at bokehlicious pictures, definition is concentrated on one object (and often just a piece of it). You might find your eyes (or visual perception) constantly trying to focus on other aspects of the scene besides the subject. But neither your eyes nor computational photography can remove extreme artifacts once they are “flattened.”
Scroll back up to the picture at the top. Same composition, shot at f/8 and f/1.5 with a 50mm ZM Sonnar. Look left and look right. On the left, you can look almost anywhere in the scene and see whatever visual element you want to scrutinize, with at least some level of detail. On the right, you are always and forever staring into the Contractor Ring®. You can try to focus on other elements of the picture on the right, but the information simply is not there. Need an aspirin?
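The gap between those two frames can be put in numbers with the standard thin-lens depth-of-field approximation. A rough sketch, assuming a 50mm lens, a subject at 3 m, and a full-frame circle of confusion of 0.03mm (all of these are my assumed values, not figures from the shoot):

```python
# Compare total depth of field at f/1.5 vs f/8 for a 50 mm lens.
# Standard hyperfocal-distance approximation; assumed values:
# full-frame circle of confusion c = 0.03 mm, subject at 3 m.

FOCAL_MM = 50.0
COC_MM = 0.03
SUBJECT_MM = 3000.0

def dof_mm(f_number: float) -> float:
    """Total depth of field (mm) at the assumed subject distance."""
    h = FOCAL_MM ** 2 / (f_number * COC_MM) + FOCAL_MM  # hyperfocal
    near = h * SUBJECT_MM / (h + SUBJECT_MM - FOCAL_MM)
    if h <= SUBJECT_MM - FOCAL_MM:
        return float("inf")  # far limit is at infinity
    far = h * SUBJECT_MM / (h - SUBJECT_MM + FOCAL_MM)
    return far - near

for n in (1.5, 8):
    print(f"f/{n}: about {dof_mm(n) / 1000:.2f} m in focus")
```

Under those assumptions, f/1.5 leaves roughly a third of a metre in focus while f/8 gives nearly two metres – about a sixfold difference, which is the left frame versus the right frame in a nutshell.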
And it can be fatiguing, more so now that the aesthetic is played out and anyone with an iPhone X can play the game. Pictures with ultra-shallow DOF don’t look natural. They are great every once in a while, or if you need a 75/1.4 Summilux to get an otherwise-impossible shot, but otherwise, get off your ass and move the camera (or your subject) into a position with a reasonable background.
# # # # #