Yeah, we still have it. Not the magic touch, but the scanner (with the magic touch). Portions of the below appeared on dantestella.com years ago; I have added some updates and new notes on light sources, a subject on which there is tremendous misinformation on the ‘net.
What is a Pakon?
The company is best known for its plastic slide mounts, which in the old days you would buy to fix the cardboard mount that your projector mangled. But as a division of Kodak, it began to produce minilab scanners (the F135, F135 Plus, F235, F235 Plus, and F335).
Many people are familiar with the Pakon F135 and F135 Plus, which have become very popular as tabletop scanners. What makes these scanners genius is that they do scanning on one pass, without annoying prescans or the rat-a-tat-tat of stepper-motor driven film scanners.
The PSI software is even more ingenious. Basically, you feed it a roll of film, and:
- It can take strips of film up to and including a 40-frame uncut roll.
- It scans all of the frames as a bitstream image at rates in the hundreds of frames per hour, with Digital ICE turned on.
- It uses DX codes on the film to determine the frame number and applies that to the filenames of the resulting files (JPG, TIFF, or RAW, to your preference).
- It automatically finds the frames, DX coding or not. On its software, you can adjust framing after the fact.
- It quickly and with astonishing accuracy corrects color and exposure, even on frames with exposure errors or fading.
- It spits out all of the files, in sizes up to 3000×2000 (this is a 2000dpi scanner) onto your output drive or media (some earlier models require software fixes to output at this resolution).
- It does not require a special console, just XP (real or emulated) with an unformatted N partition on the boot drive. You install the software and go to town.
If you are feeling especially technical, you can use the TLXclient software, which allows different bit depths, full-out-to-the-edges framing, unusual frame sizes (you can scan individual half frames or Xpan frames – or output them as full-resolution strips), and many other things. It comes into play more, one would surmise, if the Pakon is your only scanning machine.
How is a Pakon different from other negative scanners?
This minilab scanner differs from your Coolscan in a few key ways.
First, they are designed for speed. An F235 Plus, for example, will do 800 frames an hour at 3000×2000 resolution. Yes, that’s 33 rolls per hour, or a roll of 24 frames about every two minutes. Most people would burn through a lifetime of black and white 35mm negatives in a few days of work. The 135 series runs at about half that speed with ICE off.
With Digital ICE turned on, the 235 Plus still does 400 frames an hour. Reduce the resolution to one of the lower settings (such as what you would use for web-sized pictures or 4×6 prints), and it really flies. Part of the speed comes from obviating negative carriers, the cumbersome and relatively fragile part of any consumer-grade scanner. The rest is dispensing with the prescan, which introduces more complication in the process.
Here are the relative speeds of the Pakon models (Digital ICE off / Digital ICE on) for a maximum-resolution scan, in frames per hour. Loading film in strips slows this down slightly. The list follows the order in which the machines were released:
- F235 (400 / 250)
- F235 plus (800 / 400)
- F135 (293 / 220 @ 1500×2000)
- F335 (1053 / 790)
- F135 plus (477 / 387)
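For a sense of what these rates mean in practice, here is a quick back-of-envelope sketch (my own arithmetic based on the figures above, not anything from the Pakon software):

```python
# Throughput arithmetic from the published frames-per-hour figures above.
# Real-world loading of short strips will slow things down somewhat.

RATES = {  # frames/hour at max resolution: (ICE off, ICE on)
    "F235":      (400, 250),
    "F235 plus": (800, 400),
    "F135":      (293, 220),
    "F335":      (1053, 790),
    "F135 plus": (477, 387),
}

def rolls_per_hour(frames_per_hour, frames_per_roll=24):
    """Convert a frames/hour rate to 24-exposure rolls per hour."""
    return frames_per_hour / frames_per_roll

def seconds_per_frame(frames_per_hour):
    """Average wall-clock time per frame."""
    return 3600 / frames_per_hour

# The F235 Plus with ICE off:
fph = RATES["F235 plus"][0]
print(f"{rolls_per_hour(fph):.1f} rolls/hour")  # ~33 rolls, as claimed above
print(f"{seconds_per_frame(fph):.1f} s/frame")  # 4.5 seconds per frame
```

The 33-rolls-per-hour figure matches the claim earlier in the article; the F335 with ICE off works out to under 3.5 seconds per frame.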
One thing that is clear is that the speed of Digital ICE processing ramped up to where it was very close to the limit of the scanning speed. But that is of no moment if your life is all silver b/w or Kodachrome, where dust and scratch removal doesn’t work.
Second, Pakon scanners are designed for a minimum of human intervention. Despite the availability of an SDK for this scanner, the proprietary PSI software is the only fully finished piece that will run this scanner. This software, by the way, is brilliant in its simplicity. Even in “advanced” mode, it has only a few settings: what type of film (color, b/w, slide), how many frames per strip (4, 5, 6 or many), whether you want Digital ICE on or off (color only), and the roll number that will become the name of the folder when you save the roll. That’s it. The machine scans as much film as you want to give it, figures out where the frames are, does all color corrections without human intervention (unless you want to participate) and kicks out your choice of output (3 resolutions, JPG or TIFF, RAW or processed). It even reads the DX codes off of the film and gives each frame the name of the nearest barcoded frame number. Brilliant.
Buried in your program folder is something called TLXclient, which you can use for oddly-sized frames (such as half-frame 35mm and Xpan). It’s a little more geeky, but it lets you play with wide frames (a lot of the time you can get black all the way around a 24×36), play with the bit depth, and do other things that aren’t really central enough to PSI’s minilab mission.
Finally, the Pakons do not generate information that you do not need. The first thing a gearhead will look at is the scanning resolution. The maximum is 3000×2000 (6MP), which is an acceptable resolution for an 8×12 on a dye sublimation or inkjet printer, if not a Frontier.
“Wait, what about 4000dpi?” Most 35mm pictures don’t get enlarged beyond 8×12; most, in fact, are just shown on computer screens these days. For the situations where you need more, you can always use a high-end desktop negative scanner, pay to play on a Flextight, or have your work drum scanned.
If all you are out to do is quick proofs to see what is worth scanning with a much higher resolution machine – and you just want 1500×1000 thumbnails – the F235 plus blows out up to 3,000 of those an hour, or roughly one per second. You would need a spotter to catch the negatives flying out of the machine. And a second helper feeding it.
But let’s be real here. If you for some reason believe that you need to scan every single picture you have, you will never get it done on a normal negative scanner that runs with a carrier and Vuescan or Silverfast.
What’s different about the F235 Plus?
The 135 and 135 Plus have a “dog bowl” form factor in which film travels in with the sprockets at top and bottom, around a curve, and out the other side. Negatives end up neatly in the tray. They are not as fast as the 235 and 335 series machines for a number of ergonomic reasons in addition to the slower transport speed.
The 235, 235 Plus, and 335 use a larger chassis (about the size of a large bread making machine) and take film straight through and out the back into a negative bin made of Lexan. The 235 Plus and the 335 are the speed demons, with the 335 — exceptionally hard to find in working order — edging out the 235 Plus by 20% with no ICE and almost 100% with ICE. They can take shorter strips of film than the 135 – down to two frames – though you may want to use a chopstick to nudge the strip to engage with the sprocket rollers.
But the real difference with the 235 Plus is that it uses a halogen light source and not an LED. Many people have made uninformed suggestions that this bulb is somehow difficult to find, expensive, or otherwise a problem. It’s not. You can access it by taking one magnetic-catch cover off the scanner.
The exotic-sounding “Solux” bulb is actually a 12V, 50W EIKO MR16 (GU5.3) track light bulb whose only special parameters are a 24 degree throw angle and a 4700K calibration (so close to daylight). This bulb was not actually developed for the F235 series but was an off-the-shelf (and still in current production) art museum track light bulb whose fitting, voltage, and wattage are identical to bulbs in lamps you probably have around your house. So even if you had to wait to buy a 4700K version for a whopping $10-14, you could march down to the local hardware store, buy something reasonably close for $2, and be back in business in minutes. Witness:
So what? you ask. LEDs go 10,000 hours instead of 1,000. Why should we put up with a bulb that has to be replaced? One could always point out that 1,000 hours on this machine is 800,000 b/w negatives, several times more than anyone other than a professional photojournalist shoots in a lifetime.
But the real reason is color. A lot of early Kodak scanners ran on halogen light sources. Why oh why? It’s all about color. Kodak was always fixated on perfect color in all of its systems, and at the time the F235 and F235 Plus came out – and even now – you could not get a Color Rendering Index of 98 with LED. CRI is a measure of how complete a spectrum a bulb produces compared to a reference light source, and until recently, LEDs scored very low because they have holes in their spectral output. And if you are fixated on the quality of color through transparency film, the white LEDs that were in play in the Pakon era were nowhere near even the barely-90 CRI that LEDs are hitting today.
The other thing is that the F235 system is highly diffused, like a diffuser enlarger. LED light sources are very concentrated and often very unforgiving of other than perfect negatives. If you have ever compared the output from a Nikon LS and a Flextight (or a Sprintscan), you know that diffused light sources don’t multiply the retouching workload later.
So how did LEDs get into the 135 and 335? They were later machines, and as slide shooting went off a cliff, there was little call to maximize color rendition for that application (and even the declining use of film made the slower speed of the 135 completely livable). LED turned out to be fine for negatives (note that the 135 series did not have native chrome capability until a later version of the software, which might be employing its own methods to correct for the light source).
Today you could probably retrofit the 235 with a direct-fit LED bulb (query what might happen if you put the scanner in “dim” mode, though) or pretty much any light source. The machine calibrates itself to the light source on startup.
But in general, the F235 Plus is a very fast platform that is easy to clean, does not twist your negatives around curves, and is more suitable to scanning several rolls, then correcting them all at once, then hitting the next set. The one downside is that it does have a fan, and so it is a little louder than a computer. Not 747 jet-engine loud, but still noticeable.
The only sad thing about the F235 Plus is that you might find that your life’s production of negatives zips right through, and after you scan all of the negatives in your family and from some of your friends, there are no more worlds left to scan, er, conquer.
I booted mine up after having it in the box for a while. I ran a few long rolls of film that I forgot about until after I moved. It’s magic. The machine is genius. But now what?
Odi et amo. Quare id faciam fortasse requiris — nescio, sed fieri sentio et excrucior! (“I hate and I love. Why do I do it, perhaps you ask. I do not know, but I feel it happen, and I am tormented.”)
The Imacon/Hasselblad Flextight series of scanners is a testament to the power of patents. Each is devilishly simple: negatives get sandwiched between a 400-series stainless sheet and a flexible magnetic sheet, bent around two big wheels, and run between a fluorescent tube at the bottom and a lens assembly and 3-line sensor CCD pointing down from the top. This is true of the cheapest Photo all the way up to the 949.
The variations in Flextights come in the larger models (not the Photo or 343). These have a zoom assembly on the lens that redeploys the CCD pixels to a smaller film width to give 5,700 dpi or more on 35mm film. Almost all Flextights, though, max out at 3200 dpi for 60mm-wide 120 and 220 film, as well as double strips of 35mm.
This article will address the operational differences between a model 343 and a Nikon Coolscan medium-format scanner. The 343 is the only reasonably affordable model that interfaces to FireWire and modern computer operating systems. Some aspects of the 343’s operation are the same as on the X5, the $25,000 champion.
Negative holders. The first fundamental difference between a Flextight and a Nikon is the design of the negative holders. Nikon’s FH-869 carrier uses clamp-down strips to grab the edges of the film and then uses a thumb-operated tightener to tension the film flat. This works most of the time, though it can be tricky to load. The Nikon carriers physically max out at a 6×18 strip, meaning that unless you want a crease in the middle of a frame, you can have a maximum of 3 6×6 frames, 2 6×9 frames, or 4 6×4.5 frames. The alternative (and extremely expensive) FH-869G glass holder sandwiches film between two sheets of glass. Because it does not need to grip the edges of the film, you can scan the entire width of the film. It is not as hard-limited at 6×18, but going longer is still a little risky.
The Flextight holders, by contrast, use magnetic pressure to hold the edges and then bend the entire assembly around a curve to totally flatten the negative at the one spot it is being scanned by the line CCD. Flextight holders generally do not have issues with super-long filmstrips because the ends don’t crunch up against anything (they do hang out of the carrier and/or scanner). Flextight holders, though, because they work best with support on all four sides of the film, are much more format-specific than Nikon holders are. Flextight holders don’t use glass, which also eliminates a dust surface. That said, you cannot get the full width of the film (with all the edge printing) on a Flextight because there would be nothing holding the film.
For most purposes, the Flextight is an easier choice for loading, though not cheap when you have more than the stock holders. The Nikon, though, excels for randomly sized bits of film and anything that is not a traditional 35mm or 120 frame.
Illumination. This is a big difference. The Nikon uses an IR-capable LED light source that can be used by Digital ICE to compute away most dust and scratches. This light source can be adjusted in intensity as necessary to penetrate dense negatives. The Flextight uses a cold-cathode tube (in the 343, it’s basically an off-the-shelf 6w daylight tube) whose intensity is not variable (scanning speed, however, is). The lack of Digital ICE is partially offset by the fact that the Flextight’s tube is a more diffuse light source that tends to cut down on the effects of dust and scratches.
Speed. The Nikon is much faster as a “proofing” machine, particularly with programs like Silverfast and Vuescan, which can preview and scan frames at many times the speed that a Flextight can. The Nikon (and similar scanners) use a positioning motor that addresses an 18cm area and a stepper motor to advance the film across the scanning head over a 9cm area. Programs like Silverfast hijack the positioning motor to do a quick scan of the whole 6×18 preview area. The Flextight, for its part, moves so slowly that its operation is barely detectable until it hits the end of a scan and ejects the negative holder.
Negative size. There is a huge convenience factor in scanning a 6×12 or 6×17 negative in one pass without stitching multiple scans together (the Nikon has a positioning motor that can address 6×18, but the stepper can only do 6×9). You can set the scanner and pay attention when it kicks the negative holder out at the end. That said, the 343 only handles a maximum of 5 frames of 35mm film in a strip, though with an aftermarket holder, you can do two strips at a time. The biggest holder available is 58×184mm, which normally does 3 6×4.5, 3 6×6, or 2 6×9 frames. The 6×4.5 capacity is a bit diminished relative to the Nikon with cameras that space frames more widely (like the Fuji GS and GA cameras).
Focusing. The big difference between a Nikon and a Flextight comes in the focusing. Because negatives can be all over the place on the Nikon, it needs to focus – and you have to arbitrarily pick your focus point. The 343 avoids this by having focusing fixed at the factory (the grown-up Flextights can focus to a degree). As long as your holders have the right thickness of metal, focusing works great and without the clack-clack-clack of Nikon focusing.
Optical Path. The Flextight – like the Pakon 235 and 335 – has the CCD pointing down, through a lens, at the film. This has the effect of eliminating a dust surface and also helps keep things clean. The Nikon (and most negative scanners) turns the light path 90 degrees via a 45-degree mirror that, depending on its care and feeding, might get dusty. Might.
Software. This is perhaps one of the weirdest comparisons imaginable. The Flexcolor software that comes with the 343 is about as basic an application as you can imagine. There are very few controls aside from original media type, curves, brightness/contrast, sharpening, and frame size. The most complicated thing about Flexcolor is understanding how to tell the scanner what holder you are using (picking the wrong one can lead to some strange noises).
For the Nikon, since Nikon Scan is deprecated, your choices are Silverfast (which is really powerful but really special in its user interface “innovation”) and Vuescan (virtually free but difficult to control and prone to blown-out, yet somehow specked, highlights). You can, of course, use Nikon Scan with Windows XP (and possibly 7). And speaking of XP, Silverfast 6.5 has wonderful medium-format frame detection with a Nikon scanner and Windows, not so much with Silverfast 8.
Equipped with the ability to recall multiple profiles and settings combinations, Silverfast seems to have much better capability to produce a usable scan without user intervention; Flexcolor seems to anticipate post-processing by the user, not least to correct the sharper-than-average dust and scratches. Oddly, Flexcolor defaults to unsharp masking at 250% – is this why the Flextight has such a reputation for crazy sharpness? Not really. The Flextight is no slouch set at 0, and the zero setting will give you a lot less film “grain.” More on this later.
Durability. There are three major aspects to durability. First, are the negative holders going to fall apart? I am fairly convinced at this point that Imacon, Nikon, and Polaroid designed their negative holders to be the weak link in what is otherwise bulletproof hardware. The Nikon FH-869 has little locking barbs that eventually wear out, and the Sprintscan 120 had a medium-format carrier with little locking pins that seemed destined for failure. Luckily, if that happens with the Nikon or Polaroid, you can just get a 3mm AN glass from Focal Point in Florida and use that instead of the top cover. Actually holds film flatter anyway. The Flextights, likewise, seem to have consumable carriers in the sense that the magnetic material will eventually experience fatigue and crack, particularly if handled roughly. Fortunately, there are Chinese replacements on Ebay that work perfectly for 1/3 the price of a Hasselblad replacement.
Second, is there a likely mechanical failure in the future? The CoolScan series of scanners has a phenomenally long service life. The Flextight looks almost too simple to fail.
Finally, what about bulbs? The LED light source in the LS-8000 has a lifespan measured in years of continuous use. The fluorescent tube(s) in a Flextight, provided that you can live with less color correction, cost a couple of bucks apiece and are easily installed by the user.
Relative performance. Over the next few weeks, I plan to run some hard comparative tests, but I’ll share some preliminaries. First, 2000dpi and up is where it becomes clear that zone-focusing a 58mm lens on a 6×12 camera leaves some things in focus and others not. It remains to be seen how much more useful detail is actually generated going to 3200dpi (Flextight) or 4000dpi (Nikon). As someone who scans primarily TMY in 120, I can observe that the Flextight does not interact with grain quite as obviously as the LS-8000 and that the Flextight is a little more graceful when it comes to dealing with dense highlights. The Nikon in general creates more “grain” (or whatever) there with Silverfast, and Vuescan is very difficult to control in that area – and often ends up being worse. I plan to do some more testing with overexposed TMY and some older, denser negatives on things like Verichrome Pan. One thing that is clear is the Flextight’s ability to deal with not-so-flat negatives and to resolve grain equally all the way across the frame.
One concrete comparison. Here is a comparison between a Flextight 343 and a Polaroid SprintScan 120 (this is an easy comparison because it does not require me to cut individual negatives to fit the Nikon carrier). The Polaroid here is being used with its AN glass carrier.
The test is a 320 pixel-high section of a Flextight scan and a 400 pixel-high section of a Polaroid scan (left side of the building). I equalized the visual contrast between the two originals (the Flextight was a bit contrastier out of the gate with its software’s default settings) and then scaled the Flextight image up to 400 and the Polaroid down to 320. N.B. that the Flexcolor software was set to zero sharpening, as was Silverfast for the Polaroid (well, inasmuch as you can really turn off sharpening in Silverfast).
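As a sanity check on the comparison method (my own arithmetic, not part of the original test), the two crop heights line up exactly with the two scan resolutions, so both crops cover the same physical slice of film:

```python
# Crop heights and nominal scan resolutions from the comparison above.
flextight_dpi, polaroid_dpi = 3200, 4000
flex_h, pol_h = 320, 400  # pixel heights of the two crops

# The heights are in the same ratio as the resolutions (400/320 = 4000/3200),
# so scaling one crop to the other's height compares the same film area 1:1.
scale = pol_h / flex_h             # 1.25

# Physical height of the slice on film (identical for both crops):
slice_in = flex_h / flextight_dpi  # 0.1 inch
print(f"scale factor {scale}, slice {slice_in * 25.4:.2f} mm on film")
```

In other words, scaling the Flextight crop up by 1.25x (or the Polaroid crop down by 0.8x) puts the two scanners on equal footing for the same 2.5mm strip of negative.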
You could make several observations here (yes, the negative is slightly rotated as between scanners, but straightening it would have messed up the resolution comparison).
First, at native resolution, the Flextight (upper right) is a tiny (and I mean tiny) bit sharper than the Polaroid (lower left).
Second, when scaled down (upper left), the Polaroid benefits from the automatic sharpening-on-resampling that Photoshop does. If you put an unsharp mask on the original Flextight image (upper right), you would get the same thing.
Third, when scaled up, it’s actually hard to see that the Flextight gives up anything to the higher-resolution scanner.
Finally, 4000dpi reacts really poorly with grain on Tri-X, almost as if the grain is at the Nyquist frequency for the scanner. Once the grain gets into the picture, scaling down makes it worse. By contrast, scaling up a 3200 dpi image does not result in even as much grain as a 4000dpi gets at its native resolution.
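The Nyquist point can be put in rough numbers (a back-of-envelope figure of mine, not a measured result): a scanner sampling at a given dpi can represent at most half that many line pairs per inch, so

```latex
f_N(4000\,\mathrm{dpi}) = \frac{4000}{2 \times 25.4\,\mathrm{mm/in}} \approx 79\ \mathrm{lp/mm},
\qquad
f_N(3200\,\mathrm{dpi}) = \frac{3200}{2 \times 25.4\,\mathrm{mm/in}} \approx 63\ \mathrm{lp/mm}
```

If the grain-clump detail of Tri-X sits somewhere near 79 lp/mm, the 4000dpi scan samples it right at the aliasing threshold, while the 3200dpi scan's lower cutoff simply never records it, which would be consistent with the behavior described above.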
All of this is generally consistent with more casual comparisons between the 343 and the Nikon. Unlike the comparison between flatbeds with their “fake” bazillion dpi resolution versus real resolutions that are much lower, higher-end dedicated film scanners actually track very close to their nominal resolutions (a Nikon LS medium format scanner has hit 3,900dpi in German tests, for example). So one logical conclusion might be that as between a 3,200 and 4,000 dpi scanner on a relatively coarse-grained film, that last 20% is essentially empty magnification.
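To put that last 20% in concrete terms, here is a sketch (my own arithmetic, assuming a nominal 56mm-wide 6×6 frame and a 300dpi print):

```python
# Pixels and print sizes from a nominal 56 mm 6x6 frame at the two
# scanners' resolutions (25.4 mm per inch, 300 dpi print output).

def scan_pixels(width_mm, dpi):
    """Pixels across for a given film width at a given scan resolution."""
    return round(width_mm / 25.4 * dpi)

def print_inches(pixels, print_dpi=300):
    """Print dimension those pixels support at a given output dpi."""
    return pixels / print_dpi

px_3200 = scan_pixels(56, 3200)  # ~7055 px
px_4000 = scan_pixels(56, 4000)  # ~8819 px

print(f"3200dpi: {px_3200} px -> {print_inches(px_3200):.1f} in at 300dpi")
print(f"4000dpi: {px_4000} px -> {print_inches(px_4000):.1f} in at 300dpi")
# The linear difference is 4000/3200 = 1.25x; 3200 is 80% of 4000.
```

Either way, a roughly 23-inch print from a 6×6 negative is already well past what most people ever make, which is the sense in which the extra resolution is empty magnification.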
My takeaways. This observer would suggest that the Flextight story of superiority is true but not for the “sharpness” / film-flatness reasons that always seem to be bandied around.
First, as against a glass negative carrier, there is zero, zip, zilch to suggest that the Flextight is markedly superior to higher-end dedicated negative scanners. Film only needs to be flat enough that all of the grain is within the depth of field of the scanner lens and that it not be buckled so much that there is visible distortion. Where a scanner autofocuses, it focuses on the middle of the negative, not the edges, which may sit lower or higher compared to the focus point. So when you get to the point that a scanner can focus on the center of the frame yet resolve film grain at the top and bottom edges as well as a Flextight does, you’re already where you need to be. Most glass carriers will achieve this, as will a standard carrier where the top latches have been replaced by a sheet of AN glass.
Second, scaling a Flextight scan up to 4000dpi tends to demonstrate that 4000dpi is not a quantum leap in resolution that bicubic interpolation cannot make up. Even 3200dpi on a negative will yield enormous prints (looking above, you will see that even the apparent size difference at 1:1 between 3200 and 4000dpi is tiny). No Flextight except the most recent X1 and X5 (both of which cost as much as cars) has better resolution for medium-format film than the 343 does.
Third, for situations where 3200dpi does not interact badly with film grain, the Flextight actually does better at original and upsized images than a 4000dpi scanner will. Grain aliasing is significantly reduced with T-Max 400. Progress.
Fourth, for oversized scans of 120 film, it is easier to get things flat in a Flextight carrier than a typical glass carrier for a dedicated negative scanner.
Finally, the Flextight is dead-quiet when running, which is a lot more than I can say for the rat-tat-tat-tat of stepper-motor driven units.
Scanners like the LS-8000 have their advantages too.
One, the Nikon scanners can scan negatives far faster, which means a big productivity increase when a big part of your scanning is previewing.
Two, the Nikon has Digital ICE, which can overcome film defects far more severe than the cold cathode light source of a Flextight can overcome. The flip side of this is that the LED light source in a Nikon is both permanent and harsher, which increases the contrast.
Third, glass carriers are far more flexible when it comes to scanning irregular, damaged, or strangely proportioned negatives. Flextight carriers have to be able to hold negatives on three sides to hold the negative flat – and that means that every deviant negative size requires a separate custom-made holder.
Finally, Flextights are not efficient scanners of mounted slides, and with an autofocus scanner, mounted 35mm slides are flat enough that the Flextight is not going to return a massive increase in flatness (more expensive Flextights can deliver far higher resolution, up to 8000dpi though – but for the cost of a higher-end Flextight, you could have a lot of actual drum scans done of your favorite work…).
Conclusion. It might be six of one, a half-dozen of the other. A Flextight is a great scanner for medium-format film, but it is not the most versatile (or, in its most affordable forms, the best supported) dedicated film scanner. And if you have an LS-8000 or -9000, a Flextight will not necessarily rock your world.
Sony a6300 with Leica 35/1.4 Summilux-M ASPH and LM-EA7 II
Sony a6300: love to hate you
There may not be any point, six months after the fact, to writing anything about the Sony a6300 compact camera. Well, maybe there is. Sony APS-C cameras are something that Fuji fans love to hate. And what’s not to hate from their perspective? Sony doesn’t make cameras that look like old rangefinders or SLRs, Sony lords it over Fuji with sensors that are slightly ahead (Fujifilm buys sensors from Sony, so it is not going to get the pathbreaking product immediately), Sony lenses are supposed to be terrible, and despite all this, Sony still outsells Fuji by an order of magnitude. How could this be?
— Sony strengths relative to Fuji in the mirrorless arena
The two possible answers are video and AF performance. Video on the a6300 is nothing short of phenomenal: 4K, 120fps HD, and just about every type of video gamma geekery that you could want. The Multi-Interface Shoe allows for some interesting snap-on microphone options, including XLR and wireless. The worst thing anyone has said about the a6300’s video is that it has rolling shutter problems, and the answer to that is really, so what? It’s an artifact of any mirrorless camera when used for video. And since Fuji sources its sensors from Sony, you’re not going to do any better. In fact, no one outside the Fujisphere considers Fuji’s video in any way significant.
The focusing speed and accuracy of a NEX/Alpha have always been somewhat incredible. Even back to the old NEX-5, Sony could make lenses that silently and smoothly achieve focus on faces. The a6300 with its kit lens posts some insanely fast times, and Sony’s claims about continuous focus tracking are largely true, at least as far as this author has been able to reproduce the right photographic, ahem, “needs.” In fast action, a camera with poor lenses but a responsive system does much better than a more ponderous camera/lens combination that misses the forest for the trees.
One thing that is clear from the dpreview.com tests is that with whatever mystery lenses the site used to test the X-Pro2 and A6300,* there is almost zero difference in image quality, anywhere on the frame.
*Never disclosing the lenses used is dpreview’s second-biggest failing. The first is retconning itself into the time before the internet and digital cameras existed. Sorry. That was a mistake. The first is allowing itself to be bought by Amazon. Then the second is retconning. Then the third is mystery lenses (apologies to Steve Martin).
The A6300 is fairly easy to handle. The grip section of the camera is substantial, and in general, it is easy to operate. No one, though, understands what the second command dial is doing on the top deck. It’s not comfortable to use with the camera at your eye. Controls are snappy and solid, as is the general build.
The A6300 has the latest OLED high-density electronic viewfinder, which features a 2-axis level (pitch and roll) and more information display possibilities than you want to admit you want. Battery life is helpfully provided by percentage (and if there is one nice thing about Sony batteries, it is that they are good communicators). Shooting does not black out in continuous mode. The EVF senses heat (infrared radiation); hence, its eye sensor does not react to glass-lensed glasses or sunglasses. If you don’t like the EVF, there is a big LCD on the back. Knock yourself out.
This is mostly unchanged since the a6000. The big thing is silent shooting, which uses a front and back electronic curtain (you can also choose electronic front or mechanical front). Silent shooting has two failure modes: first, in any situation with fast-moving objects, the progressive read of the sensor will cause typical “rolling shutter” artifacts. Second, dimmed LED lights (dimmed at the wall switch) flicker, even at full brightness, and can cause light banding in the finished frame (rolling shadow).
— Legacy lenses
One big note is that it is not particularly easy to engage viewfinder magnification on a shot-to-shot basis. You either have to learn to live with focus peaking or slow way down if you want to focus older SLR lenses, for example.
— Accessories and cutting corners
If you are accustomed to older NEX cameras, you will marvel at how Sony expects you to charge this camera with a USB connection to something else. The better solution is the Sony BC-TRW, which is a microscopic dual-voltage charger. It actually has four charging indicators (1-3 and off – meaning “fully charged.”). But yes, you still get a useless camera strap in the box.
An exit from the closed system
The problem with APS-C camera systems, whether Sony or Fuji makes them, is that they are closed, highly proprietary systems. You can’t stick a Fujinon on a Sony; you can’t get a Sony Zeiss lens onto an X-Pro2. Change systems? Get ready to pay the price when you sell your old system’s lenses.
There are two tired retorts:
- But the system has all the lenses you’ll ever need.
- Why don’t you just mount legacy lenses on an adapter?
The first argument is disposed of easily: what if you don’t like the one lens with your preferred angle of view and preferred maximum aperture? What if you don’t want to shell out for new lenses? What if you need the money for booze?
The second fails due to the kludge factor. Yes, it’s possible to mount other lenses on these bodies using cheap Chinese adapters and your old lenses. It’s also generally miserable. Both Fuji and Sony allow focus magnification, but Sony makes it difficult to use when a non-Sony lens is mounted. Both makes have focus peaking, but that’s not as definitive as you think. And although Fuji offers a phase-detect-driven split-image manual focusing function, it’s not that much fun and not that fast to use.
The “out” provided by Sony was to enable phase-detect autofocus with third-party lenses. This enabled things like the TechArt LM-EA7 II adapter, which in theory allows the autofocusing of any M mount lens (or lens that can be adapted to M, provided it physically fits the adapter). If this works, it would be a game-changer, since it would bypass the usual foibles of adapted lenses (focus difficulty and inaccuracy of focus peaking being two big ones). Is this true?
The good, the bad, and the ugly with the LM-EA7 II
The adapter comes in a nice, foam-padded box and includes a NEX/E-mount body cap and rear lens cap. This is a nice touch, since people who bought the a6300 with a kit lens will have neither.
The good news is that within the sweet spot for Leica lenses (35-50mm), the LM-EA7 works like a charm. The noise is a faint whirring, and the Sony phase-detect system fairly effortlessly computes and reaches the focus point (provided, of course, that your lens would ordinarily need 4.5mm or less of travel between infinity and minimum focusing distance).
- Focusing is through the lens, at shooting aperture. This forces the camera to automatically adjust for focus shift on fast lenses, again making the a6300 more accurate and repeatable than a Leica M body, which can only have accurate focus at one aperture.
- The camera plus adapter can focus on an off-center subject using (for example) wide AF. Face recognition works with this adapter, even though the adapter supports phase-detect only. This is significant because it means that the a6300 can more accurately focus fast Leica lenses on off-center subjects than a Leica body can.
- The camera plus adapter rarely misses, even off-center. In fact, the focus with things like the 50/1.5 ZM Sonnar (the modern version) is better than can be achieved with a rangefinder (naturally, due to focus shift).
- The adapter is serviceable with 75mm and longer lenses, provided that you pre-focus to somewhere at least near the expected focus point.
- The adapter, by virtue of its inbuilt extension, gives you slightly closer close focus with 35mm and shorter lenses.
- There is little or no color shift with adapted wides. Depends on the lens, but even the ZM Biogon 4.5 seemed to do ok.
- Flash works with the adapted lenses.
- The multi-shot vibration-reduction mode works (JPG only).
- The weight limit for the objective assembly (lens plus any adapters to M mount) is 750g. This is well beyond what you need for almost any Leica-mount lens and covers most DSLR prime lenses (if you go lens, to M adapter, to LM-EA7, to camera).
- The artistic effects, such as “Sad Clown with Single Tear Airbrushed onto Sweatshirt” still work with adapted lenses.
Now, what’s the catch? Well, there are seven.
- PDAF does not work for video, and the adapter does not do contrast-detect.
- Due to some clear limits in the Sony PDAF software (which is probably set up to look for big focusing changes), wide lenses (≤21mm) and lenses with maximum apertures of f/4 or smaller do not focus well. Granted, why do you need AF with these lenses?
- The motor part of the adapter hangs below the camera, making it hard to set the camera down. This is not entirely negative because it also makes a nice grip.
- Not all SLR mount to M mount adapters work. In general, you have to use the Leicaist versions because they taper enough to miss the motor unit. Konica AR is one of the couple that work with the adapter, and even then, it’s just the typical Chinese adapter with a relief milled into it to clear the autofocus adapter.
- Taking the camera’s aperture setting off f/2 or f/2.8 tends to cause overexposure.
- The system for selecting and recording lens-specific metadata is confusing, pointless, or possibly both. Your best bet may be to record everything as 15mm.
- It does take a toll on your battery.
Tips and tricks
- Disengaging AF. For some reason, there is a lot of internet kvetching about how it is difficult to disengage AF. This is probably based on old firmware that requires you to use Aperture Priority and turn to a small f/stop. It is actually very easy to disengage the AF temporarily. If you press and hold AE/AF-L on the a6300, the adapter will park at its “infinity” setting, the focus peaking will come on, and you can then focus manually. When you let go of the AE/AF-L button, the adapter goes back to normal AF operation (make sure the lens is set to infinity before you do this!).
- Quickly overriding face-detect or wide area AF. If you have the camera set to wide AF, and you press the center of the back wheel, it will go into spot AF, center area only. It will also automatically focus in that zone. There are many possible green boxes, so it’s not like spot AF – but it suffices in most situations where you need an arbitrary focus point.
- Minimum focusing distance. With a travel of 4.5mm, and the lens set to infinity, the adapter does not have extension enough to reach minimum focusing distance with any lens over 50mm. The slight exception appears to be some zooms, since their designs often obviate a direct relationship between focal length and extension while focusing. Minimum focusing distance, though, is all in your mind with the A6300, whose narrower angle of view causes you to back up to get the same field as with an FX/35mm camera.
- Prefocusing longer lenses. With long lenses the quickest and easiest way to get to a range where you can achieve focus is to press AE/AF-L (which parks the lens), turn focus peaking on, and focus to a point where focus is just behind the intended subject. Once you are there, let go of the AE/AF-L button to reactivate AF. Because you focused behind the subject, and because the adapter extends (thereby moving the focus point closer to the camera), you have now put your lens exactly in the right place. Needless to say, the longer the lens, the less frontward subject movement this technique will tolerate.
- Marking your close-focus point with long lenses. If you habitually shoot at 1-1.5m, find the right “parked” focus distance (see above) and then mark it on the focusing ring with a dot of colored paint.
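The 4.5mm travel figure running through these tips can be sanity-checked with thin-lens arithmetic. This is a rough sketch (the function name and the 1-metre test distance are my own, and real lenses are not thin lenses), not a model of the adapter’s actual mechanics:

```python
# Thin-lens estimate of how much extension (beyond the infinity
# position) a lens needs to focus at a given subject distance.
# The LM-EA7 II offers roughly 4.5mm of travel; lenses needing
# more must be pre-focused closer on their own helicoid first.

def extension_mm(focal_mm: float, subject_mm: float) -> float:
    """Extension past infinity: e = f^2 / (u - f) for a thin lens."""
    return focal_mm ** 2 / (subject_mm - focal_mm)

ADAPTER_TRAVEL_MM = 4.5  # approximate travel of the adapter

for f in (35, 50, 75, 90):
    e = extension_mm(f, 1000.0)  # subject at 1 metre
    verdict = "OK" if e <= ADAPTER_TRAVEL_MM else "pre-focus needed"
    print(f"{f}mm at 1m: {e:.1f}mm of extension ({verdict})")
```

This matches the behavior described above: a 50mm lens needs only ~2.6mm of extension to reach 1m, while 75mm and 90mm lenses blow past the adapter’s travel, which is why they need pre-focusing.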
Yes. In general the performance of this adapter depends on two major variables: lens weight and maximum aperture. The former degrades focusing speed; the latter, certainty of locked focus. As noted above, Hexanons were tested due to the availability of a suitable SLR adapter (plus I had a bunch sitting around).
- 35mm f/1.4 Summilux-ASPH M (pre FLE)
- 40mm f/2 M-Rokkor
- 50mm f/1.1 MS-Sonnetar
- 50mm f/1.5 ZM C-Sonnar
- 50mm f/1.5 Jena Sonnar (prewar)
- 50mm f/2.0 M-Hexanon
- 50mm f/2.4L Hexanon
- 50mm f/2.8 Jena Sonnar (with Amedeo dual-mount Contax to Leica adapter)
- 50mm f/2 Jena Sonnar collapsible prewar
- 50mm f/2 Carl Zeiss (Opton) Sonnar, postwar
- 75mm f/1.4 Summilux-M (prefocus)
- 90mm f/2.8 M-Hexanon (prefocus)
- 10.5cm f/2.5 PC Nikkor (LTM)
- 40mm f/2 Hexanon (AR) (Konica mount via Leicaist adapter)
- 57mm f/1.2 Hexanon AR
- 35-70mm f/3.5-4.5 Zoom-Hexanon AR
- 85mm f/1.8 Hexanon AR
Kinda. For wide-angle, medium aperture lenses the adapter does not do so well because Sony’s phase-detect AF isn’t set up to split hairs.
- 24mm f/2.8 Hexanon AR
No? Here, the details are too small and/or the depth of field too great to get an easy lock (or sometimes, any lock) with the A6300 [edit note: this appears to be due to the camera’s having difficulty in deciding where the focus point should be – even in its “spot” modes, the a6300 is picking a focus point]. The behavior on these is more deliberate focusing, almost as if the camera had switched into contrast-detect.
- 18mm f/4 ZM Distagon [too wide, too small an aperture]
- 21mm f/4.5 ZM Biogon [too wide, too small an aperture]
- 21-35mm f/3.4-4.0 M-Hexanon Dual [too wide, too small an aperture]
- 50mm f/1.5 Carl Zeiss (Opton) Sonnar [aberrations that Sony AF can’t understand?]
The Sony A6300 is a pretty formidable camera for video and not a slouch for stills, provided either that your style does not demand ultra-high performance from kit lenses or that you are willing to invest in better Sony or Sony/Zeiss glass.
The LM-EA7II may never be good for sports or high-intensity moving work, but it provides some fun with old lenses, or as much of it as you can take! It’s actually a bit irritating that I did not have an A7-series camera on hand to try it.
This simple feeling… is beyond V’ger’s comprehension. No meaning… no hope… and, Jim, no answers. It’s asking questions: ‘Is my kit zoom lens good enough?’
In reality, we actually know little about zoom lenses except that the best ones (from a numerical standpoint) are very large, heavy, and expensive. Once you move into the enthusiast and kit versions, the question of whether or not they are good (or, more to the point, useful) is complex, subjective, and somehow optimistic.
The struggle of zoom lenses, since basically forever, has been designing a multifocal, focus-maintaining lens that is at least as good as any fixed lens of the focal lengths covered, without being massively heavy or unimaginably expensive. This struggle is driven by four constraints: design, manufacturing, physics, and software.
- Design. Fixed focal length lenses have an inherent advantage because they are always going to deliver high performance at low prices. Such lenses require computations at one focal length, have fewer parts, need less assembly labor, and require less glass. A zoom has to be good at a theoretically unlimited number of focal lengths between two extremes and has to maintain focus as its focal length changes.
- Manufacturing. The difference between a good lens and a great lens can be 0.01mm. Zoom lenses have numerous glass and precision molded plastic elements that have to work in formation at an infinite number of focal lengths between two extremes (say 14mm and 24mm). It is more difficult to make larger-diameter glass elements with great precision, and the more mechanical linkages exist in a lens (for example, ones that maintain focus through focal length changes), the more tolerances add up. Sometimes low moving mass and “slop” are built in to make lenses focus faster.
- Physics. More glass means more flare and dispersion, and zoom lenses have tons of glass. Flare can be mostly tamed via multicoating, but even so, dispersion adds up with the element counts. A 13-element lens with modern multicoating (losing ~1% per air-glass surface) can have a total loss of 25% of all the light coming into it.
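The light-loss arithmetic above compounds per air-glass surface. Here is a minimal sketch (the function is my own; ~1% per coated surface follows the text, and ~5% per uncoated surface is a commonly cited ballpark, not a measured figure):

```python
# Transmission through a lens, assuming a fixed fractional loss at
# each air-glass surface. Cementing elements together removes two
# air-glass surfaces per cemented interface, which is why old
# designs leaned so hard on cemented groups.

def transmission(elements: int, loss_per_surface: float = 0.01,
                 cemented_interfaces: int = 0) -> float:
    surfaces = 2 * elements - 2 * cemented_interfaces
    return (1.0 - loss_per_surface) ** surfaces

# 13-element multicoated zoom, all air-spaced (26 surfaces):
print(f"{1 - transmission(13):.0%} of the light lost")

# Uncoated 7-element/3-group Sonnar (6 surfaces at ~5% each):
print(f"{1 - transmission(7, 0.05, 4):.0%} of the light lost")
```

Note how close the two come out: compounding small coated losses over many surfaces costs about as much light as a handful of uncoated surfaces, which is the arithmetic behind the Sonnar story later in this piece.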
- Software. This enters the picture in two ways: focus correction and image correction. On DSLRs and some mirrorless cameras, the AF Fine Tune function helps correct focus errors that occur with particular lenses and phase-detect autofocus. The difficulty with zooms is that the nature of focusing errors can change with each focal length, and dialing in a correction for one focal length for a lens can greatly improve images there but degrade images shot at other focal lengths. The second limitation arises in software correction of lens aberrations (distortion, vignetting/falloff, and sharpness). One cheat (or innovation, depending on how you look at it) is to let the camera make corrections that the lens design itself does not permit. This provides more freedom to design smaller, lighter, and cheaper lenses. But you can’t really reconstruct data that isn’t there – or bend it infinitely.
Why many enthusiasts have been suspicious of zooms
It mainly seems to be a thing with people 40 years old and up, who remember the bad old days. As to the history, in the 1970s, optical correction was not what it is today, and zooms got a really bad rap because things like the 43–86mm Nikkor were convenient but not optical superstars. The original zooms were two-touch, which allowed the easy setting of focal length and focus with two separate rings. If a lens mostly held focus as you changed focal length, it was a true “zoom;” if not, it was a varifocal (Nikon is, and long has been, an offender in calling varifocal lenses “zooms”). Zooms of that era were difficult to design, and it was a time when lens design was transitioning to more computerized methods. They worked for a lot of purposes, but given the natural male inclination to over-spec and compete with equipment, they were not taken very seriously.
In the late 1970s, manufacturers went to one-touch, where you could adjust focus and focal length with the same grip. The temptation was to conclude that you could just re-zoom, re-compose, and fire away, but the reality was that focus drift followed focal length changes, and if you didn’t bother to refocus, you could get slightly soft pictures. One-touch zooms also suffered from zoom creep: eventually, as the lens loosened up, pointing the camera up or down would cause the zoom mechanism (governed by front-back movement of the ring) to move on its own. This too, did not help perceptions, though there are some very good zooms of the 1970s and 1980s, including some third-party offerings like the Vivitar Series 1.
SLR manufacturers rediscovered the two-touch in the 1990s, when it became an advantageous design for autofocus lenses (an AF motor could turn a focusing ring but not also a zoom ring). And that is when they all backslid into selling varifocal lenses as “zooms;” the assumption being that the camera’s AF would correct the focus anyway. Although questionable from a marketing standpoint, autofocus helped assure that the new “zooms” would be in focus at the moment of exposure.
The rise and fall of zooms (1999-2005)
If zooms had a heyday, it was from the late 1990s to the mid-2000s. Several things came together to make this happen:
- Vis-a-vis prime lenses, zooms were more heavily telecentric. In simple terms, their design created the straight-on light rays that digital sensors like.
- Advances in lens coating took away some of the performance penalties of using a large number of lens elements for image correction.
- Prime lenses were getting little in the way of updates. This meant that the best ones were standing still, and others did not work as well as zooms with digital. Consider that it took Nikon 50 years to update the formula of its 50/1.4.
The prototypical lens of this era was the AF-S 17-35mm f/2.8 Nikkor, which was designed for the D1 cameras but was usable with contemporary film cameras too. This lens outperformed most primes within its focal length range and was solid, fast-focusing, and very popular.
But just as every pendulum swings, the 2010s to the present are where the optical (but not necessarily total) performance level of cheap zooms took a little bit of a dive.
- In a market with softening demand, maintaining competitive MSRPs for entry-level cameras and lenses required simpler and cheaper designs.
- The processing power of digital cameras increased to the point where it became possible to correct for distortion, light falloff, and sharpness in-camera.
- Increased emphasis on video, especially from mirrorless, demanded lenses that could focus quietly and continuously, driving toward lower moving mass.
- The move by the market toward camera phones meant that the “burden” associated with separate cameras had to be minimized.
In other words, the ethic was (and is) using technology to make cheaper, lighter, and easier-to-make lenses acceptable, not so much to make good lenses better.
At the high-end of the lens lines, updated primes also began to exert pressure on the more expensive zoom lenses, especially where trends push toward small and light.
What is the quick and dirty way to identify “good” zooms?
When shopping for a zoom lens with high potential, this is the general hierarchy to predict (with some but not total certainty), where a lens fits on the performance curve.
— By effective aperture
- Constant f/2.8 aperture – this generally means a pro-level lens. It also means big, heavy, and expensive.
- Constant f/4 aperture – this is the high-end amateur or lower-end pro zoom. It takes a lot of engineering to keep the aperture constant on a zoom, and this type of lens generally has the best balance of performance, weight, and cost. Canon and Nikon both make this style of lens. These are not cheap, but they are much easier to live with than monster pro zooms.
- Variable f/2.8-4 or 3.5-4.5 (1 stop aperture shift) – this type of lens generally has been optimized for compactness over speed. F/2.8 is really a bragging right; it’s only about half a stop faster than f/3.5.
- Variable f/3.5-5.6 (or f/6.8) (1.5+ stops) – when a manufacturer lets the maximum aperture float this much, it is generally an indicator that you should not be expecting world-beating performance. But you will get a lens that will do well in most circumstances and not break your back or budget in the process. These are actually the lenses that are fun to shoot with.
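The stop differences quoted in this hierarchy reduce to simple math: a stop is a factor of two in light, which is a factor of √2 in f-number. A quick sketch (the function name is my own):

```python
import math

# Difference in stops between two f-numbers. Light gathered scales
# with the inverse square of the f-number, so each stop is a factor
# of sqrt(2) in f-number.

def stop_difference(f_slow: float, f_fast: float) -> float:
    return 2.0 * math.log2(f_slow / f_fast)

print(f"f/3.5 vs f/2.8: {stop_difference(3.5, 2.8):.2f} stops")
print(f"f/5.6 vs f/3.5: {stop_difference(5.6, 3.5):.2f} stops")
```

This is why f/2.8 over f/3.5 is described above as a bragging right: the gap is only about two-thirds of a stop.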
— By zoom range
This may sound deceptively simple, and maybe it oversimplifies, but it is a fairly good bet that where a modern lens has a zoom range of greater than 2.5-3x from short to long, it is probably a “convenience” zoom rather than one oriented toward absolute performance.
Can “not-good” be good?
Yes. Despite having performance a notch or two below pro lenses, kit zooms can be quite good within some limits. First, in an era where photos are overwhelmingly likely to be shared on social media, and not printed, kit zooms are actually complete overkill. In fact, anything beyond an 8mp iPhone 6 might be – the same way that a lot of expensive pro equipment in the 1970s was used by amateurs to generate 3.5×5 inch prints.
Second, even if you print, you only need about 6mp of real-world performance out of a lens to print a nice 8×10, which again is bigger than most prints made today. “Real world performance” means the type of system resolution that DxOMark measures (lens performance plus body performance, moderated through focus accuracy). That might be a 12mp body and a midrange lens.
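The 6mp figure squares with simple print arithmetic, assuming the often-cited 250-300 dpi for a print viewed at normal distance (the function is my own sketch):

```python
# Megapixels needed for a print of a given size and pixel density.

def print_megapixels(width_in: float, height_in: float, dpi: int) -> float:
    return (width_in * dpi) * (height_in * dpi) / 1e6

print(f"8x10 at 300 dpi: {print_megapixels(8, 10, 300):.1f} MP")
print(f"8x10 at 250 dpi: {print_megapixels(8, 10, 250):.1f} MP")
```

An 8×10 needs between 5.0 and 7.2 megapixels across that dpi range, which brackets the ~6mp of real-world system resolution claimed above.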
Third, you will still get really good results – though optimum performance may come ~2-3 stops down from wide open (f/8 on an f/3.5 lens) as opposed to one stop (f/4 on an f/2.8 lens). There is an old adage, “f/8 and be there,” but in reality, once a lens is stopped down even to f/6.3 (let alone f/8), even a plastic meniscus lens will have performance approaching an expensive coated lens. Like vampire tears, the pro-lens advantage evaporates in bright sunlight.
Finally, especially for mirrorless cameras, some lenses have video performance that vastly outweighs whatever perceived deficiencies they have for other purposes. For example, you might conclude that the 16-50mm f/3.5-5.6 power zoom lens that comes with a Sony a6300 is terrible. It’s not terrible for still pictures (in no small part because it is one of the fastest-focusing lenses ever invented), but it really shines in video, where it can silently and reliably track moving subjects without introducing noise. And it’s also tiny.
To wrap up: the performance, utility, and fun factor of zoom lenses is actually pretty subjective. Try as many as you can. Pick the one that you like the best, so long as it does what you need it to do.
The 28mm M-Hexanon, like its focal length, occupies a strange space that is neither here nor there. I have never had good luck with 28mm lenses, if only because the angle is a little wide to be comfortable for close shots of people and a little narrow for some of the landscapes I shoot.
Only on the verge of selling mine (for lack of use since way back when I had an M8) did I shoot a bunch of tests with an M typ 240. This particular lens had been recollimated to be at exactly Leica spec (most lenses made before the M8 were not set up to hit the center of a flat sensor).
This piece will not editorialize much but instead show it like it is. Which is quite good, far better than I had remembered.
First, the obligatory “how sharp at a meter” exercise. This is f/2.8.
Next: does it shoot good pictures of children? Yes.
And then: how is the bokeh? Strangely, it’s actually really good, especially for a wide lens. Here is the sequence f/2.8, 4, 5.6, 8.
Sunstars? Got ’em too. Here is f/2.8-8 (clockwise):
Gross resolving power (again, f/2.8-8):
And now, we laugh at your Elmarit-M!
Flare resistance, same range:
Another test; can’t remember why. Seemed like a good idea at the time.
The strange thing about infrared photography is that it represents a very small piece of photography in general, but there is apparently no space in photography so small that it can’t support some form of snobbery. And in infrared photography, it is the idea that there is “near” infrared versus “true” infrared. Not only does this convey a false sense of exclusiveness to people who shoot 850nm and up, it’s also not accurate.
When you shoot a normal camera in daylight, there is a small amount of infrared contamination – it’s about 10 stops less than daylight, coming in at about 1/1024, or a tenth of a percent. Tiny, even on something with big infrared contamination like a Leica M8. So any particular shot is overwhelmingly lit by visible light.
A dark red filter (RG630, 091, 8x, #29, etc.) flips the equation: the nominal blockage of visible light is 3 stops, or 87.5%. The reality is that most skylight scenes are predominantly blue, and this filter cuts a lot more than three stops. Even if you are shooting objects that are middle grey, these filters reject 87.5% of all visible light – meaning that when you shoot them on a camera with no other IR rejection, deep red and infrared light make up the overwhelming share of the light reaching the sensor. The false color you are obtaining is infrared light that is still being blocked in part by the green and blue squares of the Bayer filter on the sensor.
The case for the “near” classification is even weaker with the 695-720nm filter (RG695, 092, R720). First, consider that wavelength ratings on filters are at the 50% mark. So a 720nm filter really starts passing 100% of its light around 750nm. On a short exposure, which you will see is commensurate with a normal visible-light exposure, infrared light is providing almost all of the illumination.
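The stop arithmetic running through this section reduces to powers of two, since each stop halves the light. A minimal sketch (the function name is mine):

```python
# Fraction of light remaining after a given number of stops of blockage.

def stops_to_fraction(stops: float) -> float:
    return 2.0 ** -stops

# ~10 stops below daylight: IR contamination on an unconverted camera
print(f"IR contamination: {stops_to_fraction(10):.5f} (about 0.1%)")

# An 8x (3-stop) red filter passes 1/8 of metered visible light
passed = stops_to_fraction(3)
print(f"3-stop filter: passes {passed:.3f}, blocks {1 - passed:.1%}")
```

Ten stops down is 1/1024 of the light; three stops of blockage leaves 1/8 of the visible spectrum while passing infrared freely, which is why the balance tips so decisively toward IR.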
Going the other way, “true” infrared is not that advantageous – and may not be something to commit to in an IR conversion. First, even though the Bayer filter does not affect 830nm+ light, the decoding algorithm in your camera still compensates for it. So if you dump the RAW file into DCRAW, what comes out still has something of a checkerboard pattern. Second, the false color effects generated by mid-band IR actually allow for more contrast control because there are multiple channels of useful information (and with 850nm+, you really need this, since everything likes to come out bright white in sunlight, especially around dusk). Eliminating this effect means that you have less ability to rebalance the tones in a scene.
None of this is to say that it’s good to meet one form of snobbery with another technical one. But let’s just keep the infrared world big, okay?
# # # # #
Above: Zeiss Jena 5cm f/1.5 Sonnar (prewar; 1937 example of the 1932 design) on a Leica M typ 240 with an Amedeo dedicated 50mm adapter. This particular lens is almost 80 years old.
1. The story
The derivation of the trade name “Sonnar” (which may have less to do with Sonne than being a portmanteau of Sontheim am Neckar) reminds one of the way that Mr. Sparkle is a joint venture of the Matsumura Fishworks and the Tamaribuchi Heavy Manufacturing Concern. Be this as it may, the Sonnar had but one goal in life: crush Leitz’s fast lenses in an era where ISO 12 film was the norm. And that it amply did. Even today, the performance of this uncoated lens is impressive.
When the Sonnar arrived in 1931 (f/2.0) and 1932 (f/1.5), the Tessar (or Elmar) was the gold standard in normal lenses: a well-corrected triplet that, in an era lacking anti-reflective coatings, sneaked in a little more correction by cementing two pieces of glass together. When it came time to exceed f/2.8, though, the real competition began:
- In 1889, Paul Rudolph, working for Carl Zeiss, determined that the best balance of contrast, correction, and cost was a three-element lens called an anastigmat (trade name: Protar).
- In 1895, Rudolph invented the Planar, which was a highly-corrected symmetrical lens. It was shelved soon thereafter, no doubt on account of the low contrast that occurs with many air-to-glass surfaces.
- In 1902, Zeiss released the Tessar, which provided more correction than an anastigmat (by adding a fourth element glued to the third) without increasing the number of air-glass surfaces. The Tessar was technically inferior to the Planar, but it did not have the two extra air-glass surfaces (each robbing 10% of the light, compounded).
- In 1925, Max Berek modified the Leitz Elmax, which had a 1-1-3 (cemented) arrangement into the Elmar, which bore a heavy resemblance to the Tessar, allowing for a good 35mm-format lens with fewer elements and less assembly labor.
In a parallel universe (but still orbiting around Zeiss)
- In 1916, an American (Charles Minor) started adding elements to the triplet, but just in the front. The result was the Gundlach Anastigmat, which had a blazingly fast f/1.9 aperture. The contemporary ads show that this was actually a cine lens.
- In 1922, Ludwig Bertele, working for Ernemann (of Ermanox fame) continued elaborating this into the Ernostar, which became one of the first plate lenses to hit f/1.8 (in 1924).
Scan used by permission of Peter Naylor.
- In 1926, Zeiss bought Ernemann and acquired Bertele in the deal.
- In 1931, Bertele made the first f/2 Sonnar, which was a new lens with an old name. It was for 35mm format and had a 1-3-2 arrangement, with the second and third groups themselves cemented assemblies.
- In 1932, he made the f/1.5 version, which added an extra element to the rear group.
- In 1936, caught off-balance, Leica licensed the Xenon, a symmetrical Double-Gauss design from Schneider, licensed in turn from Taylor-Hobson in England (the Series 0), which in turn had been cribbed from the Planar.
- In 1944-1945, the Zeiss plants were bombed back to the stone age.
- In 1949, the Xenon was updated with lens coatings and became the Summarit.
- In 1950, the Zeiss-Opton Sonnars came out with new computations.
The circle was now complete: the entire high-speed lens space was dominated by Zeiss designs and would continue to be – for pretty much all time. When you stop and think about it, until the advent of things like the 50/1.4G Nikkor, the history of high-speed lenses had been nearly nine decades of Sonnar and Planar clones.
Why did the Sonnar do so well? It’s not so complicated. It all boiled down to the number of air-to-glass interfaces. The classic triplet (the anastigmat) represented the best balance between correction, contrast, and cost. But adding more elements (to get more correction) meant more air-glass interfaces. And that meant less contrast and more flare. Zeiss increased the correction by cementing additional elements together to make a total of three groups. Leica could not do this because it did not have the intellectual property rights to do so. During WWII, Zeiss dabbled in coating its super-speed lenses, but it was not even really necessary given the Sonnar’s high transmission.
2. Using one today
These days, the Contax rangefinder is almost dead, 35mm film photography has gone all “Tony-Bennett-in-the-late 1990s,” and so the only place you’ll likely be using one of these is on a Leica body. Fortunately, it’s pretty easy to do. You just need the appropriate adapter. These are not particularly expensive for APS-C (although they do incorporate focusing helicoids); they are more expensive for Leica cameras because they need a mechanism to translate the movement of a 52.3mm lens to a camera whose rangefinder mechanism wants a 51.6mm normal lens (how two German companies known for their precision could get so sloppy about what constituted a “50mm” lens is baffling – but being a big-name German optical company means never having to say you’re sorry….).
By far, the best adapters for Leicas are made by Amedeo Muscelli, and of those, the best is the dedicated adapter for Contax 50mm rangefinder lenses to Leica M. This is not the usual adapter with a reproduction of a Contax helicoid and focusing scale; rather, it combines with the lens to make a unit that looks a bit like an old Elmar (allowing, of course, for the streamlined – dare we say phallic – shape of a Sonnar). The dedicated adapter focuses in the same direction as a Leica, at almost the same rate of distance change per unit of turn, and it has a lever, which can be critical if you are using a collapsible Zeiss lens (since with a traditional adapter, you are grasping the lens barrel to focus – something you can’t do with a collapsible lens). When your lens is dialed in, this adapter focuses amazingly accurately right down to 0.6m – a lot closer than any Contax did.
And how do you dial one in? If your lens is front-focusing, the simple answer is to remove the lens cell from the Contax barrel and unscrew the rear group slightly. It is never more than 1/4 turn, and you can maintain the setting by wrapping the threads of the rear group in Teflon tape and screwing it back in. Back it out about 1mm (circumferentially) at a time, and check the focus on near and far objects. Do note that where a Sonnar has a lot of focus shift, you’re going to have to choose whether to
- Have the lens front focus at f/1.5, reach focus at f/2.8-4, and hit focus at f/5.6 and smaller
- Have the lens focus dead-on at f/1.5, miss at f/2-4, and become usable again at f/5.6 and on.
3. Performance
The first observation is that finding a prewar f/1.5 Sonnar that is not totally trashed is not particularly easy. Fortunately, at least cleaning marks are not an issue on uncoated lenses unless someone used Soft Scrub as an optical cleaner. Which does happen from time to time.
The second is that in the central part of the frame, this lens is very, very sharp. It has decent performance at f/1.5 if you optimize for that aperture, loses precise focus from f/2.8-f/4, and comes roaring back at f/6.3. If you keep the original collimation (or an approximation of it), you get really sharp pictures around f/2.4, getting better through f/8.
The third is that the coatings on postwar Sonnars are not moving the ball much in terms of performance. Because this was the last fast Sonnar I obtained, it’s easier to compare this to the 1961 Carl Zeiss version. The 1937 model performs similarly in most ways. It is very slightly softer, with contrast that is almost at the level of a 1977 Jupiter-3.
Flare is only slightly improved by coatings, and they do not resolve the “rainbow circle” flare that afflicts every Sonnar (even multicoated Sonnetars) when a point light source is just out-of-frame. The one unique failure mode is strong side lighting (from the looks of it, between 75 and 90 degrees to the lens axis), which can cause a veil across the entire surface. This also happens with postwar Sonnars and copies, just not quite to the same degree.
Overall performance is strikingly close to the postwar version, if you allow for slightly improved spherical aberration on the older lens. The postwar version is a tiny bit sharper, but it seems clear that this comes at the expense of bokeh, which goes from smooth disks to ringed disks. If you care about that stuff.
4. Roy Batty
The f/1.5 Sonnar was the proverbial candle that burned twice as bright, and by 1962 it was essentially extinct. The “twice as bright” part is doubly applicable to the 1960-1962 Carl Zeiss version. That it was so widely copied in the postwar era is puzzling. Granted, German patents were handed over to the Japanese, but in terms of sheer performance with coatings, there were already better lenses to copy (like the Xenon). Canon, Nikon, and Zunow all made their own versions. The Soviets made one too. Perhaps there was a “prestige” element to the Contax that was desirable to copy (though you would not have the all-important brand name). Or perhaps there was something about the mechanical design of a 3-group lens such that machining the extra parts for six groups cost more than triple-cementing two groups. The world may never know. The fetishization of the Sonnar did not really get started until the mid-2000s, and by then, it was based more on imperfection and “look” than a perception that it was actually better.
The prewar f/1.5 Sonnar is a worthy lens, though its relative scarcity does not exactly make it a value leader compared to postwar variants. As with any 50mm Sonnar, as long as you take care to control the placement of light sources, it can be another creative tool, if not a broader-use lens.