This is an article originally written in 2001, with a lot of updates.
How did these things get started?
The former Fujisawa-Shoukai had quite a bit of pull over Konica. Recall that by 1992, Konica had made what was seen as its last serious film camera, the Hexar AF, with its legendary 35mm f/2 lens. F-S, as we will call it here, commissioned in 1996 a run of Hexar lenses in Leica thread mount (LTM). This was long before what people in the U.S. called a “rangefinder renaissance;” in fact, at the time, very little in LTM was being produced in Japan, with the exception of the Avenon/Kobalux 21mm and 28mm lenses.
The first product of this program was the 35mm f/2L Hexanon, which looked like this:
This lens is simply a clone of the Hexar AF lens, right down to having the same filter size. The coatings look identical, which is not a surprise. Consistent with some other contemporaneous LTM products, it did not have a focusing tab. On close inspection, the scalloped focusing ring looks like that on a Canon 35mm f/2 rangefinder lens, or more contemporaneously, the 21mm Avenon/Kobalux lens. The chrome finishing on an alloy body is reminiscent of modern-day ZM lenses. None of this, of course, will disabuse you of the notion that the Japanese lens production industry revolves around common suppliers. This lens shipped with a black flared lens hood (no vents) and a bright sandblasted chrome “Hexanon” lens cap that fit over the hood.
F-S would then go on to commission the 50/2.4L (collapsible) and 60/1.2L Hexanon lenses. The latter is famously expensive now; I have an email from F-S where it was 178,000 yen (about $1,400). The 50/2.4 will get its own article here.
In 2000, around the time that Avenon was re-releasing its 21mm and 28mm lenses as “millennium” models, F-S had another run of the 35/2 made. These were at least superficially different from the silver ones:
- At the time, black paint was all the rage, so the lens was executed in gloss black enamel and brass. The enamel in the engravings is almost exactly the Leica color scheme.
- The filter size decreased to 43mm, the aperture ring moved back, and the focusing ring thinned out to give the impression of “compactness” and to justify the “ultra compact” (UC) designation that was historic to some Konica SLR lenses.
- The focusing mechanism changed to a tab (which helped justify the thinner focusing ring and lighter action).
- The coatings changed to a purplish red to help support the notion of “ultra-coating.” As you might know, multicoating can be customized for color.
The close-focus distance (what would be the third leg of a UC designation), the focusing rate of the helicoid (0.9m to ∞ in about 1/4 turn), and the overall length did not change. The new lens was priced at 144,000 yen, which in dollars would have put it at just under the cost of a clean used 35/2 Summicron v.4 (at the time, these ran from about $700-1,200) and about half of what a Leica 35mm Summicron-M ASPH would cost.
Handling versus Leica lenses
Since both of these are optically identical, it might make more sense to discuss the ways in which these are similar to, or different from, the vaunted Summicron v4 King of Bokeh License to Print Money®. They are both like the Leica version but in different ways.
The UC has the same smooth tab-based focusing as the Summicron. It is very smooth and fluid. That said, the aperture ring is very “frictiony.”
The original L has a focusing feel a lot like a Canon RF lens, owing to the similar focusing ring, which has more drag and no tab. The aperture ring, however, has the same “ball-bearing-detent” feel as the Leica.
The overall length of all three lenses is similar, though as noted above, there is something of an illusion that the Leica and UC are smaller than the L.
The Konica lens, like the Hexar lens it was based on, is a clone of the 3.5cm f/1.8 Nikkor rangefinder lens, but for all practical purposes, the Hexanon is the same lens as the Summicron v4. As you can see, there is a very smooth falloff from center to edge wide open and pretty much eye-burning sharpness at f/5.6.
Whoah. That looks familiar! Below is the Leica 35/2 v4 as shown in Puts, Leica M-Lenses, their soul and secrets (official Leica publication). Except the Summicron’s optimum aperture is a stop slower.
On interchangeable-lens bodies, all three lenses have the same focus shift behavior, requiring a slight intentional back-focus at f/2 and front focus up to f/5.6. It’s not like on a 50 Sonnar, but it’s there.
The original chrome version is a lovely lens and a nice match for chrome Leicas, at about 1/3 the price of a chrome Summicron v4 (yes, they exist…). If you like Canon lenses, you’ll be right at home with it. On the other hand, the UC version is smooth and sexy but getting to be as expensive as a 35/2 Summicron ASPH, which is actually a better lens.
People understand why tilt lenses exist – making super-expensive Canon DSLRs produce pictures that look like they were taken with a toy camera (or making the subjects themselves look like toys). No one knows, though, why shift lenses were once a thing. It’s all a matter of perspective.
The truth, from a certain point of view
Photography always has (and always will) present this problem: needing to fit a large object into a frame that is constrained by lens focal length. Conceivably, with a superwide lens you could, but then you end up with a lot of extra dead space in the frame. Which defeats the purpose of using large film or sensors.
If you want to get the whole thing in frame with the minimum number of steps or expenditure of time and money, your choices are to use a really wide-angle lens, tilt a camera with a more moderate wide-angle up, or learn to fly. All of these are sub-optimal. First, the really wide-angle lens is great in that you can capture the top of the object without tilting the camera. The problem is that making an engaging photo with a wideangle is actually extremely difficult – because it tends to shrink everything. Depending on where the sun is, it also stands a better chance of capturing the photographer’s shadow. Second, tilting a camera with a more moderate wide-angle lens up turns rectangular buildings into trapezoids, which works for some pictures but definitely not others. Finally, learning to fly is difficult. But watch enough Pink Floyd concert films, toke up with the ghost of Tom Petty, or study Keith Moon’s hotel swims, and you might.
Do you skew too?
Assuming you are reasonably competent, you can correct perspective using software, by skewing the canvas. This is a take on the old practice of tilting the paper easel under an enlarger. This was a limited-use technique, generally practiced by people who could not use view cameras and tripods but still had to come up with a presentable representation of a tall object. There were (and substantially still are) three issues here: crop, depth of focus, and dis-proportion. First, the crop came from the fact that tilting an easel meant that the projected image was trapezoidal and not rectangular, meaning that from the get-go, it had to be enlarged until the paper was filled. This still happens with digital. Second, the depth of focus issue is related to the fact that enlarging lenses are designed to project onto a surface that is a uniform distance from the enlarger (i.e., projecting one flat field onto another). You would have to stop down the lens severely, or use a longer focal length, which in turn required a taller enlarger column to maintain the same magnification.
Digital perspective correction uses computation to project the flat image onto a skewed plane, with interpolation (and usually some unsharp masking) to fill in the resampled pixels. This sidesteps the enlarger’s depth-of-focus problem, but the resampling degrades quality. Finally, dis-proportion comes from the fact that straightening converging verticals starts from an image in which certain details are already compressed by the original perspective. For example, looking up at a tall building from a short distance, the windows look shorter (top to bottom) than they would if you were looking straight at them from their own level.
So even when you manage to re-skew the canvas/field/whatever, you now have an image that is too “fat.” On enlarging paper, you would be forced to make a cylindrical correction to the negative (which is not practical in real life). On digital, there are specific transformations that you can perform to correct this (for example, the adjustable ratios in DxO Perspective and Lightroom).
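For the curious, the digital skew described above is just a projective (homography) transform. Here is a minimal sketch in Python/NumPy, with hypothetical pixel coordinates, that solves for the transform mapping a keystoned facade back to a rectangle:

```python
import numpy as np

def homography(src, dst):
    """Solve for the 3x3 projective transform H that maps each src
    point to the corresponding dst point (direct linear transform)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # H is the null vector of A: the last row of Vt from the SVD.
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def warp_point(H, x, y):
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

# Hypothetical coordinates: a tilted-up shot renders a rectangular
# facade as a trapezoid (narrower at the top); map it back square.
trapezoid = [(200, 0), (400, 0), (600, 400), (0, 400)]
rectangle = [(0, 0), (600, 0), (600, 400), (0, 400)]
H = homography(trapezoid, rectangle)
```

Applying H to every pixel requires resampling (interpolation), which is exactly where the quality loss comes from.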
So skewing is a useful technique, but it’s still better to skew less.
Shifting your thinking: the mirror years
View cameras have used the concept of shift and tilt to adjust for situations where the viewpoint was wrong (shift) or depth of field was insufficient (tilt). Raising the front standard of a bellows-type plate camera was always standard practice to improve photographs of tall objects, especially in an era where wideangle lenses were not super-wide by today’s standards. Lens board movements were easy to achieve because there was always some distance between the lens mount and film plane in which to insert a mechanism to raise the lens relative to the film. And because there is no control linkage between the lens/shutter and the rest of the camera, you’re not losing automation. You never had any!
But these cameras were not small. The smallest bellows-type camera with lens movement features was the Graflex Century Graphic, a delightful 6×9 press-style camera. On many bellows-type cameras, though, there was no real provision for using a shifting viewfinder. The press-style cameras had wire-frame finders that provided a rough guide, but nothing could tell you whether the lens was actually level outside a gridded ground glass. Later in the game, the Silvestri H would present as the first camera with automatic finder shift, as well as a visible bubble level. Linhof used a permanently-shifted lens assembly (and viewfinder) on the Technorama PC series, and Horseman provided shifted viewfinder masks for the SW612P, though these were available only as “all the way up/down” or “all the way left/right.”
The shift mechanism, though, could not be adapted to SLRs easily due to three constraints:
- Most SLR lenses are retrofocal – meaning that the nodal point of the lens is more than the stated focal length from the imaging plane. It takes a ton of retrofocus to insert a shift mechanism into an interchangeable lens that has to focus past a mirror box. More retrofocus means bigger lenses. So when perspective control lenses began to appear for SLRs (35mm and 6×6), they were huge. Maybe not huge by today’s standards, but a 72mm filter size is pretty big for a Nikon SLR whose normal filter size is 52mm.
- To achieve an image circle large enough to allow shift around what is normally a 24x36mm frame, it is necessary to use a wide-field lens and stop it down severely (illumination with almost any lens becomes more uniform as it is stopped down).
- Most cameras can only meter PC lenses correctly in their center position, wide-open. Where shift mechanisms eliminate direct aperture linkages to the camera, you’re back to the 1950s in metering and focusing – then shifting – then manually stopping down to shoot (now corrected by the use of electronic aperture units in $2K plus modern Nikon and Canon PC lenses).
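The image-circle penalty in the second point is simple geometry: shifting moves a frame corner farther from the optical axis, and the circle must still cover it. A quick sketch (pure trigonometry, not any manufacturer's spec; the 11mm figure is the vertical shift of the PC-Nikkor discussed below):

```python
import math

def circle_needed(shift_mm, short=24.0, long=36.0):
    """Diameter (mm) of the image circle needed to cover a 24x36mm
    frame shifted by shift_mm along its short dimension."""
    # After shifting, the farthest frame corner sits at
    # (short/2 + shift, long/2) from the optical axis.
    return 2 * math.hypot(short / 2 + shift_mm, long / 2)

print(round(circle_needed(0), 1))   # 43.3mm: an ordinary full-frame lens
print(round(circle_needed(11), 1))  # 58.4mm: an 11mm rise needs ~35% more circle
```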
Viewing is not a lot of fun with 35mm SLRs; when stopped down, PC lenses black out focusing aids (like split prisms and microprisms) and still require careful framing to keep parallel lines parallel. So you need a bright screen – plus a grid or electronic level. Suffice it to say, a lot of people regard perspective control to be a deliberative, on-tripod exercise when it comes to SLRs and DSLRs. Maybe it’s not.
A new perspective: full frame mirrorless?
So here come mirrorless cameras (well, they came a while ago). Now you can fit any lens ever made to any mirrorless body. The optical results may vary, but at least physically, they fit.
— Getting the lens in place
So I grabbed the nearest available PC lens I could find, which was a 28/3.5 PC-Nikkor. Not AI, not even from this century. Released in 1980, it is a beast. I plugged this into a Konica AR body to Nikon lens adapter, and from there into an Imagist Konica lens to Leica body adapter. Why all these kludgy adapters? The answer is actually pretty simple: the Imagist has the correct tolerance to make infinity infinity, and the Konica adapter does the same. This is not a small consideration where you might be zone focusing a lens.
Then I plugged this kludgefest into a Leica M Typ 246 (the Monochrom). Because why not start with the OG of mirrorless camera platforms? Of course, you can’t use a rangefinder with a Nikon SLR lens, so I plugged in an Olympus VF-2 (which is the ‘generic’ version of the Leica EVF-2).
— Getting it to work
The Nikkor has two aperture rings. One is the preset, where you set your target aperture. The other is the open/close ring, which goes from wide-open to where the preset ring is set.
I turned on focus peaking and set the preset for f/22 and the open/close for f/3.5. I was able to establish that infinity was correct.
Next, I stopped down the lens (both rings to f/22), expecting that, just as on an SLR, the finder would go dark. Instead, the EVF gained up, and everything worked perfectly.
I hit the “info” button to get the digital level, and it was off to the races. The lens has a rotation and a shift.
— But how well does it actually work?
The functionality is actually surprisingly good. On a Leica, you just stick the camera in A, stop the lens down to f/16 or f/22, and point and shoot.
The digital level obviates the use of a tripod or a grid focusing screen, and you really just frame, turn the shift knob until the perspective looks right, and there you go. There are a couple of limits:
You can’t use maximum shift along the long side of the film, but the only penalty is a tiny shadow in the corner. And that’s with a full-thickness 72mm B+W contrast filter. You get 11mm shift up and down (i.e., along the short dimension of the film) and 8mm left and right (nominally; as I stated, you can get away with more under some circumstances).
Aside from that, there are some minor annoyances like making sure you haven’t knocked the aperture ring off the shooting aperture. Or knocking the focus out of position (it’s a very short throw…).
BUT THE DUST! And here is the rub – shooting at f/16 and f/22 brings out every dust spot on your lens. Normally, you would shoot a Leica M at f/5.6, f/8 max. But PC lenses – like their medium and large format cousins – are designed to max out their frame coverage at very small openings. So I had never cleaned the sensor on my M246 in four years, and I got to spend an evening working on a hateful task that included swabs and ethanol and bulbs and the Ricoh orange lollipop sensor cleaner.
— And how sharp?
Very. Diffraction is supposed to start becoming visible at f/11 on this combination at 1:1, with it showing up in prints at f/22.
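For those who want numbers, a rough Airy-disk estimate shows why 1:1 viewing starts to suffer once the disk spans multiple pixels. The 550nm wavelength and ~6µm pixel pitch are assumed ballparks for a 24MP full-frame sensor, not published Leica figures:

```python
# Airy-disk diameter d = 2.44 * wavelength * f-number (green light).
WAVELENGTH_UM = 0.55   # assumed: middle of the visible spectrum
PIXEL_PITCH_UM = 6.0   # assumed: ~24MP full-frame ballpark

def airy_diameter_um(f_number):
    return 2.44 * WAVELENGTH_UM * f_number

for n in (5.6, 11, 22):
    d = airy_diameter_um(n)
    print(f"f/{n}: Airy disk {d:.1f} um, ~{d / PIXEL_PITCH_UM:.1f} pixels wide")
```

At f/11 the disk is already about two and a half pixels wide; at f/22 it is nearly five, which is where prints start to show it.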
Pictures stand up to the old 1:1 test, except in the corners where you have over-shifted along the long side. Recall that in a lot of situations, the last bits of corner are usually sky, where a tiny amount of blur is not going to be of any moment.
How well this will work on a color-capable camera is a question, especially since lateral color would come out. But right now, this is posing the most acute threat to 6×4.5 cameras loaded with TMY.
Well, you have that day where you feel like you want to step off the film train. Oddly enough, it was not because some digital sensor came along with massive resolution, or film hit $8 a roll, or the EU outlawed developing chemicals. Or you name the calamity.
Here, it was the product of well-meaning backward-compatibility. I had this thought as I was looking at a roll of TMY shot with a Silvestri H that probably cost $10,000 new. It uses standard-style roll backs made by Mamiya that are bulletproof and have nicely spaced frames. The pictures themselves were sharp, undistorted, and perspective-corrected. But they were ruined for optical printing because backing paper numbers – useful only to people with red-window cameras – transferred onto the emulsion. I felt like Constantine the Great, kinda. I looked in the sky, and the sign of “Kodak 14” was shining down on me. In this sign you will [be] conquere[d].
Browniegate (let’s give it a good name, at least) occurred because Kodak had an issue with backing paper on 120 film (this affected some lots made 2-4 years ago). Environmental conditions could cause backing paper frame numbers to transfer onto the emulsion of the film and show up in low-density areas, especially the sky. Lomographers probably loved this. Everyone else, not so much.
Kodak handled this reasonably well (but not optimally),* and it has been very good about replacing defective film. Given that they had few choices for backing paper (1-2 suppliers of this worldwide) and that they probably couldn’t anticipate the full range of environmental abuse film might experience in storage, I cut them some slack. We all accept that any time we use film, we could end up with no pictures. Grab the fix instead of the developer. Leave a rear lens cap on. We’ve all been there. But the backing paper thing is not within user control. Unlike the bad roll that comes up every hundred thousand rolls of film, the frame number thing hits more often. It’s not like lightning. It’s more like a tornado ripping through farm country.
The what is one thing. But the why is another. Laying aside bad material choices by the backing paper manufacturer, the underlying issue is that frame numbers on paper backing were last needed for serious cameras in the 1950s (the Super Ikonta C may be the last one), and the ruby-window method of seeing what frame you are on persists mainly in (1) Brownie cameras whose design goes back to 1895; (2) Lomography-oriented products; and (3) current large-format roll holders that should know better. There is actually no excuse for this last category, since there is no patent for frame counters that is still valid, and roll backs are only made in LCCs now. It’s the support of these older and cheaper cameras that requires frame numbers past #1 – and in a weird way, the shadow of the 19th century is still causing problems in the 21st.
The bigger question this raises is this: if backward compatibility is a significant part of the business case for 120, does that mean that when the ruby-window market fizzles out, it will take serious medium-format photography with it? Best not to think about that.
*By “not optimally,” I mean it would have been nice to have a new catalogue number for the new backing paper, so that people trying to buy film from B&H for critical use would not get stuck with old product – like I did when I was going to Singapore, bought 20 rolls of TMY in March 2019, got 158xxx TMY, and had backing number transfers on every roll of film, with up to 75% of 6×4.5 frames being affected on any given roll. Or maybe use a laminated paper that has punched-out numbers and not printed ones.
Mark my words (as if they are that important): the future will not look kindly on the gimmick-bokeh that dominates the aesthetic of 2000s photography, just as we get a chuckle out of 1970s pictures with excessive sunsets, lens flare, and nipples. People yet to be born will wonder why photographers in the 2000s took insanely expensive lenses, better than any ever designed to date – and cheaper – and then used them to simulate astigmatism, near-sightedness, and macular degeneration. The most charitable explanation will be that photographers were trying to show solidarity with the visually impaired.
The buzzword (today) is subject isolation. But why are we isolating a subject from its context? What’s wrong with the context? Are we creating millions of pictures of the same people’s faces with nothing else in the shot? Are they people or products?
In the present, good composition can still be shot at f/16. Small apertures are also obligatory on larger-format film cameras because a lot of those lenses have serious light and sharpness falloff at the edges at their maximum apertures, especially with the focus at infinity. Nobody buys a $3,000+ 6×12 camera to get the types of pictures you could see from a $250 Lomo Belair.
There is a reason that early autoexposure SLRs used shutter priority: if you had to make a choice for what would be in focus, it would be your subject; if you had light to spare, you’d want to use as small an aperture as your lowest desired shutter speed would support. And that thinking underpins historic picture-making. Intentionally shallow depth of field is not a feature of most of the world’s most iconic images. Arnold Newman did not need shallow depth of field to shoot Stravinsky. Eugene Smith did not shoot Spanish policemen as an exercise in subject isolation. And David Douglas Duncan captured every crease in the face of an exasperated Marine captain. How about Richard Avedon with his Rollei and every celebrity on earth? There are exceptions, but throughout history, wide apertures were primarily driven by a need to keep shutter speeds high enough to avoid blur. Light constraints are not such a consideration when ISO 6400 is a thing on digital cameras.
The worst part about bokeh, and the one no one talks about, is that it can actually be unpleasant by causing eyestrain (or maybe brain-strain). In many ways, a human eye – if you looked at the whole image projected on the retina at once – resembles a cheap Lomo-type lens: sharp in the middle (the fovea) and blurry at the edges. It even has a complete blind spot (the punctum caecum). The eye has a slow aperture, estimated by some to be f/2.8. But, dammit, everything looks like it is in focus. That’s because your eyes are continuously focusing on whatever you are looking at. Your brain is continuously piecing together fragmentary information (the blind spot thing is incredible – vertebrate biology beat Adobe to content-aware fill by about 500 million years). The end result is what looks (perceptually) like a scene where everywhere you look, things are in focus. It’s actually pretty amazing that this works.
In every photo, there is a compression of three dimensions into two. More depth of field allows your eyes to wander and allows you to process the scene fairly normally. When you look at bokehlicious pictures, definition is concentrated on one object (and often just a piece of it). You might find your eyes (or visual perception) constantly trying to focus on other aspects of the scene besides the subject. But neither your eyes nor computational photography can remove extreme artifacts once they are “flattened.”
Scroll back up to the picture at the top. Same composition, shot at f/8 and f/1.5 with a 50mm ZM Sonnar. Look left and look right. On the left, you can look almost anywhere in the scene and see whatever visual element you want to scrutinize, with at least some level of detail. On the right, you are always and forever staring into the Contractor Ring®. You can try to focus on other elements of the picture on the right, but the information simply is not there. Need an aspirin?
And it can be fatiguing, more so now that the aesthetic is played out and anyone with an iPhone X can play the game. Pictures with ultra-shallow DOF don’t look natural. They are great every once in a while, or if you need a 75/1.4 Summilux to get an otherwise-impossible shot, but otherwise, get off your ass and move the camera (or your subject) into a position with a reasonable background.
# # # # #
So the usual has happened. You have a pile of undeveloped film. Maybe you didn’t note the processing (N, N+1, N+2) or maybe it’s bulk loaded film that has no label on the cassette (for example, you might find it very easy to confuse Ilford Pan F Plus 50 with Ultrafine Xtreme 400). Or you can’t remember what order you shot film. Of course, the difficulty is that unless you somehow identify the film canisters, you’ll mix things up. And even then, once film is out of the canister and developed, there is rarely a persistent indicator of what happened. Data backs for 35mm cameras are something of a pain, they don’t record everything, and almost all of them are going extinct in 2018. Buy a Nikon F6 that records exif data? It’s a little late in the game for that.
The solution: the $5 arts & crafts hole punch and a $5 film-leader puller
One perhaps non-obvious solution is to permanently mark the film leader. You obviously can’t do this with a pen because the writeable part of the film will get washed off in processing.
The most effective way I have found to achieve this is with craft hole punches, which come in various hole sizes (1/16, 1/8, and 1/4″ – 1.5mm, 3mm, or 6mm), as well as a variety of shapes (round, hearts, stars, diamonds). As long as you make the marks on a part of the leader that will not be discarded (so not the long thin tongue part on commercially loaded film), these will survive the development process and won’t go anywhere until you snip them off. The uses are numerous:
— Bulk-loaded film: If you punch the leaders with a distinctive mark, you can avoid mistaking one type of film for another. For example, where it is very easy to confuse bulk-loaded Ultrafine Xtreme 400 and Ilford Pan F Plus, punching the Ultrafine with a heart will help you avoid mixing things up when loading your camera.
— Processing regime: If you are going to push-process film, punching the leader with a mark (such as a star) either before or after exposure will help prevent you from mixing up your N, N+1, and N+2 films. If you need to, you can use a leader-retriever to pull the leader out and mark it after fully rewinding.
— Order the film is shot: If you can’t imprint the first frame of a roll with a data back, you can use a number of punches to signify the order in which the roll is shot. You can even do this before you shoot the film.
— Camera or lens used: no data back records focal length, and camera bodies of the same make – assuming they even have a film-gate cutout for identification – all use the same cutout (for example, Konica bodies usually have a triangle notched into the edge of each frame). A punch code can tell you which specific body or lens shot the roll.
# # # # #
My first DxO One (version 1, $125 new on clearance) bricked when I upgraded the firmware. Left with an inert toy while Amazon dug up another one to send me, I could not help but play with the dead one. I flew it up to the water/ice dispenser on the refrigerator. “Open the pod bay doors, HAL.” Nothing. The DxO One rotated 180 degrees so that it could eject the micro SD card into the…
“Dad, what are you doing?”
But seriously, the DxO One is one of the strangest and most wonderful cameras to come out of France, or anywhere. Here’s why.
Sensor. The camera uses a 20Mp, 1″ Backside Illuminated (BSI) sensor (2.7x crop factor) made by Sony, the same one as in the RX100 III. Two things make this a standout here: first, BSI sensors are quite good – meaning this returns results almost on par with the Sony a6300’s copper-wire conventional sensor. Second, almost all sensors perform equally at base ISO. In the software design, DxO biases the camera toward lower ISOs and wider apertures (which makes sense, since a 1″ sensor starts diffracting at f/5.6).
How does this compare to an iPhone XS sensor? Well, it’s almost 70% more resolution and 6.7 times the surface area (116mm² vs. 17.3mm²). Do the math. All the computations in the Apple world can’t make up for this type of difference in displacement. This does expose the genius of portrait mode, though – because not even a 1″ sensor is big enough to have easy-to-achieve subject isolation.
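If you want to check the math (the 17.3mm² iPhone figure is the one quoted above; 13.2 x 8.8mm is the standard stated dimension of a 1″ sensor):

```python
# Checking the displacement arithmetic from the paragraph above.
one_inch_mm2 = 13.2 * 8.8   # 116.16 mm² -- standard 1" sensor dimensions
iphone_mm2 = 17.3           # figure quoted in the text

area_ratio = one_inch_mm2 / iphone_mm2
extra_pixels = (20 - 12) / 12 * 100   # 20Mp DxO One vs. 12Mp iPhone XS

print(f"{area_ratio:.1f}x the area, {extra_pixels:.0f}% more pixels")
```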
The sensor is used for contrast-detect AF (with face priority).
Lens. 32mm equivalent, f/1.8-11 aperture, six groups, six elements, with some of the weirdest aspherical shapes imaginable. It’s very tough to find a lens on a compact camera that approximates a 35/1.8. But here you are.
Far from being telecentric with an expected “folded optics” path, the DxO One uses the cellphone method with almost zero distance between the rearmost element and the sensor. The rearmost element looks like a brassiere. Like this:
The lens is happiest at larger apertures (f/2-f/4).
Storage. The DxO One accepts standard MicroSD cards. I was able to test cards up to 128GB (Samsung EVO Plus), and it is able to read and write to them with no issues.
Power. Power comes from an internal battery but can also be fed directly from a micro USB cable. The battery takes about two hours to charge and does about 200 shots. Version 2 of the camera has a removable back door to accommodate an external battery pack DxO no longer sells. You also lose the free software (see below).
Viewfinder. Your choice of two. You can plug the camera into your iPhone, where you can use the DxO One application and the phone screen as a viewfinder. Alternatively, version 3.3 of the camera firmware turns the little OLED screen on the back into a square contour viewfinder, good enough at least to frame the middle square of the picture – and surprisingly good at estimating a level angle for the camera. You could also split the difference with a Lightning extension cord.
Connectivity. The camera was originally designed to connect via the Lightning port, but DxO enabled the onboard WiFi so that now you can use the application on the phone and control the camera (including view-finding) without a physical connection. The DxO One can also connect to your phone via your home wireless network. WiFi operation – no matter what the camera or phone – is not as much fun as it first sounds – which is why the DxO product is more flexible than Sony’s wireless-only solutions.
Software. In terms of the camera’s software, all the magic is under the hood. The camera switches on by sliding open the front cover (slide it all the way, and the Lightning connector will erect itself). There is a two-stage shutter button on the top and you can swipe up and down on the OLED to switch between controls and viewfinder and left and right to toggle photo and video. The camera stays on the exposure mode last selected from the DxO software on the iPhone.
The DxO One phone app is well-done and responsive. You can use it to frame, shoot the picture, and control what you want. Features include:
- JPG, Raw, and Super Raw (stacked) exposure modes.
- Single-shot, timer, and time-lapse settings
- Flash settings
- Subject modes and the usual PSAM modes.
- Program shift (between equivalent exposures with different shutter speeds or apertures).
- Single AF, Continuous AF, On-Demand AF, and Manual focus (manual includes an automatic hyperfocal calculation if desired).
- Matrix, centerweighted, or spot metering.
- Grid compositional overlay.
- “Lighting,” which is like a mini HDR compressor for JPGs.
You can also look through the exposures on the camera/card and move them to your phone as desired. As noted above, though, you do need to initiate wireless connections with the camera connected.
If you get a version 1 camera, new, it also comes with DxO Optics Pro 10 Elite (now Photo Lab 1 Elite) and DxO Filmpack Elite. But you have to be able to document that you are the original owner of the camera. Both of these can run as standalones or can be external editors for Lightroom. Photo Lab 1 is also capable of replacing Lightroom.
If you get version 2, you’re out of luck. But you do get a 4GB SD card and the detachable back door for that battery pack.
And either way, you do get DxO OpticsPro 10 for DxO One, which gives you a nice imaging/digital asset manager that can composite SuperRaw files. SuperRaw is a stack of four successive (and extremely rapid) exposures that cancel out high ISO noise.
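For the statistically inclined, the noise-canceling idea behind stacking is ordinary frame averaging, which cuts random noise by roughly the square root of the number of frames. A toy demonstration (not DxO's actual pipeline, which also has to align the frames):

```python
import numpy as np

# Toy model: four noisy exposures of the same flat gray scene.
rng = np.random.default_rng(0)
scene = np.full((200, 200), 128.0)
frames = [scene + rng.normal(0, 10, scene.shape) for _ in range(4)]

single_noise = np.std(frames[0] - scene)                  # ~10
stacked_noise = np.std(np.mean(frames, axis=0) - scene)   # ~5
print(f"noise improvement: {single_noise / stacked_noise:.2f}x")
```

Four frames buy you about a two-stop-equivalent noise improvement, which is consistent with how SuperRaw is positioned as a high-ISO mode.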
And if you don’t like any of that, the DxO One outputs normal DNG files that you can simply edit to taste in Lightroom. There is a Lightroom profile for the camera’s minimal residual distortion.
Ergonomics. This is the one place where things are sketchy. It’s hard to hold onto a small ovoid object, especially one with a button on the top. I would highly recommend a wrist strap.
Upshot. Maybe not the most compelling camera at $700-plus when it came out, but now that it costs a sixth of that and is still a lot of fun to shoot, go for it!
Can you believe that Pullman is used for “bus” in parts of Europe? Jeez, I thought that a pullman was inherently a rail vehicle. How dare usages change! Somebody get on the Rail Transport User’s Group (RTUG) and post a philosophy question. We need to take the name Pullman back!
But really, how many hours of the waning days of old men’s lives have been wasted arguing about whether newfangled cameras grabbing electrons can be “photography” as an art or a craft? How many should? Would that time be better spent arguing about cars, finishing, guns, boats, or wristwatches?
You can spin off into the etymological argument: electrons aren’t photo + graphy because the light is not making the image directly. Or there is transformation. Or something. Reliance on ancient Greek is misguided. Photography was a neologism invented in the 19th century. It was not true to the ancient Greek then (no thing was – or is – drawing or scratching in the sense of γραφή); the 19th-century term was just an arbitrary description for what happens when light was the prime mover in the imaging process. And we have legions of words whose meanings have deviated far from what they would have meant to Greeks or Romans – or even what they meant the first time the terms were coined. Hence the weird crossover between autobuses and rail cars.
Is photography art? If you believe that, look at what the art world says. It’s all photography. That’s what museums call anything that is an image captured by a machine (film or digital) where the substantive content originates in the original image recording process. The only distinction made (and only sometimes) is for pre-silver-halide work, and even then only if it is one of the more obviously exotic processes (Daguerreotypes, Cyanotypes, and other things that deviate from the look of optical or inkjet paper). Odd that they don’t care what system originated the image; they only care about the medium in which it is ultimately expressed. Just like other things you see on the walls are “oil,” “watercolor,” “pastel,” “drawing,” etc. A “dye diffusion print” does not differentiate between originating on negatives or a Handycam.
Or maybe it’s not odd. Art requires a visible or tangible expression, and in the end, that is all that counts.
The Nikon Z7 is undoubtedly a quantum leap in Nikon’s camera evolution, essentially putting the best features of the Dxx series into a mirrorless body. Yet there is the inevitable complaint: “No dual card slot? Only one? No pro camera is like that!”
Pardon me, but plenty of pro cameras have been like that – and not just pro digital cameras in some benighted past (n.b., an era ending maybe 4 years ago). Consider the D2x and D700. Anyone want to call those “not pro” cameras? How about the flagships of the EOS fleet for a stretch?
In an era where film ruled the waves, it’s not like you could put two films into the same camera simultaneously for “backup.” And back then, pictures were scarcer and more valuable, and your chances of losing a shot due to a light leak, film defect, or development failure were astronomically high compared to anything that could befall a digital outfit.
So let’s move to digital. What is the measured malfunction rate of properly kept, brand-named CF, SD, or XQD cards? Hint: it’s astronomically low compared to the failure rate of the cameras that use them (SanDisk posts an MTBF of 1 million hours, or 114 years). Here are things that are far more likely to happen:
- Dying (which is all but guaranteed within the MTBF cited)
- Being killed in a car crash
- Being hit by lightning
- Finding a lost cousin on some genealogy site
- Winning Powerball
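For the curious, the arithmetic behind the quoted MTBF figure is trivial:

```python
# SanDisk's quoted MTBF of 1,000,000 hours, expressed as years of
# continuous, around-the-clock operation.
HOURS_PER_YEAR = 24 * 365.25   # 8,766 hours
mtbf_hours = 1_000_000
mtbf_years = mtbf_hours / HOURS_PER_YEAR   # about 114 years
```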
The threat of a bad flash card bringing down the system is simply not a real thing for most people. Dropping a camera, having a battery burn out, or suffering some physical mishap is far more likely. Even being in a car accident is more likely. And for that matter, why wouldn’t “any responsible pro” bring an extra car? An extra photographer?
I suspect that many of the people complaining about this issue — if not simply fronting — are semi-pros who scraped up every last dime to buy one really good camera to shoot wedding pictures. Fair enough. Maybe they had a bad experience with a counterfeit card once. Abused a good one. Ran one into the ground. It’s also possible to screw up the file system of a card by failing to respect buffers that are still clearing or by repeatedly reusing a card without ever doing an in-camera format.
But this group is not positioned to speak for all pros (i.e., make the statement that “no pro would…”). Real pros in every field use redundancy – and it’s not limited to using two cards in the same camera (which does nothing if your camera is the single point of failure). Redundancy could include:
- Using smaller cards to reduce the “all eggs in one basket” effect. 32GB is fine. Smaller media is one of the reasons that film was safe; 36 frames on a roll of film is small.
- Rotating between cards over the course of the shoot (the nice thing about EXIF is that Lightroom can combine shots from multiple cards into exactly the right order).
- Using two cameras and two cards, which means you will never be high and dry.
- Beaming your images in real time using wireless (a Toshiba FlashAir is great for this, though there is no XQD version yet).
- Downloading one card to your laptop while shooting a second card.
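The card-rotation option works because EXIF capture timestamps, not card or file order, restore the shooting sequence. A toy sketch of that interleaving (filenames and times are made up):

```python
from datetime import datetime

# Frames as they might come off two rotated cards, ingested card-by-card
# and therefore out of shooting order.
frames = [
    ("card_B", "DSC_0102.NEF", "2018:10:06 14:05:40"),
    ("card_B", "DSC_0104.NEF", "2018:10:06 14:12:55"),
    ("card_A", "DSC_0101.NEF", "2018:10:06 14:02:11"),
    ("card_A", "DSC_0103.NEF", "2018:10:06 14:09:02"),
]

def capture_time(frame):
    # EXIF DateTimeOriginal uses the "YYYY:MM:DD HH:MM:SS" format.
    return datetime.strptime(frame[2], "%Y:%m:%d %H:%M:%S")

# Sorting on the timestamp interleaves both cards back into shoot order.
shoot_order = sorted(frames, key=capture_time)
```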
When you consider the other options, thinking that two cards in a camera would get you off the hook seems a little odd, does it not?
Maybe the whole “multiple card slot” thing is a product of general societal economic insecurity. Or a “mine is bigger than yours” mindset. But any way you slice it, it doesn’t seem to make a lot of sense for most people.
There must have once been an awkward moment when Homo sapiens neanderthalensis saw a gangly baby Homo sapiens sapiens and wondered, for the first time, what the future would be like. The Neanderthals basically merged into the surviving human line (or were eaten — the explanation seems to vary now) — and essentially disappeared. But not before giving Europeans those nettlesome brow ridges and occipital buns.
Neanderthal shock happened sooner in the Canon world than it did for Nikon. Canon released its last mainline* manual-focus camera (the T90) in 1986. Canon did not then engage in a merging of genes but instead a lens-mount genocide. FD lenses faded fast as EOS came to rule the jungle. Nikon took a few more years to get there in 1990 with its last manual focus camera, though that camera lingered for five years on the market — and Nikon never really gave up on the F-mount. Well, not immediately. Like Neanderthals, some degree of interbreeding was available, but all that fur began to repel people after a while. All of this was 23 years ago now.
By the way, when the last newly designed Nikon MF SLR went out of production, this was dominating the disco:
Nikon would in 2001 release the FM3a, but like the contemporaneous Beatles 1 album, it was just a rehash: an FE2 with a new shutter. And that was so long ago that kids born then are old enough to vote. If you were an adult excited about the release of the FM3a, you’ve probably just passed out of the “18-35” demographic, if not past the “uncool 44” milestone. But don’t worry – Nikon has your back with retro-rerun cameras like that, the S3 and SP. Because it’s more fun to reminisce with cameras that were shiny and new (the first time) before you were born.
* By mainline, I mean serious and mass-produced. Yes, Canon made a craptastic T60 and Nikon made (or branded…) the FM10, but these were cameras for developing markets or students.
A detour into Nikon’s product strategy: so many cameras
It would not be a Machine Planet article without a detour into some kind of editorial, and here is one: digital cameras did not usher in the age of meaningless upgrades and gimmicks designed to excite camera buyers into “one more body.” Film SLRs were the greatest feature-chase of them all: the lenses and the film are the ultimate determinants of performance on a film camera; everything else is metering, motor, and in some cases autofocus.
Consider that in 1980-1985, Nikon fielded five prosumer cameras based on the same platform (FM, FE, FM2, FE2, and FA), at the same time it fielded three based on an intermediate architecture (EM, FG, and FG-20), and a next-generation intermediate (N2000/F-501). All of these variations revolve around binary features/exclusions: needle meter or not; matrix metering or not; internal motor or not; program mode or not. And you thought Sony had a short attention span?
To be fair (why start now?!), by the sunset of Nikon’s manual focus cameras in 1995, post-processing was out of the reach of most people. Photoshop was at version 3 and barely able to handle the tasks it routinely handles today (it also fit on 5 Mac floppies…); scanners were insanely expensive; and if you had a bad slide, you were out of luck. If you had a bad negative, you were mostly at the mercy of Candice at Fox Photo to maybe run that one neg through the Fujitsu at N-N-N-3 instead of N-N-N-N (this person actually existed, was roughly my age, and was quite cute).
Even when Nikon made the jump to autofocus, this proliferation continued, with performance carefully meted out between models that used the same AF module (consider that the N50, N70, N4004, N5005, N6006, N8008/s, and F4 all used the same module) – with outcomes so different, you have to wonder what they were holding back.
But what was going on with the lenses?
Nikon’s lenses had a more tortured history, one that took its first wrong turn when Nikon started releasing metered prisms. That would have been the time to revise the mount to include aperture information (relative and maximum). Almost the entire subsequent drama of Nikon lenses was a product of trying to fix that: prongs, AI, AI-s, CPUs. When the Photomic metered prism came out in 1962, Nikon already knew that it was enough of a market force that it could have moved to a meter coupling in the body without losing its user base. For six long years, Nikon’s meter prisms required the user to set the maximum aperture of the lens on the meter, manually.
Actually, it didn’t stop at six years. In 1968, Nikon introduced the FTn finder, with its semi-automatic indexing: mount the lens; turn the ring right, turn the ring left, done. The kludginess of this solution was only more glaring when companies like Konica were releasing lenses that could transmit maximum aperture information with a pin on the back of the lens (as opposed to a poky thing screwed onto its aperture ring) and using irises that were consistently linear, so as to allow automatic control of the iris. Granted, shutter priority did not predominate as a single-factor autoexposure method, but the point was that Nikon was well behind the curve. By 1971, Canon’s pro bodies had moved the meter cell inside the body and were transmitting relative aperture position invisibly.
Nikon’s Aperture-Indexing (AI) lenses did away in 1977 with the prong, song, and dance because they fit cameras that only needed to know how many stops the selected aperture was away from wide-open. If anyone knew what the max aperture of the lens was, it was the user – not the camera. AI was in a way a step backward from the FTn, since it was only a system for transmitting relative apertures. And AI-only bodies turned out to be a full-employment act for repair people and machinists – because mounting an older lens on an AI body without first modifying the lens (the mildest modification being a new aperture control ring) would cause damage. AI ushered in a tiny doubled aperture scale, the Aperture Direct Readout (ADR), which some cameras – like the F2AS, F3, FA, F4, and F5 – could display in their viewfinders via a wedge prism.
The next iteration, AI-s (1981), brought Nikon almost up to date. It finally added a maximum aperture indexing pin to lenses (as well as a pin that transmitted the focal length to the camera). The only camera to fully implement this scheme was the FA, for its program and shutter-priority modes. There were three implementations of AI-s:
- The FA used the full AI-s protocol, going open-loop when shooting AI-s lenses (because it knew the maximum aperture, focal length range, and stop-down rate) and selecting a program based on focal length. It went closed-loop when shooting AI and AI-converted lenses. By “closed loop,” I mean the camera reads the scene, stops down, takes another reading, and finally fires.
- The FG and its replacement, the N2000/F-301, both used a similar open/closed-loop setup, except these cameras could not read the focal length via the pin and thus used only one program (or one selected by the user).
- The N2020/F-501 would act like an N2000/F-301, but it could switch from P-Auto to P-Hi when a CPU-equipped lens with a longer focal length was mounted.
Of course, with closed-loop exposure, the only value of AI-s is purely informational; the FA and FG/N2000 systems don’t really need to know maximum aperture to work. And when it comes to “Program” operation for AI lenses, is it really programmed in the sense of a neat little graph – or is it shutter speeds programmed against apertures stopped down against the maximum?
A tale of two cameras
Nikon’s technological peak came with the FA, pretty much the most sophisticated camera anyone had ever seen. Four (count ’em!) exposure modes – Program, Aperture, Shutter, and Manual, all powered by two MS-76 cells. Matrix metering with any native AI lens. Program shooting with any AI-s lens. LCD display in the viewfinder. And… it wasn’t quite ready for prime-time, developing a reputation for having flaky electronics and poor matrix metering. Or so people say.
In 1990, the successor to the FA, the N6000, hit the scene. The N6000 kept most of the FA feature set but swapped in some new features. Incoming ones included:
- A 2 fps internal motor drive to replace the bulky MD-15
- Auto film loading
- Power film rewinding
- Auto bracketing
- Slow and rear-curtain flash
- DX code reading
- Automatic balanced fill flash
- An “analog” (graphic) over/under-exposure display that pops up in manual mode
- Exposure mode indicator in the viewfinder
You could argue that the N8008 was the successor to the “technocamera” FA, but the N8008 was an autofocus camera. Or you might have argued for the F4, which is a cross between an F3, an MD-12, and an FA. The departures with the N6000 were somewhat less notable:
- Elimination of interchangeable focusing screens (which were apparently not a popular feature of the FA)
- A new reliance on CPU lenses (AF and AI-P), which allowed the correct aperture to show in the viewfinder without an ADR display
- Loss of program mode for AI-s lenses (due to CPU dependency)
- Loss of matrix metering for AI-s lenses (same)
- Loss of a mechanical shutter speed
- Loss of 1/4000 sec on the shutter
- Change from MS-76 button cells to the somewhat less common 223A/CR-P2.
But for all intents and purposes, this was “it.” Although Nikon continued to sell (not make) the F3 into the mid-2000s, the only newish purpose-built manual-focus design was the FM3a, which is functionally little more than an FE2 with a shutter that can also be governed mechanically. It also followed a six-year period in which the N6000 was off the market.
On Earth-399, Nikon made manual focus cameras from 1959 to 2270. But that is also the universe in which “George Washington freed the slaves… Abraham Lincoln was regarded as the father of his country… and George Custer became president of the Indian Federation.” (“Superman… you’re DEAD… DEAD… DEAD,” 1971).
First in/last in (F3AF/F3)
Nikon had always managed to be both early and late to the AF party. The Nikon F3AF emerged in 1983, just three years into the F3 era. In fact, it came onto the scene at the same time as the DE-3 High Eyepoint finder (the thing that makes the F3 into the F3HP, the most popular variant). The F3AF was the first camera to use electronic contacts to control lens focus, using a contact system that is eerily similar to current Nikon lenses – but with a motor-in-the-lens implementation that most people came to associate with Canon. The manual focus version of the F3 proved wildly more popular and became one of the longest-running Nikons in history, with a 20-year run. That is catalog time, not necessarily production time. When it was time for the F4, Nikon was playing catch-up on AF with Minolta and Canon, whose amateur cameras were upping the stakes.
The forgotten Nikons (N2020/N2000)
In 1984-1985, just after the F3AF, Nikon made another pair of cameras, one with AF and one without. These were the N2020 (F-501) and its value-engineered little brother, the manual-focus N2000 (F-301). These were essentially motorized versions of the FG. According to lore, the N2000 was a last-minute decision from the accountants. That’s believable, since it allowed the company to drop the FG and make two cameras on a common set of tooling. But it cannot actually be true, because the N2000 was the first of the two cameras to be released – and by a year.
Rather than the interchangeable screens of the N2020 (B/E/J), the N2000 had a fixed K screen (split prism plus microprism collar), an LED shutter speed display (but no AF indications), and no automatic selection between programs (on the FA, this had required a post on AI-s lenses; on the N2020, it required a CPU to tell the camera the focal length). Common to both cameras, though, was a traditional control layout, a coreless drive motor for film advance, auto-loading, an exposure compensation dial, DX coding, plus pretty much everything the FG had – save the +1.5EV backlight button (the N2000/N2020 had an AE lock button that served much the same purpose). One mystery is why the N2020 was typically sold with an AAA battery holder rather than the N2000’s AA – since it is fairly obvious that the battery chamber was designed around AA. The smaller batteries required a special inset tray. But on the plus side, they do shave some height and weight off the assembled body. And the N2000/2020 is a pretty heavy body.
The N2000 is a camera with a level of elegance that we forget about: a large, bright, spartan viewfinder, a normal control layout, and a certain fluidity of shooting. Motor drives can be very important if you are left-eye dominant. Plus normal batteries that you can buy anywhere. Plus it has nice, sharp edges. It’s just not a camera that has the simulated chrome that is so popular with “the kids today.” And yes, by simulated, I mean that pretty much every “chrome” camera post-1980s has plastic covers.
But what about the N6006/N6000?!
The N6006 is something of a hidden gem in the Nikon line; it has most of the things you like about the N8008 (sans 1/8000 top speed, AA batteries, and high-eyepoint finder) in a smaller package. It is actually pleasant to shoot, though it does carry the stigma of using 223 lithium batteries. That might have actually made a difference a few years ago, when you could walk into a drug store and buy CR123As and 2CR5s, but today, all lithium batteries are more Amazon than the corner store.
The N6006 is one of many Nikons that share the AM200 AF sensor array (the others being the N4004/F-401, N5005/F-401s, N8008/N8008s/F-801/F-801s, and the F4). As you might have surmised from the AF performance differences in these bodies, CPU speed and motor torque are huge determinants of speed. The F4 is tops in both CPU and motor power, and the N4004 has the smallest brain and the smallest muscles. The N6006 and N8008 are mid-range, and the N8008 has a more powerful motor.
The little brother, the N6000, loses some functionality compared to its AF twin: no spot metering (because that comes from the AF module), no built-in flash (spite?), and a slightly smaller LCD display (that omits the AF confirmation dot, obviously…). But all the same, it is much smaller and lighter. Oddly, it still does support (or for P and S, requires) CPU lenses. As an adjunct for occasional manual focus with otherwise-AF lenses, it is fine; in fact, examples of the N6000 sell for less than the price of any manual-focus-friendly interchangeable screen for any SLR or DSLR. So I would ask, are you better off…
There is no such thing as “maximum shutter actuations.” People act as if there were some magic number. People freak out about this. The rated number is unlikely to be reached by most amateur photographers. It’s unlikely to be reached by two amateurs using a camera back to back. Maybe even three or four, unless one used the camera at the beach or somewhere gritty.
- The rating itself is the MTBF, or Mean Time Between Failures. That means that on average, Nikon’s rated shutters last 150,000 cycles. You don’t know whether that means most last to 250,000 and relatively few go 50,000 or whether all of them are somewhere around 150k.
- There is no warranty that a shutter will get to 150,000. Your two-year factory warranty will expire one day, and that could be at 18,000 exposures or 180,000. Doesn’t matter. Nikon is not fixing it for you for free.
- Inside the factory warranty, Nikon does fix it for free, shutter count notwithstanding.
- Likewise, Nikon is not fixing your used camera, even if its original sale was within the last two years, or even if the shutter failed at 8,000.
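To see why a bare MTBF number tells you so little, consider two hypothetical shutter populations with the identical 150,000-cycle mean but very different odds of an early death:

```python
import statistics

# Both imaginary populations average exactly 150,000 cycles.
tight  = [140_000, 150_000, 160_000] * 2   # everyone clusters near the mean
spread = [50_000, 150_000, 250_000] * 2    # plenty of very early failures

mean_tight = statistics.mean(tight)     # 150,000
mean_spread = statistics.mean(spread)   # also 150,000

# Same "rating," wildly different risk: the spread population's standard
# deviation is an order of magnitude larger.
risk_tight = statistics.stdev(tight)
risk_spread = statistics.stdev(spread)
```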
It’s all marketing.
By the way, when Nikon was coming up with its 150,000-exposure MTBF, that was 4,166 rolls of film, which was more than most people shot in a lifetime. For a pro, a new shutter (which in those days was a $250 repair) cost nothing compared to the $12,000 in film you shot before you got there!
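The arithmetic behind that comparison, with the per-roll price as my own rough guess at era-typical pricing:

```python
SHUTTER_MTBF = 150_000     # rated shutter cycles
FRAMES_PER_ROLL = 36

rolls = SHUTTER_MTBF / FRAMES_PER_ROLL   # about 4,166 rolls

# Assuming roughly $3 per roll for film stock alone (my assumption, not
# a documented figure), the film bill lands in the $12,000 neighborhood
# long before the $250 shutter repair ever becomes relevant.
film_cost = rolls * 3.0
```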