Well, you have that day when you feel like you want to step off the film train. Oddly enough, it was not because some digital sensor came along with massive resolution, or film hit $8 a roll, or the EU outlawed developing chemicals. Or you name the calamity.
Here, it was the product of well-meaning backward-compatibility. I had this thought as I was looking at a roll of TMY shot with a Silvestri H that probably cost $10,000 new. It uses standard-style roll backs made by Mamiya that are bulletproof and have nicely spaced frames. The pictures themselves were sharp, undistorted, and perspective-corrected. But they were ruined for optical printing because backing paper numbers – useful only to people with red-window cameras – transferred onto the emulsion. I felt like Constantine the Great, kinda. I looked in the sky, and the sign of “Kodak 14” was shining down on me. In this sign you will [be] conquere[d].
Browniegate (let’s give it a good name, at least) occurred because Kodak had an issue with backing paper on 120 film (this affected some lots made two to four years ago). Environmental conditions could cause backing-paper frame numbers to transfer onto the emulsion of the film and show up in low-density areas, especially the sky. Lomographers probably loved this. Everyone else, not so much.
Kodak handled this reasonably well (but not optimally),* and it has been very good about replacing defective film. Given that Kodak had few choices for backing paper (only one or two suppliers worldwide) and that it probably couldn’t anticipate the full range of environmental abuse film might experience in storage, I cut the company some slack. We all accept that any time we use film, we could end up with no pictures. Grab the fix instead of the developer. Leave a rear lens cap on. We’ve all been there. But the backing-paper problem is not within user control. Unlike the bad roll that comes up once every hundred thousand, the frame-number problem hits more often. It’s not like lightning. It’s more like a tornado ripping through farm country.
The what is one thing. But the why is another. Laying aside bad material choices by the backing paper manufacturer, the underlying issue is that frame numbers on paper backing were last needed for serious cameras in the 1950s (the Super Ikonta C may be the last one), and the ruby-window method of seeing what frame you are on persists mainly in (1) Brownie cameras whose design goes back to 1895; (2) Lomography-oriented products; and (3) current large-format roll holders that should know better. There is actually no excuse for this last category, since there is no patent for frame counters that is still valid, and roll backs are only made in LCCs now. It’s the support of these older and cheaper cameras that requires frame numbers past #1 – and in a weird way, the shadow of the 19th century is still causing problems in the 21st.
The bigger question this raises is this: if backward compatibility is a significant part of the business case for 120, does that mean that when the ruby-window market fizzles out, it will take serious medium-format photography with it? Best not to think about that.
*By “not optimally,” I mean it would have been nice to have a new catalogue number for the new backing paper, so that people buying film from B&H for critical use would not get stuck with old product – like I did when, headed to Singapore, I bought 20 rolls of TMY in March 2019, got 158xxx TMY, and had backing-number transfers on every roll, with up to 75% of 6×4.5 frames affected on any given roll. Or maybe use a laminated paper with punched-out numbers rather than printed ones.
Mark my words (as if they are that important): the future will not look kindly on the gimmick-bokeh that dominates the aesthetic of 2000s photography, just as we get a chuckle out of 1970s pictures with excessive sunsets, lens flare, and nipples. People yet to be born will wonder why photographers in the 2000s took insanely expensive lenses, better than any ever designed to date – and cheaper – and then used them to simulate astigmatism, near-sightedness, and macular degeneration. The most charitable explanation will be that photographers were trying to show solidarity with the visually impaired.
The buzzword (today) is subject isolation. But why are we isolating a subject from its context? What’s wrong with the context? Are we creating millions of pictures of the same people’s faces with nothing else in the shot? Are they people or products?
In the present, good composition can still be shot at f/16. Small apertures are also obligatory on larger-format film cameras because a lot of those lenses have serious light and sharpness falloff at the edges at their maximum apertures, especially with the focus at infinity. Nobody buys a $3,000+ 6×12 camera to get the types of pictures you could see from a $250 Lomo Belair.
There is a reason that early autoexposure SLRs used shutter priority: if you had to make a choice for what would be in focus, it would be your subject; if you had light to spare, you’d want to use as small an aperture as your lowest desired shutter speed would support. And that thinking underpins historic picture-making. Intentionally shallow depth of field is not a feature of most of the world’s most iconic images. Arnold Newman did not need shallow depth of field to shoot Stravinsky. Eugene Smith did not shoot Spanish policemen as an exercise in subject isolation. And David Douglas Duncan captured every crease in the face of an exasperated Marine captain. How about Richard Avedon with his Rollei and every celebrity on earth? There are exceptions, but throughout history, wide apertures were primarily driven by a need to keep shutter speeds high enough to avoid blur. Light constraints are not such a consideration when ISO 6400 is a thing on digital cameras.
The worst part about bokeh, and the one no one talks about, is that it can actually be unpleasant by causing eyestrain (or maybe brain-strain). In many ways, a human eye – if you looked at the whole image projected on the retina at once – resembles a cheap Lomo-type lens: sharp in the middle (the fovea) and blurry at the edges. It even has a complete blind spot (the punctum caecum). The eye has a slow aperture, estimated by some to be f/2.8. But, dammit, everything looks like it is in focus. That’s because your eyes are continuously focusing on whatever you are looking at. Your brain is continuously piecing together fragmentary information (the blind spot thing is incredible – vertebrate biology beat Adobe to content-aware fill by about 500 million years). The end result is what looks (perceptually) like a scene where everywhere you look, things are in focus. It’s actually pretty amazing that this works.
In every photo, there is a compression of three dimensions into two. More depth of field allows your eyes to wander and allows you to process the scene fairly normally. When you look at bokehlicious pictures, definition is concentrated on one object (and often just a piece of it). You might find your eyes (or visual perception) constantly trying to focus on other aspects of the scene besides the subject. But neither your eyes nor computational photography can remove extreme artifacts once they are “flattened.”
Scroll back up to the picture at the top. Same composition, shot at f/8 and f/1.5 with a 50mm ZM Sonnar. Look left and look right. On the left, you can look almost anywhere in the scene and see whatever visual element you want to scrutinize, at at least some level of detail. On the right, you are always and forever staring into the Contractor Ring®. You can try to focus on other elements of the picture on the right, but the information simply is not there. Need an aspirin?
And it can be fatiguing, more so now that the aesthetic is played out and anyone with an iPhone X can play the game. Pictures with ultra-shallow DOF don’t look natural. They are great every once in a while, or if you need a 75/1.4 Summilux to get an otherwise-impossible shot, but otherwise, get off your ass and move the camera (or your subject) into a position with a reasonable background.
# # # # #
So the usual has happened. You have a pile of undeveloped film. Maybe you didn’t note the processing (N, N+1, N+2) or maybe it’s bulk-loaded film that has no label on the cassette (for example, you might find it very easy to confuse Ilford Pan F Plus 50 with Ultrafine Xtreme 400). Or you can’t remember in what order you shot the film. Of course, the difficulty is that unless you somehow identify the film canisters, you’ll mix things up. And even then, once film is out of the canister and developed, there is rarely a persistent indicator of what happened. Data backs for 35mm cameras are something of a pain, they don’t record everything, and almost all of them are going extinct in 2018. Buy a Nikon F6 that records EXIF data? It’s a little late in the game for that.
The solution: the $5 arts & crafts hole punch and a $5 film-leader puller
One perhaps non-obvious solution is to permanently mark the film leader. You obviously can’t do this with a pen, because anything you write on the film will wash off in processing.
The most effective way I have found to achieve this is with craft hole punches, which come in various hole sizes (1/16, 1/8, and 1/4″ – 1.5mm, 3mm, or 6mm), as well as a variety of shapes (round, hearts, stars, diamonds). As long as you make the marks on a part of the leader that will not be discarded (so not the long thin tongue part on commercially loaded film), these will survive the development process and won’t go anywhere until you snip them off. The uses are numerous:
— Bulk-loaded film: If you punch the leaders with a distinctive mark, you can avoid mistaking one type of film for another. For example, where it is very easy to confuse bulk-loaded Ultrafine Xtreme 400 and Ilford Pan F Plus, punching the Ultrafine with a heart will help you avoid mixing things up when loading your camera.
— Processing regime: If you are going to push-process film, punching the leader with a mark (such as a star) either before or after exposure will help prevent you from mixing up your N, N+1, and N+2 films. If you need to, you can use a leader-retriever to pull the leader out and mark it after fully rewinding.
— Order the film is shot: If you can’t imprint the first frame of a roll with a data back, you can use a number of punch marks to signify the order in which the rolls were shot. You can even do this before you shoot the film.
— Camera or lens used: no data back records focal length, and camera bodies of the same make – assuming they even have an identifying film-gate cutout – all use the same cutout (for example, Konica bodies usually have a triangle notched into the edge of each frame). A distinctive punch can tell you which body or lens you used.
# # # # #
My first DxO One (version 1, $125 new on clearance) bricked when I upgraded the firmware. Left with an inert toy while Amazon dug up another one to send me, I could not help but play with the dead one. I flew it up to the water/ice dispenser on the refrigerator. “Open the pod bay doors, HAL.” Nothing. The DxO One rotated 180 degrees so that it could eject the micro SD card into the…
“Dad, what are you doing?”
But seriously, the DxO One is one of the strangest and most wonderful cameras to come out of France, or anywhere. Here’s why.
Sensor. The camera uses a 20Mp, 1″ Backside Illuminated (BSI) sensor (3x or so crop factor) made by Sony, the same one as in the RX100 III. Two things make this a standout here: first, BSI sensors are quite good – meaning this returns results almost on par with the Sony a6300’s copper-wire conventional sensor. Second, almost all sensors perform equally at base ISO. In the software design, DxO biases the camera toward lower ISOs and wider apertures (which makes sense, since a 1″ sensor starts diffracting at f/5.6).
How does this compare to an iPhone XS sensor? Well, it’s almost 70% more resolution and 6.7 times the surface area (116 mm² vs. 17.3 mm²). Do the math. All the computations in the Apple world can’t make up for this type of difference in displacement. This does expose the genius of portrait mode, though – because not even a 1″ sensor is big enough for easy-to-achieve subject isolation.
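The arithmetic is easy to check. A quick sketch (the 1″ figure uses the standard 13.2 × 8.8 mm sensor dimensions; the iPhone XS area is the ~17.3 mm² quoted above):

```python
# Rough sensor-size comparison. A 1" sensor measures about 13.2 x 8.8 mm;
# the iPhone XS area below is the figure quoted in the text.
one_inch_area = 13.2 * 8.8        # ~116 mm^2
phone_area = 17.3                 # mm^2

area_ratio = one_inch_area / phone_area
resolution_gain = 20 / 12 - 1     # 20Mp vs. the iPhone XS's 12Mp

print(f"{area_ratio:.1f}x the surface area")   # ~6.7x
print(f"{resolution_gain:.0%} more pixels")    # ~67%
```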
The sensor is used for contrast-detect AF (with face priority).
Lens. 32mm equivalent, f/1.8-11 aperture, six groups, six elements, with some of the weirdest aspherical shapes imaginable. It’s very tough to find a lens on a compact camera that approximates a 35/1.8. But here you are.
Far from being telecentric with an expected “folded optics” path, the DxO One uses the cellphone method with almost zero distance between the rearmost element and the sensor. The rearmost element looks like a brassiere. Like this:
The lens is happiest at larger apertures (f/2-f/4).
Storage. The DxO One accepts standard MicroSD cards. I was able to test cards up to 128GB (Samsung EVO Plus), and it reads and writes to them with no issues.
Power. Power comes from an internal battery but can also be fed directly through a micro-USB cable. The battery takes about two hours to charge and is good for about 200 shots. Version 2 of the camera has a removable back door to accommodate an external battery pack that DxO no longer sells. (With version 2, you also lose the free software – see below.)
Viewfinder. Your choice of two. You can plug the camera into your iPhone, where you can use the DxO One application and the phone screen as a viewfinder. Alternatively, version 3.3 of the camera firmware turns the little OLED screen on the back into a square contour viewfinder, good enough at least to frame the middle square of the picture – and surprisingly good at estimating a level angle for the camera. You could also split the difference with a Lightning extension cord.
Connectivity. The camera was originally designed to connect via the Lightning port, but DxO enabled the onboard WiFi so that now you can use the application on the phone and control the camera (including view-finding) without a physical connection. The DxO One can also connect to your phone via your home wireless network. WiFi operation – no matter the camera or phone – is never as much fun as it first sounds, which is why the DxO One’s wired Lightning option makes it more flexible than Sony’s wireless-only solutions.
Software. In terms of the camera’s software, all the magic is under the hood. The camera switches on by sliding open the front cover (slide it all the way, and the Lightning connector will erect itself). There is a two-stage shutter button on the top and you can swipe up and down on the OLED to switch between controls and viewfinder and left and right to toggle photo and video. The camera stays on the exposure mode last selected from the DxO software on the iPhone.
The DxO One phone app is well-done and responsive. You can use it to frame, shoot the picture, and control what you want. Features include:
- JPG, Raw, and Super Raw (stacked) exposure modes.
- Single-shot, timer, and time-lapse settings.
- Flash settings.
- Subject modes and the usual PSAM modes.
- Program shift (between equivalent exposures with different shutter speeds or apertures).
- Single AF, Continuous AF, On-Demand AF, and Manual focus (manual includes an automatic hyperfocal calculation if desired).
- Matrix, centerweighted, or spot metering.
- Grid compositional overlay.
- “Lighting,” which is like a mini HDR compressor for JPGs.
You can also look through the exposures on the camera/card and move them to your phone as desired. One caveat, though: you do need to initiate the wireless connection while the camera is physically connected.
If you get a version 1 camera, new, it also comes with DxO Optics Pro 10 Elite (now Photo Lab 1 Elite) and DxO Filmpack Elite. But you have to be able to document that you are the original owner of the camera. Both of these can run as standalones or can be external editors for Lightroom. Photo Lab 1 is also capable of replacing Lightroom.
If you get version 2, you’re out of luck. But you do get a 4GB card and the detachable back door for that battery pack.
And either way, you do get DxO OpticsPro 10 for DxO One, which gives you a nice imaging/digital asset manager that can composite SuperRaw files. SuperRaw is a stack of four successive (and extremely rapid) exposures that cancel out high ISO noise.
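DxO has not published the exact SuperRaw pipeline, but the underlying principle – averaging several frames so that random noise cancels while the signal does not – is easy to demonstrate. A toy sketch (a simple mean of simulated frames; the real camera also aligns the exposures, and the numbers here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Four exposures of the same flat scene: signal level 100, random
# noise with a standard deviation of 10 (invented for illustration).
frames = 100 + rng.normal(0, 10, size=(4, 480, 640))

stacked = frames.mean(axis=0)  # naive stack; SuperRaw also aligns frames
print(round(frames[0].std()))  # noise of one frame: 10
print(round(stacked.std()))    # noise of the stack: 5, i.e. down by sqrt(4)
```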
And if you don’t like any of that, the DxO One outputs normal DNG files that you can simply edit to taste in Lightroom. There is a Lightroom profile for the camera’s minimal residual distortion.
Ergonomics. This is the one place where things are sketchy. It’s hard to hold onto a small ovoid object, especially one with a button on the top. I would highly recommend a wrist strap.
Upshot. Maybe it was not the most compelling camera at $700-plus when it came out, but now that it costs a sixth of that and is still a lot of fun to shoot, go for it!
Can you believe that Pullman is used for “bus” in parts of Europe? Jeez, I thought that a pullman was inherently a rail vehicle. How dare usages change! Somebody get on the Rail Transport User’s Group (RTUG) and post a philosophy question. We need to take the name Pullman back!
But really, how many hours of the waning days of old men’s lives have been wasted arguing about whether newfangled cameras grabbing electrons can be “photography” as an art or a craft? How many should? Would that time be better spent arguing about cars, fishing, guns, boats, or wristwatches?
You can spin off into the etymological argument: electrons aren’t photo + graphy because the light is not making the image directly. Or there is transformation. Or something. Reliance on ancient Greek is misguided. Photography was a neologism invented in the 19th century. It was not true to the ancient Greek then (no thing was – or is – drawing or scratching in the sense of γραφή); the 19th-century term was just an arbitrary description for what happens when light is the prime mover in the imaging process. And we have legions of words whose meanings have deviated far from what they would have meant to Greeks or Romans – or even what they meant the first time the terms were coined. Hence the weird crossover between autobuses and rail cars.
Is photography art? If you believe that, look at what the art world says. It’s all photography. That’s what museums call anything that is an image captured by a machine (film or digital) where the substantive content originates in the original image-recording process. The only distinction made (and only sometimes) is for pre-silver-halide work, and even then only if it is one of the more obviously exotic processes (daguerreotypes, cyanotypes, and other things that deviate from the look of optical or inkjet paper). Odd that they don’t care what system originated the image; they only care about the medium in which it is ultimately expressed. Just like other things you see on the walls are “oil,” “watercolor,” “pastel,” “drawing,” etc. A “dye diffusion print” does not differentiate between originating on negatives or a Handycam.
Or maybe it’s not odd. Art requires a visible or tangible expression, and in the end, that is all that counts.
The Nikon Z7 is undoubtedly a quantum leap in Nikon’s camera evolution, essentially putting the best features of the Dxx series into a mirrorless body. Yet there is the inevitable complaint: “No dual card slot? Only one? No pro camera is like that!”
Pardon me, but plenty of pro cameras have been like that – and not just pro digital cameras in some benighted past (n.b., an era ending maybe 4 years ago). Consider the D2x and D700. Anyone want to call those “not pro” cameras? How about the flagships of the EOS fleet for a stretch?
In an era where film ruled the waves, it’s not like you could put two films into the same camera simultaneously for “backup.” And back then, pictures were scarcer and more valuable, and your chances of losing a shot due to a light leak, film defect, or development failure were astronomically high compared to anything that could befall a digital outfit.
So let’s move to digital. What is the measured malfunction rate of properly kept, brand-named CF, SD, or XQD cards? Hint: it’s astronomically low compared to the failure rate of the cameras that use them (SanDisk posts an MTBF of 1 million hours, or 114 years). Here are things that are far more likely to happen:
- Dying (which is all but guaranteed within the MTBF cited)
- Being killed in a car crash
- Being hit by lightning
- Finding a lost cousin on some genealogy site
- Winning Powerball
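For perspective on that MTBF figure, a quick calculation. (The constant-failure-rate exponential model below is an idealization of my own, not SanDisk’s claim – real failure curves are bathtub-shaped – but it gives the order of magnitude.)

```python
import math

MTBF_HOURS = 1_000_000               # SanDisk's quoted figure
years = MTBF_HOURS / (24 * 365)      # ~114 years, as stated above

# Chance a card dies during 500 hours of shooting in a year, assuming
# a constant failure rate (an illustrative assumption):
p_fail = 1 - math.exp(-500 / MTBF_HOURS)
print(f"{years:.0f} years MTBF; ~{p_fail:.3%} annual failure chance")
```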
The threat of a bad flash card bringing down the system is simply not a real thing for most people. Dropping a camera, having a battery burn out, or suffering some physical mishap is far more likely. Even being in a car accident is more likely. And for that matter, why wouldn’t “any responsible pro” bring an extra car? An extra photographer?
I suspect that many of the people complaining about this issue — if not simply fronting — are semi-pros who scraped up every last dime to buy one really good camera to shoot wedding pictures. Fair enough. Maybe they had a bad experience with a counterfeit card once. Abused a good one. Ran one into the ground. It’s also possible to screw up a card’s file system by failing to respect buffers that are still clearing or by repeatedly using the card without ever doing an in-camera format.
But this group is not positioned to speak for all pros (i.e., make the statement that “no pro would…”). Real pros in every field use redundancy – and it’s not limited to using two cards in the same camera (which does nothing if your camera is the single point of failure). Redundancy could include:
- Using smaller cards to reduce the “all eggs in one basket” effect. 32GB is fine. Smaller media is one of the reasons that film was safe; a 36-exposure roll is a small basket.
- Rotating between cards over the course of the shoot (the nice thing about EXIF is that Lightroom can combine shots from multiple cards into exactly the right order).
- Using two cameras and two cards, which means you will never be high and dry.
- Beaming your images in real time over wireless (a Toshiba FlashAir is great for this, though there is no XQD version yet).
- Downloading one card to your laptop while shooting a second card.
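The card-rotation option works because EXIF capture times sort correctly even as plain strings, which is essentially all Lightroom is doing when it interleaves two cards. A sketch (filenames and timestamps invented for illustration):

```python
# Merging shots from two rotated cards back into shooting order.
# Filenames and timestamps are made up for illustration.
shots = [
    ("cardA/DSC_0101.NEF", "2018:06:02 14:03:11"),
    ("cardB/DSC_0040.NEF", "2018:06:02 14:01:58"),
    ("cardA/DSC_0102.NEF", "2018:06:02 14:05:40"),
    ("cardB/DSC_0041.NEF", "2018:06:02 14:04:27"),
]

# EXIF's "YYYY:MM:DD HH:MM:SS" format is zero-padded, so plain string
# comparison gives chronological order.
merged = [name for name, _ in sorted(shots, key=lambda s: s[1])]
print(merged)
```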
When you consider the other options, thinking that two cards in a camera would get you off the hook seems a little odd, does it not?
Maybe the whole “multiple card slot” thing is a product of general societal economic insecurity. Or a “mine is bigger than yours” mindset. But any way you slice it, it doesn’t seem to make a lot of sense for most people.
There must have once been an awkward moment when Homo sapiens neanderthalensis saw a gangly baby Homo sapiens sapiens and wondered, for the first time, what the future would be like. The Neanderthals basically merged into the surviving human line (or were eaten — the explanation seems to vary now) — and essentially disappeared. But not before giving Europeans those nettlesome brow ridges and occipital buns.
Neanderthal shock happened sooner in the Canon world than it did for Nikon. Canon released its last mainline* manual-focus camera (the T90) in 1986. Canon did not then engage in a merging of genes but instead a lens-mount genocide: FD lenses faded fast as EOS came to rule the jungle. Nikon took a few more years, getting there in 1990 with its last new manual-focus camera, though that camera lingered for five years on the market — and Nikon never really gave up on the F-mount. Well, not immediately. Like the Neanderthals, some degree of interbreeding was available, but all that fur began to repel people after a while. All of this was 23 years ago now.
By the way, when the last newly designed Nikon MF SLR went out of production, this was dominating the disco:
Nikon would in 2001 release the FM3a, but like the contemporaneous Beatles 1 album, it was a rehash – an FE2 with a new shutter. And that was so long ago that kids born then are old enough to vote. If you were an adult excited about the release of the FM3a, you’ve probably just passed out of the “18-35” demographic, if not past the “uncool 44” milestone. But don’t worry – Nikon has your back with retro-rerun cameras like that, the S3, and the SP. Because it’s more fun to reminisce with cameras that were shiny and new (the first time) before you were born.
* By mainline, I mean serious and mass-produced. Yes, Canon made a craptastic T60 and Nikon made (or branded…) the FM10, but these were cameras for developing markets or students.
Detour into Nikon’s product strategy: so many cameras
It would not be a Machine Planet article without a detour into some kind of editorial, and here is one: digital cameras did not usher in the age of meaningless upgrades and gimmicks designed to excite camera buyers into “one more body.” Film SLRs were the greatest feature-chase of them all: the lenses and the film are the ultimate determinants of performance on a film camera; everything else is metering, motor, and in some cases autofocus.
Consider that in 1980-1985, Nikon fielded five prosumer cameras based on the same platform (FM, FE, FM2, FE2, and FA), at the same time it fielded three based on an intermediate architecture (EM, FG, and FG-20), and a next-generation intermediate (N2000/F-501). All of these variations revolve around binary features/exclusions: needle meter or not; matrix metering or not; internal motor or not; program mode or not. And you thought Sony had a short attention span?
To be fair (why start now?!), by the sunset of Nikon’s manual focus cameras in 1995, post-processing was out of the reach of most people. Photoshop was at version 3 and barely able to handle the tasks it routinely handles today (it also fit on 5 Mac floppies…); scanners were insanely expensive; and if you had a bad slide, you were out of luck. If you had a bad negative, you were mostly at the mercy of Candice at Fox Photo to maybe run that one neg through the Fujitsu at N-N-N-3 instead of N-N-N-N (this person actually existed, was roughly my age, and was quite cute).
Even when Nikon made the jump to autofocus, this proliferation continued, with performance carefully meted out between models that used the same AF module (consider that the N50, N70, N4004, N5005, N6006, N8008/s, and F4 used the same module – with outcomes so different, you have to wonder what they were holding back).
But what was going on with the lenses?
Nikon’s lenses had a more tortured history, which took its first wrong turn when Nikon started releasing metered prisms. That would have been the time to revise the mount to include aperture information (relative and maximum). Almost the entire subsequent drama of Nikon lenses was a product of trying to fix that: prongs, AI, AI-s, CPUs. When the Photomic metered prism came out in 1962, Nikon already knew that it was enough of a market force that it could have moved to a meter coupling in the body without losing its user base. For six long years, Nikon’s meter prisms required the user to set the maximum aperture of the lens on the meter, manually.
Actually, it didn’t stop at six years. In 1968, Nikon introduced the FTn finder, with its semi-automatic indexing: mount the lens; turn the ring right, turn the ring left, done. The kludginess of this solution was only more glaring when companies like Konica were releasing lenses that could transmit maximum-aperture information with a pin on the back of the lens (as opposed to a poky thing screwed onto its aperture ring) and were using irises that were consistently linear, so as to allow automatic control of the iris. Granted, shutter priority did not predominate as a single-factor autoexposure method, but the point is that Nikon was well behind the curve. By 1971, Canon’s pro bodies had moved the meter cell inside the body and were transmitting relative aperture position invisibly.
Nikon’s Aperture-Indexing (AI) lenses did away with the prong, song, and dance in 1977, because they fit cameras that only needed to know how many stops the selected aperture was from wide open. If anyone knew the maximum aperture of the lens, it was the user – not the camera. AI was in a way a step backward from the FTn, since it was only a system for transmitting relative apertures. And AI-only bodies turned out to be a full-employment act for repair people and machinists – because mounting an old lens on an AI body without modifying the lens (the mildest modification being a new aperture-control ring) would cause damage. AI ushered in a tiny doubled aperture scale, the Aperture Direct Readout (ADR), which some cameras – the F2AS, F3, FA, F4, and F5 – could display in their viewfinders via a wedge prism.
The next iteration, AI-s (1981), brought Nikon almost up to date. It finally added a maximum-aperture indexing pin to lenses (as well as a pin that transmitted the focal length to the camera). The only camera to fully implement this scheme was the FA, for its program and shutter-priority modes. There were three implementations of AI-s:
- The FA used the full AI-s protocol, going open-loop when shooting AI-s lenses (because it knew the maximum aperture, focal-length range, and stop-down rate) and selecting a program based on focal length. It went closed-loop when shooting AI and AI-converted lenses. By “closed loop,” I mean the camera reads the scene, stops down, takes another reading, and finally fires.
- The FG and its replacement, the N2000/F-301, used a similar open/closed-loop setup, except that these cameras could not read the focal length via the pin and thus used only one program (or one selected by the user).
- The N2020/F-501 would act like an N2000/F-301, but it could switch from P-Auto to P-Hi when a CPU-equipped lens with a longer focal length was mounted.
Of course, with closed-loop exposure, the value of AI-s is purely informational; the FA and FG/N2000 systems don’t really need to know the maximum aperture to work. And when it comes to “Program” operation with AI lenses, is it really programmed in the sense of a neat little graph – or is it just shutter speeds set against apertures stopped down from the maximum?
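The closed-loop idea can be sketched in a few lines. This is only a toy model – the FA’s actual firmware is not public – and the meter function and one-step “program line” below are invented for illustration:

```python
import math

def meter(scene_ev, f_number):
    # Toy meter cell: each stop of aperture closing costs one stop of
    # light (ISO 100 assumed throughout; values are illustrative).
    return scene_ev - 2 * math.log2(f_number)

def closed_loop_program(scene_ev, wide_open_f=1.4):
    """Read wide open, pick an aperture, stop down, read AGAIN, and set
    the shutter from the second reading -- so the body never needs to
    know the lens's maximum aperture or iris linearity."""
    first = meter(scene_ev, wide_open_f)          # reading wide open
    taking_f = 8 if first > 10 else wide_open_f   # invented "program line"
    second = meter(scene_ev, taking_f)            # re-read, stopped down
    return taking_f, 2 ** -second                 # shutter time in seconds

f_stop, t = closed_loop_program(15)   # bright, sunny-16-ish scene
print(f_stop, 1 / t)                  # f/8 at 1/512 s
```

The open-loop (AI-s) path would instead compute the taking aperture and shutter from the first reading alone, which is faster but depends on the lens reporting its maximum aperture and stopping down linearly.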
A tale of two cameras
Nikon’s technological peak came with the FA, pretty much the most sophisticated camera anyone had ever seen. Four (count ’em!) exposure modes – Program, Aperture, Shutter, and Manual, all powered by two MS-76 cells. Matrix metering with any native AI lens. Program shooting with any AI-s lens. LCD display in the viewfinder. And… it wasn’t quite ready for prime-time, developing a reputation for having flaky electronics and poor matrix metering. Or so people say.
In 1990, the successor to the FA, the N6000, hit the scene. The N6000 kept most of the FA feature set but swapped in some new features. Incoming ones included:
- A 2 fps internal motor drive to replace the bulky MD-15
- Auto film loading
- Power film rewinding
- Auto bracketing
- Slow and rear-curtain flash
- DX code reading
- Automatic balanced fill flash
- An “analog” (graphic) over/under-exposure display that pops up in manual mode
- Exposure mode indicator in the viewfinder
You could argue that the N8008 was the successor to the “technocamera” FA, but the N8008 was an autofocus camera. Or you might have argued the F4, which is a cross between an F3, an MD-12, and an FA. The departures with the N6000 were somewhat less notable:
- Elimination of interchangeable focusing screens (which were apparently not a popular feature of the FA)
- A new reliance on CPU lenses (AF and AI-P), which allowed the correct aperture to show in the viewfinder without an ADR display
- Loss of program mode for AI-s lenses (due to CPU dependency)
- Loss of matrix metering for AI-s lenses (same)
- Loss of a mechanical shutter speed
- Loss of 1/4000 sec on the shutter
- Change from MS-76 button cells to the somewhat less common 223A/CR-P2.
But for all intents and purposes, this was “it.” Although Nikon continued to sell (not make) the F3 into the mid-2000s, the only newish purpose-built manual-focus design was the FM3a, which functionally is little more than an FE2 with a shutter that can also be governed mechanically. It also followed a six-year period in which the N6000 was off the market.
On Earth-399, Nikon made manual focus cameras from 1959 to 2270. But that is also the universe in which “George Washington freed the slaves… Abraham Lincoln was regarded as the father of his country… and George Custer became president of the Indian Federation.” (“Superman… you’re DEAD… DEAD… DEAD,” 1971).
First in/last out (F3AF/F3)
Nikon had always managed to be both early and late to the AF party. The Nikon F3AF emerged in 1983, just three years into the F3 era. In fact, it came onto the scene at the same time as the DE-3 High Eyepoint finder (the thing that turns an F3 into the F3HP, the most popular variant). The F3AF was the first camera to use electronic contacts to control lens focus, with a contact system eerily similar to that of current Nikon lenses – but with a motor-in-the-lens implementation that most people came to associate with Canon. The manual focus version of the F3 proved wildly more popular and became one of the longest-running Nikons in history, with a 20-year run. That is catalog time, not necessarily production time. When it was time for the F4, Nikon was playing catch-up on AF with Minolta and Canon, whose amateur cameras were upping the stakes.
The forgotten Nikons (N2020/N2000)
In 1984-1985, just after the F3AF, Nikon made another pair of cameras, one with AF and one without. These were the N2020 (F-501) and its value-engineered little brother, the manual-focus N2000 (F-301). They were essentially motorized versions of the FG. According to lore, the N2000 was a last-minute decision by the accountants. That’s believable, since it allowed the company to drop the FG and build two cameras on a common set of tooling. But it cannot actually be true, because the N2000 was the first of the two cameras to be released – and by a full year.
Rather than the interchangeable screens of the N2020 (B/E/J), the N2000 had a fixed K screen (split prism plus microprism collar), an LED shutter speed display (but no AF indications), and no automatic selection between programs (on the FA, this had required a post on AI-s lenses; on the N2020, it required a CPU lens to tell the camera the focal length). Common to both cameras, though, was a traditional control layout, a coreless drive motor for film advance, auto-loading, an exposure compensation dial, DX coding, plus pretty much everything the FG had – save the +1.5EV backlight button (the N2000/N2020 had an AE lock button that served much the same purpose). One mystery is why the N2020 was typically sold with an AAA battery holder rather than the N2000’s AA holder – since it is fairly obvious that the battery chamber was designed around AAs, and the smaller batteries required a special inset tray. On the plus side, they do shave some height and weight off the assembled body. And the N2000/N2020 is a pretty heavy body.
The N2000 is a camera with a level of elegance that we forget about: a large, bright, spartan viewfinder, a normal control layout, and a certain fluidity of shooting. Motor drives can be very important if you are left-eye dominant. Plus normal batteries that you can buy anywhere. Plus nice, sharp edges. It’s just not a camera with the simulated chrome that is so popular with “the kids today.” And yes, simulated: pretty much every “chrome” camera made after the 1980s has plastic covers.
But what about the N6006/N6000?!
The N6006 is something of a hidden gem in the Nikon line; it has most of the things you like about the N8008 (sans 1/8000 top speed, AA batteries, and high-eyepoint finder) in a smaller package. It is actually pleasant to shoot, though it does carry the stigma of using 223 lithium batteries. That might actually have made a difference a few years ago, when you could walk into a drugstore and buy CR123As and 2CR5s, but today, all lithium camera batteries are more Amazon than corner store.
The N6006 is one of many Nikons that share the AM200 AF sensor array (the others being the N4004/F-401, N5005/F-401s, N8008/N8008s/F-801/F-801s, and the F4). As you might have surmised from the AF performance differences among these bodies, CPU speed and motor torque are huge determinants of speed. The F4 is tops in both CPU and motor power, and the N4004 has the smallest brain and the smallest muscles. The N6006 and N8008 are mid-range, and the N8008 has a more powerful motor.
The little brother, the N6000, loses some functionality compared to its AF twin: no spot metering (because that comes from the AF module), no built-in flash (spite?), and a slightly smaller LCD display (which omits the AF confirmation dot, obviously…). But all the same, it is much smaller and lighter. Oddly, it still supports (and, for P and S modes, requires) CPU lenses. As an adjunct for occasional manual focus with otherwise-AF lenses, it is fine; in fact, examples of the N6000 sell for less than any manual-focus-friendly interchangeable screen for any SLR or DSLR. So I would ask, are you better off…