The Kobayashi Maru test is not a test of character unless you see the world in terms of “go down in dignity with the [star]ship” or “be a coward.” Or whatever Nick Meyer thought the outcomes would be. Captain Kirk won the test by not accepting a binary decision tree. This is exactly how you should approach any problem that looks like it is unwinnable. Rewrite the simulation. Use a screwdriver as a chisel.
One of the ways you can do this is to ignore the process as presented completely, decide your goal state, and then selectively use whatever is available to get there. Face recognition is exactly such an exercise. Adobe would have you select one of two suboptimal tools (Lightroom Classic or Lightroom) and have you build out the recognition process and leave it in the platform where it started.
Not believing in the no-win scen-ah-ri-o (sorry, Shatner), I started with the first principle:
What is the purpose of face recognition in photos?
This is actually a really good question. The way the process on Lightroom proceeds (either version), you think the purpose is to name every person in every photo and know what precise face goes with every name. This view assumes that you are a photojournalist who needs to capture stuff. You will go bat crazy trying to achieve this goal if your back catalog is hundreds of thousands of pictures and you use Lightroom Classic (“Classic”) as your primary tool.
Let’s face it – you are (at least this year) a work-at-home salary man, not Gene Capa. The real utility of face recognition is to pull up all pictures of someone you actually care about. You need it for a funeral. For a birthday party. For blackmail.
That does not actually require you to identify precise faces, just to know that one face in the picture is the one you want. You already know this person’s name and how the person looks. And even if you didn’t remember, a collection of pictures of that person – no matter who else was in or out of the shot – would have one subject in common. You would know within a few pictures who John Smith was.
Taking this view, a face identification is just another keyword.
It’s not even 100% clear that you would ever need it done in advance, on spec, or before you had a real need to use it.
What do we know about face recognition in LrC vs LR?
Our statement of problem: 250,000 images of various people, some memorable and some not. I want to get to being able to pull up all pictures of John, Joe, Jane, or Bill. And I want this capability to last longer than my patience with Lightroom cloud. I want to be able to ditch Lightroom, even Classic, one day and change platforms without losing my work.
When you are figuring out a workflow, or trying to, it’s helpful to consider what your tools can and cannot do; so with Classic and Cloud, let’s start breaking down the capabilities.
- Both recognize faces with rudimentary training.
- Cloud is much faster than Classic and tends to have fewer false hits (due to Sensei).
- Both can do face recognition within a subset of photos.
- Classic can apply keywords to images that Cloud can see.
- Cloud cannot create keywords that Classic can see.
- LrC has better keyword capabilities, period.
- You can make an album in Cloud and have it (and its contents) show up as a collection in Classic.
- You can add photos to such an album or collection in either program and have them show up in the other.
Do these suggest anything? No? Let’s step through.
Let’s talk about some preliminaries that no one ever seems to address.
Order of operations. If you are starting from zero, you should identify faces in the import every time you import something. Not only are names of near-strangers fresher in your mind, it also prevents the kind of effort we are about to explore.
What’s my name? You must have a naming convention and a normalized list of names. It doesn’t matter whether you pick someone’s nickname, real name, married name, whatever. Whatever you decide for a person must be treated consistently. Is my name Machine Planet? Planet Machine? PlanetMachine? This has implications for Classic, where you can’t simply type a two-word name (Bill Jones) into the text search box without getting everyone named Bill and everyone named Jones. For that you might want to concatenate both names together (unless you want to use keywords in the hierarchical filters). In Cloud, the program can sort by first and last name, so there is value in leaving these separate.
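To make the naming-convention point concrete, here is a minimal sketch of the two forms discussed above: a normalized display name (useful in Cloud, which sorts by first and last name) and a concatenated token (useful in Classic, whose text search box would otherwise match every "John" and every "Smith" separately). The function names and normalization rules are my own invention; adapt them to your own list of people.

```python
# Sketch: enforcing one naming convention before you start tagging.
# Hypothetical helpers -- the rules here (collapse whitespace, title-case)
# are an example, not a prescription.

def canonical_name(raw: str) -> str:
    """Collapse whitespace and title-case: 'john  SMITH' -> 'John Smith'."""
    return " ".join(raw.split()).title()

def classic_search_token(name: str) -> str:
    """Concatenated form ('JohnSmith') for Classic's plain text search box,
    which treats a two-word query as two independent terms."""
    return canonical_name(name).replace(" ", "")

print(canonical_name("john  SMITH"))        # -> John Smith
print(classic_search_token("john smith"))   # -> JohnSmith
```

Whatever rules you pick, the point is that every tag, album name, and keyword for a person goes through the same function, so "Bill Jones," "bill jones," and "Jones, Bill" can never drift into three different people.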
Stay in the moment. Although you might be tempted to run learning against every single picture you have at once, this leads to a congested Faces view (or People view), slow recalculation on Classic and a lot of frustration. Do a day or a week at a time. Or an event. This will give you far fewer faces from which to choose, and fewer faces to identify. Likewise, if there is a large group picture in the set, focus your effort on tagging everyone in it. This will set up any additional Identified People in Classic and will kickstart Cloud.
Who’s your friend? You next need to decide who is worth doing a lot of work to ID. You are not going to do iterative identification (especially on Classic) with people you don’t care about. Leave their faces unidentified. Or better yet, delete the face zones. This is a very small amount of effort in a 200-shot session or a 36-shot roll of scanned pictures.
Start in Cloud. This part is not intuitive at all. Go ahead and sync (do not migrate!) all your pictures to Lightroom mobile. This consumes no storage space on the Adobe plan. If there are a lot that have no humans, use a program like Excire Search to detect pictures with at least one face pointed at the camera. This is a reasonable cut, since there are few pictures worth tagging whose only face is in profile.
The synch process will take forever. I don’t think there is a lot of point in preserving the Classic folder structure when you do this; I would just make a collection like “Color 2000-2010” in the Classic synched collections and dump your targets into that (n.b. a collection in Classic is just an alias to your pictures; making a collection does not change the folder arrangement on your computer). We are only using Cloud for face recognition; its foldering is too rudimentary and inflexible to be useful – although right-clicking in Classic to make folders (or groups of folders) into synched folders will let you adopt the Classic organization in Cloud, albeit flattened, without re-synching. Again, not very useful. Also, for reasons described further on, you want to have a relatively clean folder panel in Cloud because you will be making some albums, and you don’t need extra clutter.
Ok. Let the synch run its course, or start your identification work on Cloud as it goes. Cloud will start aggregating what it thinks is the same face into face groups, which you then must name. Start naming these according to the convention you chose. I would set the People view to sort by “count,” which naturally puts the most important people at the top (you have the most pictures of them). Let’s say you name one face group “John Smith.”
The process so far is pretty generic. To start crossing things over to Classic, you need to make folders (“albums”) in Cloud. Start with one per important person (“___ John Smith”). Search for that person. Dump the search results into the album. You can always add more later.
Now flip back to Classic. You will see collections under “From Lightroom.” Voilà! One of them is “John Smith.”
Now you can do one of two things.
You can simply make a quick check to make sure there are no pictures included that obviously are not John Smith. But after you do that, or not, you can mass-keyword everything in that collection “John Smith.” If you named John Smith consistently with any pre-existing Classic face identification of John Smith (i.e., not two different variations of the name), your searches will now have the benefit of both tools. Save those keywords down to the JPG/TIFF files (Control-S/Command-S) or XMP sidecar files (same), and you will forever have them, regardless of whether you leave the Adobe infrastructure. In fact, many computer-level file indexes can find JPGs and TIFFs by embedded keywords (which the index sees as text).
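The "index sees it as text" claim can be sketched crudely. A real indexer parses IPTC/XMP properly, but because keywords are stored as plain text inside the JPG/TIFF, even a naive byte scan finds them. The function and file names below are my own invention for illustration:

```python
# Crude sketch of why embedded keywords outlive Lightroom: the keyword
# string sits as plain text inside the image file, so a byte scan
# (a stand-in for an OS file index) can locate it without any Adobe software.
from pathlib import Path

def files_with_keyword(keyword: str, paths) -> list:
    """Return the paths whose raw bytes contain the keyword string."""
    needle = keyword.encode("utf-8")
    return [p for p in paths if needle in Path(p).read_bytes()]
```

This is exactly why saving metadata down to the files is the escape hatch: the names travel with the images, not with the catalog.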
Congratulations. Now you’ve hijacked Sensei into doing the dirty work on Classic.
With a small but not overwhelming amount of creativity, you could use a technique like this to cross-check your past Classic calls.
STOP HERE AND GO TO “CALIBRATING YOUR EFFORTS” UNLESS YOU ARE A MASOCHIST
Second, if you’ve missed your OCD meds, you can also use the results of this to inform your Classic face-recognition process.
a. Select this “From Lightroom–>John Smith” collection and flip to Faces view in Classic.
You are now seeing all “Named People” and all “Unnamed People.” Unnamed people are shown by who Classic thinks they are most likely to be. You can sort Unnamed people in various ways, but however you do it, you want to get the “John Smith?” suggestions into a contiguous section where you can then confirm or X them out. By doing this in the From Lightroom–>John Smith collection, you are not waiting for recalculations against every photo you have – just the ones that Sensei thought should have John Smith.
So the cool trick is this: if you see 106 pictures in From Lightroom–>John Smith, then you know you are probably going to be done when you have 106 confirmed pictures of him. Or done enough. John can only appear in a picture once. There will be a margin of error due to how closely Classic can approximate Sensei, but you can get to about 90% of the Sensei results without a lot of trouble. This is a bit better than Classic on its own, where more pictures of John Smith at an earlier age might be really buried down in the near matches. Further, Classic is something of a black hole for similar pictures because unlike Cloud with Sensei, there is no minimum required similarity score to be a suspected match.
b. You can, of course, drill down on John Smith as a Named Person. You don’t have much control over how “Similar” pictures are ordered (I believe it is degree of match for the face), but here, you can confirm a much more concentrated set (after you decide how to deal with the “fliers” who are not John Smith).
One other technique I have developed in the “confirming” stage is to confirm en masse (even if some are wrong) down to the point where the “not John Smiths” are about a third of the results in a row of Similar faces. A small number of “fliers” can then be removed by going up to the Confirmed pictures, selecting them, and hitting delete. Trying to select huge swaths of unconfirmed faces in Similar and then unselecting scattered fliers tends to really slow things down. As in, Classic really slows down as it tries to read metadata from everything you selected.
Incidentally including and then manually removing a few fliers from Confirmed does not seem to affect accuracy (because every recomputation of similarity is on the then-current set of Confirmed faces – changing that set changes the computation). If you have 99 pictures that are right and one that is wrong, it won’t even change the accuracy appreciably. If in Confirmed, you have 995 pictures that are John Smith and 5 that are not, again, the bigger set of correct ones will predominate future calculations.
Next, at some point, especially with siblings, Classic is going to reach a point where Jane Smith (John’s Sister) is going to show up as a lot of the “Similars” with John Smith. When this happens, go back to Faces (top level, always within From Lightroom–>John Smith), click on her, and confirm a bunch of her pictures. When you go back to Named Person John Smith, a lot of the noise will be gone, and hopefully more John Smiths will be visible in a concentrated set you can bulk-confirm.
Crossing back (optional)
I did write “iterate,” right? You might want to keep your Cloud face IDs as complete as possible, since there is not 100% correspondence between results from the methods used by the two platforms. This is relevant if you have already trained Classic on John Smith.
- In Classic, note the count in your From Lightroom–>John Smith collection. Say it’s 106 pictures.
- Do a search from your Classic Library for all pictures of John Smith. If you used a space in the name, add Keywords to the field chooser menus (via preferences) and select that line.
- Drag all of those results to From Lightroom–>John Smith.
- Flip to Cloud. They are now in that “John Smith” album. Or they will be when it synchs.
- Select all the pictures in the “John Smith” album.
- Hit Control-K (or Command-K) to bring up keywords and detected/recognized faces in the “John Smith” album.
- Now name any faces that are blanks – but should be John Smith.
- Now from the All Pictures view, search for John Smith and drag all his pictures to the John Smith album.
- In Classic, check your count. If it’s, say, 128 pictures now, that means that Cloud took your examples and found more John Smiths. And now they are ID’ed in Classic as well.
- Switch to Faces and confirm the 22 additional faces as John Smith. Now both systems have identical results.
Calibrating your efforts
For searches for random people, Cloud is still the best because it requires very little training. That said, for randos, you are using a tool that does not give you any permanent results. That’s probably ok for people who you don’t really care about. Or if you plan to be on Cloud forever.
For close friends and family, you may just run the “Crossing Over” exercise. I would do it in groups: do a bunch of albums on Cloud (say seven people), then do a bunch of naming on Classic (their collections), etc.
If you are really a neat-freak or compulsive, you could use the “Crossing back” step. But Sensei is reasonably good at what it does, so the marginal effect of adding Classic results to Sensei may not be much. If you have Excire, you might use it to find pictures that look like a picture of John Smith, which will give you a third means of concurrence.
The thing to remember about face recognition is that it is miraculous but also imperfect. It has to detect a face and then it has to identify a face. It doesn’t see how you see. Efficiency works at cross-purposes to accuracy.
But it is still vastly better than trying all of this on your own.
So here is a question: what’s the best way to catalogue and tag your pictures? Is it Lightroom Classic? Lightroom Cloud? Is it Apple Photos? Is it something else? Maybe it’s a lot of things. If you are a high-volume imaging-type person, you’ve probably wondered how to deal with things like tagging people. The most macabre application, of course, is the funeral collage. But say you have tens of thousands of pictures of family members and want to print a chronological photo album. Then what? Face recognition features in software may be your best bet. From a time standpoint, they may be your only choice. The problem is that different software has different competencies.
Something like Photos is designed to group pictures, more or less automatically, around people, events, dates, or geography. Think of it as your iPhone application on steroids. Photos is not big on user control. It is not even engineered to do anything with folders except display them if that’s how photos were imported.
Face recognition in Photos is incremental and behind the scenes: it only finds faces when you are not actively using the program, and over time, it batches up groups of pictures which you confirm or deny as a named person in your Faces collection. To establish your Faces collection, you have to put names on faces in a frame where faces have been detected. This tends to mean that face recognition proceeds by which faces the user thinks are most important. As it should be.
Unlike Lightroom, Photos does not presume that detected faces are unique. It applies a threshold such that if it detects Faces A, B, C, and D, and they are close enough, they are treated as the same (unnamed) person. As such, naming one person can have the unintended effect of tagging a bunch of false matches. Either way, you can error-correct by right-clicking the ones you see that are wrong.
My assessment of Photos is that it is not suitable as a face-recognition tool if you have hundreds of thousands of images, for several reasons:
- Its catalogs are gigantic, even if you use “referenced” images. Photos loves it some big previews, no matter what you do. For scale, my referenced Photos library is 250gb where my entire Lightroom Classic library folder is 40gb (both excluding original image files – so Photos sucks up 6x the space).
- The face recognition process appears to be mostly (if not completely) local; it runs in spare processor cycles, and in my experience, it can cause kernel panics. Hand-in-hand with this is the fact that you can never actually turn Photos off. It’s part of MacOS.
- There does not appear to be any indication that Photos actually writes metadata to files. So when you move to a new application, you’re starting from zero.
- You can’t really use it in conjunction with a grown-up asset management system like Lightroom.
Photos is, however, good for generating hilariously off-base collections of photos (memories) with weird auto-generated titles (“Celebrate good times” with a crying baby as the cover photo). Or collections based on the date a bunch of pictures taken over decades were scanned (such as my 42,600 pictures apparently taken on December 12, 2008). I actually have no idea how these are generated. But they are funny.
I’m sure Photos is really good for those funeral collages, though.
Lightroom Classic (LrC)
Something like Lightroom Classic (LrC) is designed around manipulating, filtering, and outputting large numbers of pictures at once. This is, indeed, the killer app for handling large volumes of photos, and becomes a single interface for everything. It’s OK, but not great, for face recognition.
To put it mildly, LrC’s face recognition is processor- and disk-intensive. The best way to use it is on a few hundred photos at a time, so that your identifications don’t swamp everything in your collection in a recalculation. LrC is good at showing you different faces all at once, as single images, so you can get cracking on identifying as many new “people” as you have patience for in one sitting.
The top level of the Faces module shows you (i) “Named People” and (ii) “Unnamed People.” You need to name at least one “Unnamed” person to start. After a while, the system will try to start putting names on “Unnamed” people. If you have a Named person named “John Doe” and are presented with an image that is “John Doe?” you can click the check box to confirm it and the X box to remove the suggestion (clicking again removes the detected face zone, such as if the system mistook a 1970s stereo for someone’s face).
Once you have done that, you can drill down on a “Named” person to see what pictures are “Confirmed” and what pictures are “Similar.” Again, to move from Similar to Confirmed requires an affirmative call. Here, you only get a check box. There is no “Not John Doe” option, which means that every possible match is shown, ranked in what LrC thinks is similarity. This is actually problematic because as you confirm more pictures, the number of “Similar” pictures rises exponentially. This puts a huge computational drag on things.
Wherever it happens, confirmation of a face’s identity is an affirmative process that is repeated for each picture (you can select several). This prevents false IDs based on grouping disparate real people into one “face,” but it also makes tagging excruciatingly repetitive. And slow. Highlighting faces to group-confirm or identify can have the “highlight” lagging far after your click. And God help you if you click six pictures and then try to type a name into one to rename all six. It works about half the time. The other half, it auto-completes with a totally unintended name. If you accidentally confirm the wrong face for a given name, you can highlight the errant thumbnail and hit Delete (this is not well documented).
Critically, the top level of the Faces module (where you see all named people as thumbnails) is the only place where the system puts a “most likely name” on unnamed people. Otherwise, looking at any particular “Named Person,” the same person – Bob – might show up as a similar for John Doe. And when you switch to Richard Roe, Bob will show up as a “similar” for him as well. This is part of the reason why people for whom you have 10 actual pictures always show up with 20,000 “similars.”
A big advantage of LrC over other solutions is that you can see and tag faces within specific folders, collections, or filmstrips. This lets you make context-sensitive decisions about who is who. For example, I am pretty sure that my kids did not exist in the 1970s. Or I might know that only 6 people are represented on a single roll of film that constitutes a folder in my library.
When a name is confirmed on a picture, that name is written as a keyword to the metadata in the library. It appears that XMP files (if you chose that option for RAW files) are written with the actual coordinates of faces in the picture, which allows some recovery if you have to rebuild a library from scratch. The important thing is that a picture is keyworded with the right names. Face zones are nice but not quite as critical in the long run because in reality, you only really care whether a picture contains John Doe or Richard Roe, not which one is which in a picture of both.
Always save your metadata to files if working with TIFFs/JPEGs/scans (Command+S) or “always write XMP” with RAW camera files. This helps keep your options open if you want to get divorced from Adobe. Or if your Lightroom library goes wheels-up and you have to rebuild from zero. There is no explanation for why this program just doesn’t write an XMP for every file. It would make things easier.
Lightroom [CC or “cloud”]
What a hot mess. The only thing that really works about Lr CC is face recognition. The rest of it is a flashy, underpowered toy that despite being “cloud” based can still consume massive amounts of hard drive space and processing power. If your photos are in the Adobe cloud, or synched from LrC, the program works with smart previews.
Adobe’s Sensei technology is a frighteningly good face-recognition system. In the People view (mutually exclusive with the Folders view), it takes all of your photos and groups them according to what it thinks is the same face (like Apple Photos). Put a name on that face, and it might ask you if this other stack over here is the same face. It is extremely fast (because it runs in the cloud). Sensei can also identify objects, and to some degree, places in photos. Naturally, the most important people in your life have the highest counts, and you can sort unnamed faces by count and work your way down. Things break down when 400 people have 15 pictures apiece, though…
The system, though, has some amazing limitations that are pretty clearly engineered in by a company that is trying to move everyone to its walled garden. Two of these four bear directly on the issue of why a hard drive – and keeping your own metadata local – is your ladder out of that walled garden.
First, metadata transfers to Lr are one-way. The program can absorb keywords applied in LrC, but not recognized faces/zones, and nothing you input in Lr can ever rain down on LrC. There is no programming-related reason that prevents metadata from flowing the other way, aside from intentionally engineering this out of being possible — so that you are eventually forced to store all your stuff in Adobe’s per-month-subscription storage space. Because paying a monthly fee to use programs that aren’t really being updated – like LrC – was not bad enough.
Second, you cannot force face recognition on arbitrary subsets of your library, at least not efficiently or intuitively. If you came at this program assuming that it would be like LrC, you would conclude that there is no way to do this. Instead, you have to select a group of pictures and hit Command/Control-K (for “keyword” – how intuitive…) to see the faces present in the picture or group. Lr then shows you the single picture with the face boxes – and the collection of faces in the picture on the right panel. This is great – but why is it so hard to find? You also get the impression that when you do this, the face boxes are generated on the fly. But the critical defect here is that the “named faces” that are shown as thumbnails are even smaller than the other face thumbnails in Lr.
Third, when asked to “consolidate” two faces, there is no way to flip between the two collections. This is an oversight – you are not asked to name a person based on one photo, but for some reason you are asked to make a consolidation decision that could have catastrophic consequences — based on a single fuzzy thumbnail. If in doubt, sit it out.
Finally, you can’t push face recognition data back down to LrC. So if you use LrC, you basically end up with completely separate face-recognition data sets based on the same photos. This is a big-time fail.
Well, in terms of applications you can access for a Mac right now, the options are ok – but not great. Stay tuned for Part 2, in which we look at a way to leverage LrC and LR CC against each other to speed things up.
Is there a problem here?
Nikon packages terrible directions with the standard medium format holder for its high-end scanners. Rather than going crazy with your FH-869S and pining for an FH-869G glass carrier,* let me suggest the following to maximize the usefulness of the medium-format (“Brownie”) carrier that came with your Nikon LS-8000ED or LS-9000ED.
* There is nothing wrong with a glass carrier except dust, inconvenience, skewed negatives, expense, rarity, and a tiny amount of overall resolution loss from the antinewton glass. For some negatives (panoramic, warped, etc.), they are indispensable.
A better way to use your glassless holder:
1. Make sure the rubber grip strips are clean. This is crucial – and probably responsible for most of the complaining about the standard carrier. Clean them with a cotton swab and the alcohol that comes with a cassette tape cleaning kit (or Radio Shack “Non-Slip Fluid,” 44-1013). DO NOT touch the strips with your fingers afterward. Even your skin oil can make them too slick to work.
2. Turn the carrier so that the open-close slider is on the bottom and the end that enters the scanner is on the left side (see the picture at the top). This is going to establish the orientation that you will need for the rest of these directions.
3. Use your forefingers to open the gripper latch at the top. Position the film so that it “corners into” the end of the carrier with the two prongs and the film channel at the top. The end of the filmstrip should be fully supported. Now push the negative strip up toward the ridge at the top of the channel underneath the gripper latch. Get it as even as you can (and it should be possible to get it very, very even). Snap the latch down.
4. Make sure that the open-close slider at the bottom (the one with the “Pac Man” symbol) is in the rightward (“open”) position. Open the bottom gripper latch. Slide the bottom gripper assembly upward until the film edge uniformly contacts the ridge. Be aware that the gripper assembly can be rotated slightly around the open-close slider. You will probably not be able to get it perfect, but the beauty is that you don’t have to. When you have it as close as you can, snap down the film latch.
5. Now gently pull the bottom gripper toward you. Note again that it still pivots around the open-close slider. Get it tight and pivot it until the entire film is flat. This gives you a last chance to make sure that the film is evenly tensioned.
6. While holding the gripper assembly in position, use the last couple of fingers of your strong hand to push the slider left, to the closed position, to lock things down.
7. Run over the film with a rocket blower.
8. Stop complaining about this carrier.
Yeah, we still have it. Not the magic touch, but the scanner (with the magic touch). Portions of the below appeared on dantestella.com years ago; I have added some updates and new notes on light sources, a subject on which there is tremendous misinformation on the ‘net.
What is a Pakon?
The company is best known for its plastic slide mounts, which in the old days you would buy to fix the cardboard mount that your projector mangled. But as a division of Kodak, it began to produce minilab scanners (the F135, F135 Plus, F235, F235 Plus, and F335).
Many people are familiar with the Pakon F135 and F135 Plus, which have become very popular as tabletop scanners. What makes these scanners genius is that they do scanning on one pass, without annoying prescans or the rat-a-tat-tat of stepper-motor driven film scanners.
The PSI software is even more ingenious. Basically, you feed it a roll of film, and:
- It can take strips of film up to and including a 40-frame uncut roll.
- It scans all of the frames as a bitstream image at rates in the hundreds of frames per hour, with Digital ICE turned on.
- It uses DX codes on the film to determine the frame number and applies that to the filenames of the resulting files (JPG, TIFF, or RAW, to your preference)
- It automatically finds the frames, DX coding or not. On its software, you can adjust framing after the fact.
- It quickly and with astonishing accuracy corrects color and exposure, even on frames with exposure errors or fading.
- It spits out all of the files, in sizes up to 3000×2000 (this is a 2000dpi scanner) onto your output drive or media (some earlier models require software fixes to output at this resolution).
- It does not require a special console, just XP (real or emulated) with an unformatted N partition on the boot drive. You install the software and go to town.
If you are feeling especially technical, you can use the TLXclient software, which allows different bit depths, full-out-to-the-edges framing, unusual frame sizes (you can scan individual half frames or Xpan frames – or output them as full-resolution strips), and many other things. It comes into play more, one would surmise, if the Pakon is your only scanning machine.
How is a Pakon different from other negative scanners?
This minilab scanner differs from your Coolscan in a few key ways.
First, they are designed for speed. An F235 Plus, for example, will do 800 frames an hour at 3000×2000 resolution. Yes, that’s 33 rolls per hour, or a roll of 24 frames about every two minutes. Most people would burn through a lifetime of black and white 35mm negatives in a few days of work. The 135 series runs at about half that speed with ICE off.
With Digital ICE turned on, the 235 Plus still does 400 frames an hour. Reduce the resolution to one of the lower settings (such as what you would use for web-sized pictures or 4×6 prints), and it really flies. Part of the speed comes from obviating negative carriers, the cumbersome and relatively fragile part of any consumer-grade scanner. The rest is dispensing with the prescan, which introduces more complication in the process.
These are the relative speeds of Pakons vs each other (Digital ICE off / Digital ICE on) for a maximum-resolution scan. This is per hour. Loading film in strips slows this down slightly. This is the order in which the machines were released:
- F235 (400 / 250)
- F235 plus (800 / 400 )
- F135 (293 / 220 @ 1500 x 2000)
- F335 (1053 / 790)
- F135 plus (477 / 387)
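The rates above translate directly into per-roll times. Here is a quick sketch of the arithmetic, using the ICE-off/ICE-on figures from the list (the dictionary layout is mine; the numbers are the ones quoted above):

```python
# Frames per hour (Digital ICE off, Digital ICE on) at max resolution,
# as listed above. Dictionary structure is my own for illustration.
RATES = {
    "F235":      (400, 250),
    "F235 plus": (800, 400),
    "F135":      (293, 220),   # at 1500 x 2000
    "F335":      (1053, 790),
    "F135 plus": (477, 387),
}

def minutes_per_roll(frames_per_hour: int, roll_frames: int = 24) -> float:
    """How long one roll takes at a given sustained scan rate."""
    return 60.0 * roll_frames / frames_per_hour

# F235 Plus, ICE off: a 24-frame roll about every two minutes.
print(round(minutes_per_roll(RATES["F235 plus"][0]), 1))  # -> 1.8
```

Checking the headline claim: 800 frames per hour is 800 / 24 ≈ 33 rolls of 24 per hour, which matches the "roll about every two minutes" figure.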
One thing that is clear is that the speed of Digital ICE processing ramped up to where it was very close to the limit of the scanning speed. But that is of no moment if your life is all silver b/w or Kodachrome, where dust and scratch removal doesn’t work.
Second, Pakon scanners are designed for a minimum of human intervention. Despite the availability of an SDK for this scanner, the proprietary PSI software is the only fully finished piece that will run this scanner. This software, by the way, is brilliant in its simplicity. Even in “advanced” mode, it has only a few settings: what type of film (color, b/w, slide), how many frames per strip (4, 5, 6 or many), whether you want Digital ICE on or off (color only), and the roll number that will become the name of the folder when you save the roll. That’s it. The machine scans as much film as you want to give it, figures out where the frames are, does all color corrections without human intervention (unless you want to participate) and kicks out your choice of output (3 resolutions, JPG or TIFF, RAW or processed). It even reads the DX codes off of the film and gives each frame the name of the nearest barcoded frame number. Brilliant.
Buried in your program folder is something called TLXclient, which you can use for oddly-sized frames (such as half-frame 35mm and Xpan). It’s a little more geeky, but it lets you play with wide frames (a lot of the time you can get black all the way around a 24×36), play with the bit depth, and do other things that aren’t really central enough to PSI’s minilab mission.
Finally, the Pakons do not generate information that you do not need. The first thing a gearhead will look at is the scanning resolution. The maximum is 3000×2000 (6MP), which is an acceptable resolution for an 8×12 on a dye-sublimation or inkjet printer, if not a Frontier.
“Wait, what about 4000dpi?!” Most 35mm pictures don’t get enlarged beyond 8×12 – most, in fact, are just shown on computer screens these days – and for the rare situations where you need more, you can always use a high-end desktop negative scanner, pay to play on a Flextight, or have your work drum scanned.
If all you are out to do is make quick proofs to see what is worth scanning on a much higher-resolution machine – and you just want 1500×1000 thumbnails – the F235 Plus blows out up to 3,000 of those an hour, or nearly one per second. You would need a spotter to catch the negatives flying out of the machine. And a second helper feeding it.
But let’s be real here. If you for some reason believe that you need to scan every single picture you have, you will never get it done on a normal negative scanner that runs with a carrier and Vuescan or Silverfast.
What’s different about the F235 Plus?
The 135 and 135 Plus have a “dog bowl” form factor in which film travels in with the sprockets at top and bottom, around a curve, and out the other side. Negatives end up neatly in the tray. They are not as fast as the 235 and 335 series machines for a number of ergonomic reasons in addition to the slower transport speed.
The 235, 235 Plus, and 335 use a larger chassis (about the size of a large bread making machine) and take film straight through and out the back into a negative bin made of Lexan. The 235 Plus and the 335 are the speed demons, with the 335 — exceptionally hard to find in working order — edging out the 235 Plus by 20% with no ICE and almost 100% with ICE. They can take shorter strips of film than the 135 – down to two frames – though you may want to use a chopstick to nudge the strip to engage with the sprocket rollers.
But the real difference with the 235 Plus is that it uses a halogen light source and not an LED. Many people have made uninformed suggestions that this bulb is somehow difficult to find, expensive, or otherwise a problem. It’s not. You can access it by taking one magnetic-catch cover off the scanner.
The exotic-sounding “Solux” bulb is actually a 12V, 50W EIKO MR16 (GU5.3) track-light bulb whose only special parameters are a 24-degree throw angle and a 4700K calibration (so close to daylight). This bulb was not developed for the F235 series but was an off-the-shelf (and still current-production) art museum track-light bulb whose fitting, voltage, and wattage are identical to bulbs in lamps you probably have around your house. So even if you had to wait to buy a 4700K version for a whopping $10-14, you could march down to the local hardware store, buy something reasonably close for $2, and be back in business in minutes. Witness:
So what, you ask? LEDs go 10,000 hours instead of 1,000. Why should we put up with a bulb that has to be replaced? One could always point out that 1,000 hours on this machine is 800,000 b/w negatives, which is several times more than anyone short of a professional photojournalist shoots in a lifetime.
But the real reason is color. A lot of early Kodak scanners ran on halogen light sources. Why? It’s all about color. Kodak was always fixated on perfect color in all of its systems, and at the time the F235 and F235 Plus came out – and even now – you can’t get a Color Rendering Index of 98 with LED. CRI measures how evenly a bulb’s spectrum compares to a reference light source, and until recently, LEDs scored very low because they have holes in their spectral output. And if you are fixated on the quality of color through transparency film, the white LEDs in play in the Pakon era were nowhere near the barely-90 CRI that LEDs are hitting today.
The other thing is that the F235 system is highly diffused, like a diffuser enlarger. LED light sources are very concentrated and often very unforgiving of other than perfect negatives. If you have ever compared the output from a Nikon LS and a Flextight (or a Sprintscan), you know that diffused light sources don’t multiply the retouching workload later.
So how did LEDs get into the 135 and 335? They were later machines, and as slide shooting went off a cliff, there was little call to maximize color rendition for that application (and even the declining use of film made the slower speed of the 135 completely livable). LED turned out to be fine for negatives (note that the 135 series did not have native chrome capability until a later version of the software, which might be employing its own methods to correct for the light source).
Today you could probably retrofit the 235 with a direct-fit LED bulb (query what might happen if you put the scanner in “dim” mode, though) or pretty much any light source. The machine calibrates itself to the light source on startup.
But in general, the F235 Plus is a very fast platform that is easy to clean, does not twist your negatives around curves, and is well suited to scanning several rolls, correcting them all at once, and then hitting the next set. The one downside is that it has a fan, so it is a little louder than a computer. Not 747-jet-engine loud, but still noticeable.
The only sad thing about the F235 Plus is that you might find that your life’s production of negatives zips right through, and after you scan all of the negatives in your family and from some of your friends, there are no more worlds left to scan, er, conquer.
I booted mine up after having it in the box for a while. I ran a few long rolls of film that I forgot about until after I moved. It’s magic. The machine is genius. But now what?
Odi et amo. Quare id faciam fortasse requiris — nescio, sed fieri sentio et excrucior! (“I hate and I love. Why do I do this, perhaps you ask. I do not know, but I feel it happening and I am tortured.” – Catullus)
The Imacon/Hasselblad Flextight series of scanners is a testament to the power of patents. Each is devilishly simple: negatives get sandwiched between a 400-series stainless sheet and a flexible magnetic sheet, bent around two big wheels, and run between a fluorescent tube at the bottom and a lens assembly with a 3-line CCD sensor pointing down from the top. This is true of everything from the cheapest model, the Photo, all the way up to the 949.
The variations in Flextights come in the larger models (not the Photo or 343). These have a zoom assembly on the lens that redeploys the CCD pixels to a smaller film width to give 5,700 dpi or more on 35mm film. Almost all Flextights, though, max out at 3200 dpi for 60mm-wide 120 and 220 film, as well as double strips of 35mm.
This article will address the operational differences between a model 343 and a Nikon Coolscan medium-format scanner. The 343 is the only reasonably affordable model that interfaces to FireWire and modern computer operating systems. Some aspects of the 343’s operation are the same as on the X5, the $25,000 champion.
Negative holders. The first fundamental difference between a Flextight and a Nikon is the design of the negative holders. Nikon’s FH-869 carrier uses clamp-down strips to grab the edges of the film and then uses a thumb-operated tightener to tension the film flat. This works most of the time, though it can be tricky to load. The Nikon carriers physically max out at a 6×18 strip, meaning that unless you want a crease in the middle of a frame, you can have a maximum of three 6×6 frames, two 6×9 frames, or four 6×4.5 frames. The alternative (and extremely expensive) FH-869G glass holder sandwiches film between two sheets of glass. Because it does not need to grip the edges of the film, you can scan the entire width of the film. It is not as hard up against the 6×18 limit, but pushing past it is still a little risky.
The Flextight holders, by contrast, use magnetic pressure to hold the edges and then bend the entire assembly around a curve to totally flatten the negative at the one spot it is being scanned by the line CCD. Flextight holders generally do not have issues with super-long filmstrips because the ends don’t crunch up against anything (they do hang out of the carrier and/or scanner). Flextight holders, though, because they work best with support on all four sides of the film, are much more format-specific than Nikon holders are. Flextight holders don’t use glass, which also eliminates a dust surface. That said, you cannot get the full width of the film (with all the edge printing) on a Flextight because there would be nothing holding the film.
For most purposes, the Flextight is an easier choice for loading, though not cheap when you have more than the stock holders. The Nikon, though, excels for randomly sized bits of film and anything that is not a traditional 35mm or 120 frame.
Illumination. This is a big difference. The Nikon uses an IR-capable LED light source that can be used by Digital ICE to compute away most dust and scratches. This light source can be adjusted in intensity as necessary to penetrate dense negatives. The Flextight uses a cold-cathode tube (in the 343, it’s basically an off-the-shelf 6w daylight tube) whose intensity is not variable (scanning speed, however, is). The lack of Digital ICE is partially offset by the fact that the Flextight’s tube is a more diffuse light source that tends to cut down on the effects of dust and scratches.
Speed. The Nikon is much faster as a “proofing” machine, particularly with programs like Silverfast and Vuescan, which can preview and scan frames at many times the speed a Flextight can. The Nikon (and similar scanners) use a positioning motor that addresses an 18cm area and a stepper motor that advances the film across the scanning head over a 9cm area. Programs like Silverfast hijack the positioning motor to do a quick scan of the whole 6×18 preview area. The Flextight, for its part, moves so slowly that its operation is barely detectable until it hits the end of a scan and ejects the negative holder.
Negative size. There is a huge convenience factor in scanning a 6×12 or 6×17 negative in one pass without stitching multiple scans together (the Nikon has a positioning motor that can address 6×18, but its stepper can only do 6×9). You can set the scanner and pay attention when it kicks the negative holder out at the end. That said, the 343 only handles a maximum of 5 frames of 35mm film in a strip, though with an aftermarket holder, you can do two strips at a time. The biggest holder available is 58×184mm, which normally does three 6×4.5, three 6×6, or two 6×9 frames. The 6×4.5 capacity is a bit diminished compared to the Nikon when cameras space frames more widely (like the Fuji GS and GA cameras).
Focusing. The big difference between a Nikon and a Flextight comes in focusing. Because negatives can be all over the place in the Nikon carrier, it needs to focus – and you have to arbitrarily pick your focus point. The 343 avoids this by having focus fixed at the factory (the grown-up Flextights can focus to a degree). As long as your holders have the right thickness of metal, focusing works great and without the clack-clack-clack of Nikon focusing.
Optical path. The Flextight – like the Pakon 235 and 335 – has the CCD pointing down, through a lens, at the film. This eliminates a dust surface and also helps keep things clean. The Nikon (and most negative scanners) fold the light path with a 45-degree mirror that, depending on its care and feeding, might get dusty. Might.
Software. This is perhaps one of the weirdest comparisons imaginable. The Flexcolor software that comes with the 343 is about as basic an application as you can imagine. There are very few controls aside from original media type, curves, brightness/contrast, sharpening, and frame size. The most complicated thing about Flexcolor is understanding how to tell the scanner what holder you are using (picking the wrong one can lead to some strange noises).
For the Nikon, since Nikon Scan is deprecated, your choices are Silverfast (which is really powerful but really special in its user interface “innovation”) and Vuescan (virtually free but difficult to control and prone to blown-out, yet somehow specked, highlights). You can, of course, use Nikon Scan with Windows XP (and possibly 7). And speaking of XP, Silverfast 6.5 has wonderful medium-format frame detection with a Nikon scanner and Windows, not so much with Silverfast 8.
Equipped with the ability to recall multiple profiles and settings combinations, Silverfast seems to have much better capability to produce a usable scan without user intervention; Flexcolor seems to anticipate post-processing by the user, not least to correct the sharper-than-average dust and scratches. Oddly, Flexcolor defaults to unsharp masking at 250% – is this why the Flextight has such a reputation for crazy sharpness? Not really. The Flextight is no slouch set at 0, and the zero setting will give you a lot less film “grain.” More on this later.
Durability. There are three major aspects to durability. First, are the negative holders going to fall apart? I am fairly convinced at this point that Imacon, Nikon, and Polaroid designed their negative holders to be the weak link in what is otherwise bulletproof hardware. The Nikon FH-869 has little locking barbs that eventually wear out, and the Sprintscan 120 had a medium-format carrier with little locking pins that seemed destined for failure. Luckily, if that happens with the Nikon or Polaroid, you can just get a 3mm AN (anti-newton) glass from Focal Point in Florida and use that instead of the top cover. It actually holds film flatter anyway. The Flextights, likewise, seem to have consumable carriers in the sense that the magnetic material will eventually fatigue and crack, particularly if handled roughly. Fortunately, there are Chinese replacements on eBay that work perfectly for 1/3 the price of a Hasselblad replacement.
Second, is there a likely mechanical failure in the future? The CoolScan series of scanners has a phenomenally long service life. The Flextight looks almost too simple to fail.
Finally, what about bulbs? The LED light source in the LS-8000 has a lifespan measured in years of continuous use. The fluorescent tube(s) in a Flextight, provided that you can live with less color correction, cost a couple of bucks apiece and are easily installed by the user.
Relative performance. Over the next few weeks, I plan to run some hard comparative tests, but I’ll share some preliminaries. First, 2000dpi and up is where it becomes clear that zone-focusing a 58mm lens on a 6×12 camera leaves some things in focus and other things not. It remains to be seen how much more useful detail is actually generated going to 3200dpi (Flextight) or 4000dpi (Nikon). As someone who scans primarily TMY in 120, I can observe that the Flextight does not interact with grain quite as obviously as the LS-8000 and that the Flextight is a little more graceful when it comes to dealing with thick highlights. The Nikon in general creates more “grain” (or whatever) there with Silverfast, and Vuescan is very difficult to control in that area – and often ends up being worse. I plan to do some more testing with overexposed TMY and some older, denser negatives on things like Verichrome Pan. One thing that is clear on the Flextight is its ability to deal with not-so-flat negatives and equally resolve grain all the way across the frame.
One concrete comparison. Here is a comparison between a Flextight 343 and a Polaroid SprintScan 120 (this is an easy comparison because it does not require me to cut individual negatives to fit the Nikon carrier). The Polaroid here is being used with its AN glass carrier.
The test is a 320 pixel-high section of a Flextight scan and a 400 pixel-high section of a Polaroid scan (left side of the building). I equalized the visual contrast between the two originals (the Flextight was a bit contrastier out of the gate with its software’s default settings) and then scaled the Flextight image up to 400 and the Polaroid down to 320. N.B. that the Flexcolor software was set to zero sharpening, as was Silverfast for the Polaroid (well, inasmuch as you can really turn off sharpening in Silverfast).
You could make several observations here (yes, the negative is slightly rotated as between scanners, but straightening it would have messed up the resolution…)
First, at native resolution, the Flextight (upper right) is a tiny (and I mean tiny) bit sharper than the Polaroid (lower left).
Second, when scaled down (upper left), the Polaroid benefits from the automatic sharpening-on-resampling that Photoshop does. If you put an unsharp mask on the original Flextight image (upper right), you would get the same thing.
Third, when scaled up, it’s actually hard to see that the Flextight gives up anything to the higher-resolution scanner.
Finally, 4000dpi reacts really poorly with grain on Tri-X, almost as if the grain is at the Nyquist frequency for the scanner. Once the grain gets into the picture, scaling down makes it worse. By contrast, scaling up a 3200 dpi image does not result in even as much grain as a 4000dpi gets at its native resolution.
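The folding effect behind grain aliasing can be sketched with a toy calculation. The numbers below are hypothetical, and real grain is random rather than periodic, so treat this as an illustration of the sampling math, not a model of film:

```python
# When detail is sampled at a rate below twice its frequency, it "folds"
# back to a different apparent frequency. Grain structure near the
# sampling rate can therefore reappear as coarser false "grain."

def alias_frequency(signal_freq: float, sample_rate: float) -> float:
    """Apparent frequency after sampling, folded about multiples of the rate."""
    nearest_multiple = round(signal_freq / sample_rate) * sample_rate
    return abs(signal_freq - nearest_multiple)

SCANNER_DPI = 4000                 # samples per inch
NYQUIST = SCANNER_DPI / 2          # finest honestly resolvable detail: 2000 cycles/inch

grain = 2200                       # hypothetical grain detail just above Nyquist
print(alias_frequency(grain, SCANNER_DPI))   # 1800.0 -- rendered coarser than it really is
```

Detail at 2,200 cycles per inch, which a 4000dpi scan cannot honestly resolve, comes back as a coarser 1,800-cycle pattern. A lower-resolution scanner whose optics blur that detail away before it is sampled can end up looking smoother, which is one plausible reading of the Flextight result.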
All of this is generally consistent with more casual comparisons between the 343 and the Nikon. Unlike the comparison between flatbeds with their “fake” bazillion dpi resolution versus real resolutions that are much lower, higher-end dedicated film scanners actually track very close to their nominal resolutions (a Nikon LS medium format scanner has hit 3,900dpi in German tests, for example). So one logical conclusion might be that as between a 3,200 and 4,000 dpi scanner on a relatively coarse-grained film, that last 20% is essentially empty magnification.
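The 20% figure is easy to check with pixel arithmetic (the 56mm usable frame width below is an assumption; the actual crop varies by camera):

```python
# Linear pixel counts for a nominal 56mm-wide medium-format frame.
MM_PER_INCH = 25.4
FRAME_MM = 56   # assumed usable width of a "6x6" frame

for dpi in (3200, 4000):
    pixels = round(FRAME_MM / MM_PER_INCH * dpi)
    print(f"{dpi} dpi -> {pixels} px across the frame")

# 3200/4000 = 0.8: the 3200dpi scan already captures 80% of the linear
# resolution; the remaining 20% is the "empty magnification" in question.
print(3200 / 4000)
```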
My takeaways. This observer would suggest that the Flextight story of superiority is true but not for the “sharpness” / film-flatness reasons that always seem to be bandied around.
First, as against a glass negative carrier, there is zero, zip, zilch to suggest that the Flextight is markedly superior to higher-end dedicated negative scanners. Film only needs to be flat enough that all of the grain is within the depth of field of the scanner lens and that it not be buckled so much that there is visible distortion. Where a scanner autofocuses, it focuses on the middle of the negative, not the edges, which may sit lower or higher than the focus point. So when you get to the point that a scanner can focus on the center of the frame yet resolve film grain at the top and bottom edges as well as a Flextight does, you’re already where you need to be. Most glass carriers will achieve this, as will a standard carrier whose top latches have been replaced by a sheet of AN glass.
Second, scaling a Flextight scan up to 4000dpi tends to demonstrate that 4000dpi is not a quantum leap in resolution that bicubic interpolation cannot make up. Even 3200dpi on a negative will yield enormous prints (looking above, you will see that even the apparent size difference at 1:1 between 3200 and 4000dpi is tiny). No Flextight except the most recent X1 and X5 (both of which cost as much as cars) has better resolution for medium-format film than the 343 does.
Third, for situations where 3200dpi does not interact badly with film grain, the Flextight actually does better at original and upsized images than a 4000dpi scanner will. Grain aliasing is significantly reduced with T-Max 400. Progress.
Fourth, for oversized scans of 120 film, it is easier to get things flat in a Flextight carrier than a typical glass carrier for a dedicated negative scanner.
Finally, the Flextight is dead-quiet when running, which is a lot more than I can say for the rat-tat-tat-tat of stepper-motor driven units.
Scanners like the LS-8000 have their advantages too.
One, the Nikon scanners can scan negatives far faster, which means a big productivity increase when a big part of your scanning is previewing.
Two, the Nikon has Digital ICE, which can overcome film defects far more severe than the cold cathode light source of a Flextight can overcome. The flip side of this is that the LED light source in a Nikon is both permanent and harsher, which increases the contrast.
Third, glass carriers are far more flexible when it comes to scanning irregular, damaged, or strangely proportioned negatives. Flextight carriers have to be able to hold negatives on three sides to hold the negative flat – and that means that every deviant negative size requires a separate custom-made holder.
Finally, Flextights are not efficient scanners of mounted slides, and with an autofocus scanner, mounted 35mm slides are flat enough that the Flextight is not going to return a massive increase in flatness (more expensive Flextights can deliver far higher resolution, up to 8000dpi though – but for the cost of a higher-end Flextight, you could have a lot of actual drum scans done of your favorite work…).
Conclusion. It might be six of one, half a dozen of the other. A Flextight is a great scanner for medium-format film, but it is not the most versatile (or, in its most affordable forms, the best supported) dedicated film scanner. And if you have an LS-8000 or -9000, a Flextight will not necessarily rock your world.
The Polaroid Sprintscan 120 (and its clone the Microtek Artixscan 120TF) presents an opportunity and two challenges. The opportunity is a 6×24 scanning aperture in even the standard medium format carrier. This allows you to scan really long negative strips or really long negatives. The challenges are keeping film flat and scanning a big frame consistently and conveniently. If you have Silverfast 6.5 (or up), the following is a fairly simple way to address this.
First, get a glass carrier. Forget the one that Polaroid and Microtek sold. It has too many dust surfaces and is so much wider than a 120 filmstrip that you will have misaligned strips (yes, it has little template thingies, but they are fairly pointless). It is also exceptionally difficult to clean. Instead, call up Focal Point in Florida (they actually made Polaroid’s original glass carrier) and have them make you a 3mm anti-newton glass that replaces the top plate of the normal 120 carrier.* If you sub a glass for the original top, the film will smash perfectly flat without needing any bottom glass (if you have used Durst enlargers, this is the same trick where you take a glass carrier and swap out the bottom glass for a negative plate). When you scan, you will get the negative plus up to 2mm of black on all four sides (assuming that the cut ends have some margin around the actual image – and your camera is closer to 50mm frame heights than 55mm).
* Note that if this is cut correctly, it will not be able to slide. To remove the glass, you will need to pry it up using one of the cutouts along the side and a wooden barbecue skewer or similar tool. Note also that the “dull” side of the glass goes down on the negative (you can see which side is which by looking at a reflection of a light on the glass).
Second, load up the carrier. Make sure that the negative is positioned in the bottom part of the carrier such that the film “margin” is visible at the leading edge and the top and bottom edges of the carrier (you may not get top or bottom, depending on your camera – you definitely get both with a Noblex). This will assure that you get your black border (so long as you did not cut your negatives into the image). To better align the negative, drop it into the carrier, turn the carrier so the short dimension is up-and-down, and then shake it until the negative strip squares with the channel in the carrier.
Third, boot up Silverfast. I am sorry to say that this does not work with Vuescan due to Vuescan’s apparent constant desire to reset exposure between frames no matter how you set it. Set Silverfast for the following:
- 16-bit scanning
- 6×9 format
- maximize frame size by hitting Control-A (Intel) or Command-A (Mac) (yes, you will drag in ragged black edges and white margin)
- set batch mode
- disable “Auto on ADR”
- set color filter to “green” (only if you are scanning b/w)
- set Negafix to Other-Other-Standard-Zero.
Fourth, do a prescan of both frames in the filmstrip function, select both frames, and do a prescan of the first frame. Now hit the “auto exposure” button (it looks like a camera shutter). This will set the exposure so that you get a true black and a true white. You only need to do this exposure correction once per roll of film (or even for numerous rolls shot and developed roughly equally). You do not need to do the filmstrip part again, ever.
Finally, do a 1-2 batch scan.
At this point, you are covered for future editing and can use Photomerge in Photoshop to merge your work (you will probably have to downsample to 2000 dpi to get any kind of speed on most machines). For advanced productivity, pull the scans into Lightroom, enlarge the slide viewer mode until only two frames fit, and then you can see the pieces side by side (note: for the “flipped” scans, be sure you are looking at the correct halves; exposure bracketing can lead to confusing results at first).
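The downsampling advice is mostly about file size. A rough sketch of the arithmetic, assuming a 56×112mm usable 6×12 frame and 16-bit RGB scans (both assumptions):

```python
# Approximate scan dimensions and uncompressed size for a 6x12 frame.
MM_PER_INCH = 25.4

def scan_size(width_mm: float, height_mm: float, dpi: int):
    """Pixel dimensions of a scan at a given resolution."""
    return (round(width_mm / MM_PER_INCH * dpi),
            round(height_mm / MM_PER_INCH * dpi))

for dpi in (4000, 2000):
    w, h = scan_size(112, 56, dpi)
    megabytes = w * h * 6 / 1e6      # 16-bit RGB = 6 bytes/pixel
    print(f"{dpi} dpi: {w} x {h} px, ~{megabytes:.0f} MB uncompressed")
```

A full 6×12 scan at 4000dpi is pushing a gigabyte uncompressed, which is why Photomerge crawls; at 2000dpi the files are roughly a quarter the size.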
The easiest way to correct the exposure is with the white/black eyedroppers in Photoshop’s Levels function, though Lightroom’s tools are very easy.
If you have to dust spot, use the Edit in Photoshop function in Lightroom and “edit original.” This gives you access to the healing brush and spot healing brush, neither of which is available in Lightroom.
Why does this work?
First, you have eliminated 2 annoying dust surfaces on a bottom glass and created a carrier that will keep things flat and in place. Periodically clean the AN glass. I recommend an Ilford Antistaticum. If you fingerprint the glass, I would recommend using Stoner Chemical’s aerosol Invisible Glass, which is available at auto parts stores (it also works fantastically on household glass). You can wipe it off with a paper towel, and then follow up with your Antistaticum.
Second, using the green channel for b/w means that only one CCD line will be used, making the scan go 3x faster and eliminating the “focus banding” you sometimes see with the Sprintscan and other scanners (this is related in part to “stitching” that occurs when the scanner goes back and forth to position each of the R, G and B lines on the same position). In addition, green should be the sharpest color for non-APO optics used in scanners. If scanners are in fact apochromatic, it doesn’t matter which color you use.
Third, you have set up an exposure regime that will maximize your ability to merge the two halves (or really, left 3/4 and right 1/4) of the 6×12 image. You can do most of the other steps with Vuescan, but you really have to go to heroics to lock it down.
Finally, you have used overlapping frames to get around the limitations of using the Polaroid with a Silverfast program that does not respect the hardware framing and is not easily adjusted.
Hopefully, that helps.
Disclaimer: all of this is at your own risk.