Konica 35/2.0 L/UC Hexanons

[Product photo: UC-Hexanon 35mm f/2, serial #0000]
The above is #0000 of the UC; Fujisawa-Shoukai (which commissioned the lenses) gave me explicit permission, back in 2001, to use this product picture for non-commercial purposes. This isn’t a commercial site, and F-S is gone from this earth, so here we are!

This article was originally written in 2001 and has since picked up a lot of updates.

How did these things get started?

The former Fujisawa-Shoukai had quite a bit of pull over Konica. Recall that by 1992, Konica had made what was seen as its last serious film camera, the Hexar AF, with its legendary 35mm f/2 lens. F-S, as we will call it here, commissioned in 1996 a run of Hexar lenses in Leica thread mount (LTM). This was long before what people in the U.S. called a “rangefinder renaissance;” in fact, at the time very little in LTM was being produced in Japan, with the exception of the Avenon/Kobalux 21mm and 28mm lenses.

The first product of this program was the 35mm f/2L Hexanon, which looked like this:

[Photo: the chrome 35mm f/2L Hexanon]

This lens is simply a clone of the Hexar AF lens, right down to having the same filter size. The coatings look identical, which is not a surprise. Consistent with some other contemporaneous LTM products, it did not have a focusing tab. On close inspection, the scalloped focusing ring looks like that on a Canon 35mm f/2 rangefinder lens or, more contemporaneously, the 21mm Avenon/Kobalux lens. The chrome finishing on an alloy body is reminiscent of modern-day ZM lenses. None of this, of course, will disabuse you of the notion that the Japanese lens production industry revolves around common suppliers. This lens shipped with a black flared lens hood (no vents) and a bright sandblasted chrome “Hexanon” lens cap that fit over the hood.

F-S would then go on to commission the 50/2.4L (collapsible) and 60/1.2L Hexanon lenses. The latter is famously expensive now; I have an email from F-S quoting it at 178,000 yen (about $1,400). The 50/2.4 will get its own article here.

In 2000, around the time that Avenon was re-releasing its 21mm and 28mm lenses as “millennium” models, F-S had another run of the 35/2 made. These were at least superficially different from the silver ones:

  • At the time, black paint was all the rage, so the lens was executed in gloss black enamel and brass. The enamel in the engravings is almost exactly the Leica color scheme.
  • The filter size decreased to 43mm, the aperture ring moved back, and the focusing ring thinned out to give the impression of “compactness” and to justify the “ultra compact” (UC) designation that was historic to some Konica SLR lenses.
  • The focusing mechanism changed to a tab (which helped justify the thinner focusing ring and lighter action).
  • The coatings changed to a purplish red to help support the notion of “ultra-coating.” As you might know, multicoating can be customized for color.

The close-focus distance (what would be the third leg of a UC designation), the focusing rate of the helicoid (0.9m to ∞ in about 1/4 turn), and the overall length did not change. The new lens was priced at 144,000 yen, which in dollars would have put it at just under the cost of a clean used 35/2 Summicron v4 (at the time, these ran from about $700-1,200) and about half of what a Leica 35mm Summicron-M ASPH would cost.

Handling versus Leica lenses

Since both of these are optically identical, it might make more sense to discuss the ways in which these are similar to, or different from, the vaunted Summicron v4 King of Bokeh License to Print Money®. They are both like the Leica version but in different ways.

The UC has the same tab-based focusing as the Summicron; it is very smooth and fluid. That said, the aperture ring is very “frictiony.”

The original L has a focusing feel a lot like a Canon RF lens, owing to the similar focusing ring, which has more drag and no tab. The aperture ring, however, has the same “ball-bearing-detent” feel as the Leica.

The overall length of all three lenses is similar, though as noted above, there is something of an illusion that the Leica and UC are smaller than the L.

Optics

The Konica lens, like the Hexar lens it was based on, is a clone of the 3.5cm f/1.8 Nikkor rangefinder lens, but for all practical purposes, the Hexanon is the same lens as the Summicron v4. As you can see, there is a very smooth falloff from center to edge wide open and pretty much eye-burning sharpness at f/5.6.

Whoah. That looks familiar! Below is the Leica 35/2 v4 as shown in Puts, Leica M-Lenses, their soul and secrets (official Leica publication). Except the Summicron’s optimum aperture is a stop slower.

On interchangeable-lens bodies, all three lenses have the same focus shift behavior, requiring a slight intentional back-focus at f/2 and front focus up to f/5.6. It’s not like on a 50 Sonnar, but it’s there.

Should I?

The original chrome version is a lovely lens and a nice match for chrome Leicas, at about 1/3 the price of a chrome Summicron v4 (yes, they exist…). If you like Canon lenses, you’ll be right at home with it. On the other hand, the UC version is smooth and sexy but getting to be as expensive as a 35/2 Summicron ASPH, which is actually a better lens.


Leica Monochrom Typ 246 x PC-Nikkor 28mm f/3.5

People understand why tilt lenses exist – making super-expensive Canon DSLRs produce pictures that look like they were taken with a toy camera (or making the subjects themselves look like toys). No one knows, though, why shift lenses were once a thing. It’s all a matter of perspective.

The truth, from a certain point of view

Photography always has presented (and always will present) this problem: needing to fit a large object into a frame that is constrained by lens focal length. Conceivably, with a superwide lens you could, but then you end up with a lot of extra dead space in the frame, which defeats the purpose of using large film or sensors.

Solution?

If you want to get the whole thing in frame with the minimum number of steps or expenditure of time and money, your choices are to use a really wide-angle lens, to tilt a camera with a more moderate wide-angle up, or to learn to fly. All of these are sub-optimal. First, the really wide-angle lens is great in that you can capture the top of the object without tilting the camera. The problem is that making an engaging photo with a wide-angle is actually extremely difficult – because it tends to shrink everything. Depending on where the sun is, it also stands a better chance of capturing the photographer’s shadow. Second, tilting a camera with a more moderate wide-angle lens up turns rectangular buildings into trapezoids, which works for some pictures but definitely not others. Finally, learning to fly is difficult. But watch enough Pink Floyd concert films, toke up with the ghost of Tom Petty, or study Keith Moon’s hotel swims, and you might.

Do you skew too?

Assuming you are reasonably competent, you can correct perspective using software, by skewing the canvas. This is a take on the old practice of tilting the paper easel under an enlarger. This was a limited-use technique, generally practiced by people who could not use view cameras and tripods but still had to come up with a presentable representation of a tall object. There were (and substantially still are) three issues here: crop, depth of focus, and disproportion. First, the crop came from the fact that tilting an easel meant that the projected image was trapezoidal and not rectangular, meaning that from the get-go, it had to be enlarged until the paper was filled. This still happens with digital. Second, the depth-of-focus issue is related to the fact that enlarging lenses are designed to project to a surface that is a uniform distance from the enlarger (i.e., projecting one flat field onto another). You would have to stop down the lens severely or use a longer focal length, which in turn required a taller enlarger column to maintain the same magnification.

The digital version of perspective correction uses computation to project the flat image onto a skewed plane, using interpolation and unsharp masking. This solves the apparent sharpness issue, but it degrades quality. Finally, disproportion comes from the fact that straightening converging verticals starts from a place where certain details are already compressed by the original perspective. For example, looking up at a tall building from a short distance, the windows look shorter (top to bottom) than they would if you were looking straight at a window from its own level.
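Just to make the mechanics concrete, here is a minimal sketch of that kind of re-skew using OpenCV (not whatever DxO or Adobe actually run under the hood); the file name and corner coordinates are hypothetical stand-ins for points you would pick off the converging verticals:

import cv2
import numpy as np

img = cv2.imread("building.jpg")          # hypothetical source frame
h, w = img.shape[:2]

# Four corners of the tilted-up facade as they appear in the frame (a trapezoid)...
src = np.float32([[820, 310], [2580, 295], [3010, 2210], [400, 2230]])
# ...and the rectangle we want them pulled out to.
dst = np.float32([[400, 300], [3010, 300], [3010, 2230], [400, 2230]])

# The projective transform "re-skews" the image plane; the pixels get
# interpolated along the way, which is where the sharpness loss comes from.
M = cv2.getPerspectiveTransform(src, dst)
corrected = cv2.warpPerspective(img, M, (w, h), flags=cv2.INTER_CUBIC)

cv2.imwrite("building_corrected.jpg", corrected)

Note that the top of the frame gets stretched the most, which is exactly where the interpolation losses and the “fat building” disproportion come from.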

So even when you manage to re-skew the canvas/field/whatever, you now have an image that is too “fat.” On enlarging paper, you would be forced to make a cylindrical correction to the negative (which is not practical in real life). On digital, there are specific transformations that you can perform to correct for this (for example, the adjustable ratios in DxO Perspective and Lightroom).

So skewing is a useful technique, but it’s still better to skew less.

Shifting your thinking: the mirror years

View cameras have used the concept of shift and tilt to adjust for situations where the viewpoint was wrong (shift) or depth of field was insufficient (tilt). Raising the front standard of a bellows-type plate camera was always standard practice to improve photographs of tall objects, especially in an era where wideangle lenses were not super-wide by today’s standards. Lens board movements were easy to achieve because there was always some distance between the lens mount and film plane in which to insert a mechanism to raise the lens relative to the film. And because there is no control linkage between the lens/shutter and the rest of the camera, you’re not losing automation. You never had any!

But these cameras were not small. The smallest bellows-type camera with lens movement features was the Graflex Century Graphic, a delightful 6×9 press-style camera. On many bellows-type cameras, though, there was no real provision for using a shifting viewfinder. The press-style cameras had wire-frame finders that provided a rough guide, but nothing outside a gridded ground glass could tell you whether the lens was actually level. Later in the game, the Silvestri H arrived as the first camera with automatic finder shift, as well as a visible bubble level. Linhof used a permanently-shifted lens assembly (and viewfinder) on the Technorama PC series, and Horseman provided shifted viewfinder masks for the SW612P, though these were available only as “all the way up/down” or “all the way left/right.”

The shift mechanism, though, could not be adapted to SLRs easily due to three constraints:

  • Most SLR lenses are retrofocal – meaning that the nodal point of the lens is more than the stated focal length from the imaging plane. It takes a ton of retrofocus to insert a shift mechanism into an interchangeable lens that has to focus past a mirror box. More retrofocus means bigger lenses. So when perspective control lenses began to appear for SLRs (35mm and 6×6), they were huge. Maybe not huge by today’s standards, but a 72mm filter size is pretty big for a Nikon SLR whose normal filter size is 52mm.
  • To achieve an image circle large enough to allow shift around what is normally a 24x36mm frame, it is necessary to use a wide-field lens and stop it down severely (illumination with almost any lens becomes more uniform as it is stopped down).
  • Most cameras can only meter PC lenses correctly in their center position, wide open. Where shift mechanisms eliminate direct aperture linkages to the camera, you’re back to the 1950s: metering and focusing, then shifting, then manually stopping down to shoot (now corrected by the use of electronic aperture units in $2K-plus modern Nikon and Canon PC lenses).

Viewing is not a lot of fun with 35mm SLRs; when stopped down, PC lenses black out focusing aids (like split prisms and microprisms) and still require careful framing to keep parallel lines parallel. So you need a bright screen – plus a grid or electronic level. Suffice it to say, a lot of people regard perspective control to be a deliberative, on-tripod exercise when it comes to SLRs and DSLRs. Maybe it’s not.

A new perspective: full frame mirrorless?

So here come mirrorless cameras (well, they came a while ago). Now you can fit any lens ever made to any mirrorless body. The optical results may vary, but at least physically, they fit.

— Getting the lens in place

So I grabbed the nearest available PC lens I could find, which was a 28/3.5 PC-Nikkor. Not AI, not even from this century. Released in 1980, it is a beast. I plugged this into a Nikon lens to Konica AR body adapter, and from there into an Imagist Konica lens to Leica body adapter. Why all these kludgy adapters? The answer is actually pretty simple: the Imagist has the correct tolerance to make infinity infinity, and the Konica adapter does the same. This is not a small consideration when you might be zone focusing a lens.

Then I plugged this kludgefest into a Leica M Typ 246 (the Monochrom). Because why not start with the OG of mirrorless camera platforms? Of course, you can’t use a rangefinder with a Nikon SLR lens, so I plugged in an Olympus EVF-2 (which is the “generic” version of the Leica EVF-2).

— Getting it to work

The Nikkor has two aperture rings. One is the preset ring, where you set your target aperture. The other is the open/close ring, which goes from wide open to wherever the preset ring is set.

I turned on focus peaking and set the preset for f/22 and the open/close for f/3.5. I was able to establish that infinity was correct.

Next, I stopped down the lens (both rings to f/22), expecting that, just as on an SLR, the view would go dark. Instead, the EVF compensated, and everything worked perfectly.

I hit the “info” button to get the digital level, and it was off to the races. The lens has a rotation and a shift.

— But how well does it actually work?

The functionality is actually surprisingly good. On a Leica, it’s just a matter of sticking the camera in A, stopping the lens down to f/16 or f/22, and pointing and shooting.

The digital level obviates the need for a tripod or a grid focusing screen; you really just frame, turn the shift knob until the perspective looks right, and there you go. There are a couple of limits.

You can’t use maximum shift along the long side of the film, but the only penalty is a tiny shadow in the corner. And that’s with a full-thickness 72mm B+W contrast filter. You get 11mm of shift up and down (i.e., along the short dimension of the film) and 8mm left and right (nominally; as I said, you can get away with more under some circumstances).
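As a sanity check on those numbers (my own back-of-the-envelope geometry, and it assumes the commonly quoted image circle of roughly 62mm for this lens, so treat it as approximate): the corner of a 24×36mm frame shifted 11mm sits at a radius of

\[
r_{\text{short-side shift}} = \sqrt{(12+11)^2 + 18^2}\ \text{mm} \approx 29.2\ \text{mm},
\qquad
r_{\text{long-side shift}} = \sqrt{12^2 + (18+11)^2}\ \text{mm} \approx 31.4\ \text{mm}.
\]

The first figure fits inside a ~31mm-radius circle with a little room to spare; the second just pokes past it, which lines up with that faint corner shadow.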

Aside from that, there are some minor annoyances like making sure you haven’t knocked the aperture ring off the shooting aperture. Or knocking the focus out of position (it’s a very short throw…).

BUT THE DUST! And here is the rub – shooting at f/16 and f/22 brings out every dust spot on your sensor. Normally, you would shoot a Leica M at f/5.6, f/8 max. But PC lenses – like their medium and large format cousins – are designed to max out their frame coverage at very small openings. I had not cleaned the sensor on my M246 in four years, so I got to spend an evening on a hateful task that involved swabs and ethanol and bulbs and the Ricoh orange lollipop sensor cleaner.

— And how sharp?

Very. Diffraction is supposed to start becoming visible at f/11 on this combination at 1:1, with it showing up in prints at f/22.
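For the curious, that tracks with standard Airy-disk arithmetic (a rule-of-thumb estimate, not a measurement of this particular lens), taking the M246’s pixel pitch as roughly 6µm:

\[
d_{\text{Airy}} \approx 2.44\,\lambda N = 2.44 \times 0.55\,\mu\text{m} \times 11 \approx 14.8\,\mu\text{m}\ (\approx 2.5\ \text{pixels at } f/11),
\qquad
2.44 \times 0.55 \times 22 \approx 29.5\,\mu\text{m}\ (\approx 5\ \text{pixels at } f/22).
\]

A blur spot a couple of pixels wide is just visible at 1:1; one five pixels wide starts to show in prints.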

Pictures stand up to the old 1:1 test, except in the corners where you have over-shifted along the long side. Recall that in a lot of situations, those last bits of corner are usually sky, where a tiny amount of blur is not going to be of any moment.

How well this will work on a color-capable camera is a question, especially since lateral color would come out. But right now, this is posing the most acute threat to 6×4.5 cameras loaded with TMY.

Browniegate: In hoc signo vinces

Well, you have that day where you feel like you want to step off the film train. Oddly enough, it was not because some digital sensor came along with massive resolution, or film hit $8 a roll, or the EU outlawed developing chemicals. Or you name the calamity.

Here, it was the product of well-meaning backward-compatibility. I had this thought as I was looking at a roll of TMY shot with a Silvestri H that probably cost $10,000 new. It uses standard-style roll backs made by Mamiya that are bulletproof and have nicely spaced frames. The pictures themselves were sharp, undistorted, and perspective-corrected. But they were ruined for optical printing because backing paper numbers – useful only to people with red-window cameras – transferred onto the emulsion. I felt like Constantine the Great, kinda. I looked in the sky, and the sign of “Kodak 14” was shining down on me. In this sign you will [be] conquere[d].

Browniegate (let’s give it a good name, at least) occurred because Kodak had an issue with backing paper on 120 film (this affected some lots made roughly two to four years ago). Environmental conditions could cause backing-paper frame numbers to transfer onto the emulsion of the film and show up in low-density areas, especially the sky. Lomographers probably loved this. Everyone else, not so much.

Kodak handled this reasonably well (but not optimally),* and it has been very good about replacing defective film. Given that they had few choices for backing paper (1-2 suppliers worldwide) and that they probably couldn’t anticipate the full range of environmental abuse film might experience in storage, I cut them some slack. We all accept that any time we use film, we could end up with no pictures. Grab the fix instead of the developer. Leave a rear lens cap on. We’ve all been there. But the backing paper thing is not within user control. And unlike the bad roll that comes up once every hundred thousand rolls, the frame-number thing hits more often. It’s not like lightning. It’s more like a tornado ripping through farm country.

The what is one thing. But the why is another. Laying aside bad material choices by the backing paper manufacturer, the underlying issue is that frame numbers on paper backing were last needed for serious cameras in the 1950s (the Super Ikonta C may be the last one), and the ruby-window method of seeing what frame you are on persists mainly in (1) Brownie cameras whose design goes back to 1895; (2) Lomography-oriented products; and (3) current large-format roll holders that should know better. There is actually no excuse for this last category, since there is no patent for frame counters that is still valid, and roll backs are only made in LCCs now. It’s the support of these older and cheaper cameras that requires frame numbers past #1 – and in a weird way, the shadow of the 19th century is still causing problems in the 21st.

The bigger question this raises is this: if backward compatibility is a significant part of the business case for 120, does that mean that when the ruby-window market fizzles out, it will take serious medium-format photography with it? Best not to think about that.

*By “not optimally,” I mean it would have been nice to have a new catalogue number for the new backing paper, so that people trying to buy film from B&H for critical use would not get stuck with old product – like I did when I was going to Singapore, bought 20 rolls of TMY in March 2019, got 158xxx TMY, and had backing-number transfers on every roll, with up to 75% of 6×4.5 frames affected on any given roll. Or maybe use a laminated paper with punched-out numbers rather than printed ones.

 

Bokeh gonna burn your eyes

[Comparison image: the same composition shot at f/8 (left) and f/1.5 (right) with a 50mm ZM Sonnar]

Mark my words (as if they are that important): the future will not look kindly on the gimmick-bokeh that dominates the aesthetic of 2000s photography, just as we get a chuckle out of 1970s pictures with excessive sunsets, lens flare, and nipples. People yet to be born will wonder why photographers in the 2000s took insanely expensive lenses, better than any ever designed to date – and cheaper – and then used them to simulate astigmatism, near-sightedness, and macular degeneration. The most charitable explanation will be that photographers were trying to show solidarity with the visually impaired.

The buzzword (today) is subject isolation. But why are we isolating a subject from its context? What’s wrong with the context? Are we creating millions of pictures of the same people’s faces with nothing else in the shot? Are they people or products?

In the present, good composition can still be shot at f/16. Small apertures are also obligatory on larger-format film cameras because a lot of those lenses have serious light and sharpness falloff at the edges at their maximum apertures, especially with the focus at infinity. Nobody buys a $3,000+ 6×12 camera to get the types of pictures you could see from a $250 Lomo Belair.

There is a reason that early autoexposure SLRs used shutter priority: if you had to make a choice for what would be in focus, it would be your subject; if you had light to spare, you’d want to use as small an aperture as your lowest desired shutter speed would support. And that thinking underpins historic picture-making. Intentionally shallow depth of field is not a feature of most of the world’s most iconic images. Arnold Newman did not need shallow depth of field to shoot Stravinsky. Eugene Smith did not shoot Spanish policemen as an exercise in subject isolation. And David Douglas Duncan captured every crease in the face of an exasperated Marine captain. How about Richard Avedon with his Rollei and every celebrity on earth? There are exceptions, but throughout history, wide apertures were primarily driven by a need to keep shutter speeds high enough to avoid blur. Light constraints are not such a consideration when ISO 6400 is a thing on digital cameras.

The worst part about bokeh, and the one no one talks about, is that it can actually be unpleasant by causing eyestrain (or maybe brain-strain). In many ways, a human eye – if you looked at the whole image projected on the retina at once – resembles a cheap Lomo-type lens: sharp in the middle (the fovea) and blurry at the edges. It even has a complete blind spot (the punctum caecum). The eye has a slow aperture, estimated by some to be f/2.8. But, dammit, everything looks like it is in focus. That’s because your eyes are continuously focusing on whatever you are looking at. Your brain is continuously piecing together fragmentary information (the blind spot thing is incredible – vertebrate biology beat Adobe to content-aware fill by about 500 million years). The end result is what looks (perceptually) like a scene where everywhere you look, things are in focus. It’s actually pretty amazing that this works.

In every photo, there is a compression of three dimensions into two. More depth of field allows your eyes to wander and allows you to process the scene fairly normally. When you look at bokehlicious pictures, definition is concentrated on one object (and often just a piece of it). You might find your eyes (or visual perception) constantly trying to focus on other aspects of the scene besides the subject. But neither your eyes nor computational photography can remove extreme artifacts once they are “flattened.”

Scroll back up to the picture at the top. Same composition, shot at f/8 and f/1.5 with a 50mm ZM Sonnar. Look left and look right. On the left, you can look almost anywhere in the scene and see whatever visual element you want to scrutinize, at at least some level of detail. On the right, you are always and forever staring into the Contractor Ring®. You can try to focus on other elements of the picture on the right, but the information simply is not there. Need an aspirin?

And it can be fatiguing, more so now that the aesthetic is played out and anyone with an iPhone X can play the game. Pictures with ultra-shallow DOF don’t look natural. They are great every once in a while, or if you need a 75/1.4 Summilux to get an otherwise-impossible shot, but otherwise, get off your ass and move the camera (or your subject) into a position with a reasonable background.

# # # # #

Punching your way into film identification

So the usual has happened. You have a pile of undeveloped film. Maybe you didn’t note the processing (N, N+1, N+2), or maybe it’s bulk-loaded film that has no label on the cassette (for example, you might find it very easy to confuse Ilford Pan F Plus 50 with Ultrafine Xtreme 400). Or you can’t remember what order you shot the film in. Of course, the difficulty is that unless you somehow identify the film canisters, you’ll mix things up. And even then, once film is out of the canister and developed, there is rarely a persistent indicator of what happened. Data backs for 35mm cameras are something of a pain: they don’t record everything, and almost all of them are going extinct in 2018. Buy a Nikon F6 that records EXIF data? It’s a little late in the game for that.

The solution: the $5 arts & crafts hole punch and a $5 film-leader puller

One perhaps non-obvious solution is to permanently mark the film leader. You obviously can’t do this with a pen because anything you write on the film will wash off in processing.

The most effective way I have found to achieve this is with craft hole punches, which come in various hole sizes (1/16, 1/8, and 1/4″ – 1.5mm, 3mm, or 6mm), as well as a variety of shapes (round, hearts, stars, diamonds). As long as you make the marks on a part of the leader that will not be discarded (so not the long thin tongue part on commercially loaded film), these will survive the development process and won’t go anywhere until you snip them off. The uses are numerous:

— Bulk-loaded film: If you punch the leaders with a distinctive mark, you can avoid mistaking one type of film for another. For example, where it is very easy to confuse bulk-loaded Ultrafine Xtreme 400 and Ilford Pan F Plus, punching the Ultrafine with a heart will help you avoid mixing things up when loading your camera.

— Processing regime: If you are going to push-process film, punching the leader with a mark (such as a star) either before or after exposure will help prevent you from mixing up your N, N+1, and N+2 films. If you need to, you can use a leader-retriever to pull the leader out and mark it after fully rewinding.

— Order the film is shot: If you can’t imprint the first frame of a roll with a data back, you can use a number of punches to signify the order in which the rolls were shot. You can even do this before you shoot the film.

— Camera or lens used: no data back records focal length, and camera bodies of the same make – assuming they even have a film-gate cutout for identification – use the same cutout (for example, Konica bodies usually have a triangle notched into the edge of each frame). A distinctive punch pattern can fill that gap.

# # # # #

 

DxO One: Liberté, egalité, insanité


My first DxO One (version 1, $125 new on clearance) bricked when I upgraded the firmware. Left with an inert toy while Amazon dug up another one to send me, I could not help but play with the dead one. I flew it up to the water/ice dispenser on the refrigerator. “Open the pod bay doors, HAL.” Nothing.  The DxO One rotated 180 degrees so that it could eject the micro SD card into the…

“Dad, what are you doing?”

“Uh… nothing.”

But seriously, the DxO One is one of the strangest and most wonderful cameras to come out of France, or anywhere. Here’s why.

Sensor. The camera uses a 20Mp, 1″ backside-illuminated (BSI) sensor (3x or so crop factor) made by Sony, the same one as in the RX100 III. Two things make this a standout here: first, BSI sensors are quite good – meaning this returns results almost on par with the Sony a6300’s copper-wire conventional sensor. Second, almost all sensors perform equally at base ISO. In the software design, DxO biases the camera toward lower ISOs and wider apertures (which makes sense, since a 1″ sensor starts diffracting at f/5.6).

How does this compare to an iPhone XS sensor? Well, it’s almost 70% more resolution and 6.7 times the surface area (116mm² vs. 17.3mm²). Do the math. All the computation in the Apple world can’t make up for this kind of difference in displacement. This does expose the genius of portrait mode, though – because not even a 1″ sensor is big enough to have easy-to-achieve subject isolation.
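Doing that math with the figures above (the sensor areas are as quoted; the 12MP figure is the iPhone XS’s nominal resolution, and the 1″ format’s 13.2 × 8.8mm is where the 116mm² comes from):

\[
\frac{20\ \text{MP}}{12\ \text{MP}} \approx 1.67 \quad (\text{about } 70\% \text{ more pixels}),
\qquad
\frac{116\ \text{mm}^2}{17.3\ \text{mm}^2} \approx 6.7\times \text{ the area}.
\]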

The sensor is used for contrast-detect AF (with face priority).

Lens. 32mm equivalent, f/1.8-11 aperture, six groups, six elements, with some of the weirdest aspherical shapes imaginable. It’s very tough to find a lens on a compact camera that approximates a 35/1.8. But here you are.

[Sample image shot with the DxO One]

Far from being telecentric or using the expected “folded optics” path, the DxO One uses the cellphone method, with almost zero distance between the rearmost element and the sensor. The rearmost element looks like a brassiere. Like this:

[Diagram: the DxO One lens design, with the rear element sitting almost on top of the sensor]

The lens is happiest at larger apertures (f/2-f/4).

Storage. The DxO One accepts standard microSD cards. I was able to test cards up to 128GB (Samsung EVO Plus), and it is able to read and write to them with no issues.

Power. Power comes from an internal battery but can also be fed directly from a micro USB cable. The battery takes about two hours to charge and is good for about 200 shots. Version 2 of the camera has a removable back door to accommodate an external battery pack that DxO no longer sells. You also lose the free software (see below).

Viewfinder. Your choice of two. You can plug the camera into your iPhone, where you can use the DxO One application and the phone screen as a viewfinder. Alternatively, version 3.3 of the camera firmware turns the little OLED screen on the back into a square contour viewfinder, good enough at least to frame the middle square of the picture – and surprisingly good at estimating a level angle for the camera. You could also split the difference with a Lightning extension cord.

Connectivity. The camera was originally designed to connect via the Lightning port, but DxO later enabled the onboard WiFi so that you can now use the application on the phone and control the camera (including view-finding) without a physical connection. The DxO One can also connect to your phone via your home wireless network. WiFi operation – no matter what the camera or phone – is never as much fun as it first sounds, which is why the DxO product, with its wired Lightning option, is more flexible than Sony’s wireless-only solutions.

[Sample image shot with the DxO One]

Software. In terms of the camera’s software, all the magic is under the hood. The camera switches on by sliding open the front cover (slide it all the way, and the Lightning connector will erect itself). There is a two-stage shutter button on top, and you can swipe up and down on the OLED to switch between controls and viewfinder, and left and right to toggle photo and video. The camera stays on the exposure mode last selected from the DxO software on the iPhone.

The DxO One phone app is well-done and responsive. You can use it to frame, shoot the picture, and control what you want. Features include:

  • JPG, Raw, and Super Raw (stacked) exposure modes.
  • Single-shot, timer, and time-lapse settings
  • Flash settings
  • Subject modes and the usual PSAM modes.
  • Program shift (between equivalent exposures with different shutter speeds or apertures).
  • Single AF, Continuous AF, On-Demand AF, and Manual focus (manual includes an automatic hyperfocal calculation if desired).
  • Matrix, centerweighted, or spot metering.
  • Grid compositional overlay.
  • “Lighting,” which is like a mini HDR compressor for JPGs.

You can also look through the exposures on the camera/card and move them to your phone as desired. As noted above, though, you do need to initiate wireless connections with the camera connected.

If you get a version 1 camera, new, it also comes with DxO Optics Pro 10 Elite (now Photo Lab 1 Elite) and DxO Filmpack Elite. But you have to be able to document that you are the original owner of the camera. Both of these can run as standalones or can be external editors for Lightroom. Photo Lab 1 is also capable of replacing Lightroom.

If you get version 2, you’re out of luck. But you do get a 4GB card and the detachable back door for that battery pack.

And either way, you do get DxO OpticsPro 10 for DxO One, which gives you a nice imaging/digital asset manager that can composite SuperRaw files. SuperRaw is a stack of four successive (and extremely rapid) exposures that cancel out high ISO noise.
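DxO has never published exactly how SuperRaw is cooked, but the statistical idea behind any stack of this kind is easy to demonstrate: averaging N aligned frames keeps the signal and shrinks random noise by roughly √N. Here is a toy illustration (pure numpy, synthetic data, nothing DxO-specific – the real pipeline also has to align and demosaic the raw frames):

import numpy as np

rng = np.random.default_rng(0)
scene = np.full((1000, 1000), 100.0)               # a flat mid-gray "scene"

def noisy_frame():
    # one simulated high-ISO exposure: scene plus random noise (sigma = 10)
    return scene + rng.normal(0.0, 10.0, scene.shape)

single = noisy_frame()
stack = np.mean([noisy_frame() for _ in range(4)], axis=0)

print(f"single frame noise: {single.std():.1f}")   # about 10
print(f"4-frame stack noise: {stack.std():.1f}")   # about 5, i.e., half

Halving the noise is, at least in shot-noise terms, roughly a couple of stops’ worth of ISO headroom.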

And if you don’t like any of that, the DxO One outputs normal DNG files that you can simply edit to taste in Lightroom. There is a Lightroom profile for the camera’s minimal residual distortion.

Ergonomics. This is the one place where things are sketchy. It’s hard to hold onto a small ovoid object, especially one with a button on the top. I would highly recommend a wrist strap.

Upshot. Maybe not the most compelling camera at $700 plus when it came out, but now that it is a sixth of that and still a lot of fun to shoot, go for it!

[Sample image shot with the DxO One]

 

Digital photography – really photography?

Can you believe that Pullman is used for “bus” in parts of Europe? Jeez, I thought that a pullman was inherently a rail vehicle. How dare usages change! Somebody get on the Rail Transport User’s Group (RTUG) and post a philosophy question. We need to take the name Pullman back!

But really, how many hours of the waning days of old men’s lives have been wasted arguing about whether newfangled cameras grabbing electrons can be “photography” as an art or a craft? How many should have been? Would that time be better spent arguing about cars, fishing, guns, boats, or wristwatches?

You can spin off into the etymological argument: electrons aren’t photo + graphy because the light is not making the image directly. Or there is transformation. Or something. Reliance on ancient Greek is misguided. Photography was a neologism invented in the 19th century. It was not true to the ancient Greek then (no thing was – or is – drawing or scratching in the sense of γραφή); the 19th-century term was just an arbitrary description for what happens when light is the prime mover in the imaging process. And we have legions of words whose meanings have deviated far from what they would have meant to Greeks or Romans – or even from what they meant the first time the terms were coined. Hence the weird crossover between autobuses and rail cars.

Is photography art? If you believe that, look at what the art world says. It’s all photography. That’s what museums call anything that is an image captured by a machine (film or digital) where the substantive content originates in the original image recording process. The only distinction made (and only sometimes) is for pre-silver-halide work, and even then only for the more obviously exotic processes (daguerreotypes, cyanotypes, and other things that deviate from the look of optical or inkjet paper). Odd that they don’t care what system originated the image; they only care about the medium in which it is ultimately expressed. Just like other things you see on the walls are “oil,” “watercolor,” “pastel,” “drawing,” etc. A “dye diffusion print” does not differentiate between originating on negatives or a Handycam.

Or maybe it’s not odd. Art requires a visible or tangible expression, and in the end, that is all that counts.