Lytro Illum vs Light L16: Computationism

Computational photography

The term “computational photography” sounds fancy, but it is a fairly tame concept: using a computer to substitute for something that is missing from a conventional optical treatment. Little bits and pieces of computational photography exist in the panoramic feature of your digital camera, any type of in-camera HDR, in-camera tilt-shift (“toy” or “perspective correction” features), and even the Portrait mode on your iPhone. More extensive computational photography is elusive; only two cameras ever really did it. One, the Light L16, was the subject of a previous article. The other is the Lytro. Neither is still made.

The Lytro Illum (“Il-lume”)

With all the hate going around these days, we will nonetheless focus on something polarizing and judgmental: the light field camera – here, the Lytro Illum (pronounced “illume,” not “Ilium” like in a Kurt Vonnegut novel). I’ve now read almost every review of the Lytro products, and people universally get super judgy about this thing. To be fair, it is a weird device – and the tech behind it is probably unfathomable to most people who use it. It feels more like strange alien technology.

If you had to imagine the form factor, imagine a Sony a6000 that grew up to be really, really big. To put the Illum in relatable terms, the lens is as big as a 500mm mirror lens and about as light, and it feels like someone has attached an iPhone X to the back. Maybe it’s a Hasselblad X1D lookalike.

The Illum has some shortcomings and unforced errors in handling that no doubt affected its popularity:

  • The lack of an EVF can make it challenging to use in the type of light at which the sensor excels – bright. An eye-level finder would have been far better.
  • The flash shoe is set forward to maintain the center of gravity when shooting with the extremely rare dedicated flash for this unit. Unfortunately, it does not have the position or spring-loaded security that would make it easy to use with an optical accessory viewfinder.
  • The “focus” and zoom rings operate by wire and have the lackluster, disconnected feel of the hydraulic power steering on a 1981 Cadillac Brougham.
  • The battery is a complete oddball trapezoid. Because f*ck you. We’re Tech Bros.®

But when actually using the camera, you realize why it probably evoked a visceral negative reaction from reviewers: in many cases, you feel like Luke Skywalker shooting with the blast visor down. You might shoot without really looking, and fixating on “AF” might slow you down – or actually degrade the results. This is largely a shoot-first, finalize-later camera, which is something that makes film people go crazy. It made a lot of digital people go crazy too, because although the camera could show you previews, the big stuff happens in desktop software.

The limits of plenoptia. The Illum is a plenoptic camera – meaning that it uses an array of microlenses to capture both luminance and directional information about light. Cameras like this capture a phenomenal amount of data and can basically reconstruct a point of view from it.

The shortcoming of plenoptic cameras is directly related to the advantage. For a sensor of a given size, you lose quite a bit of spatial resolution (megapixels) by recording all of that directional information. That means the Illum starts with a 1″ sensor with a big pixel count (40MP, which Lytro counted as “megarays”) but computes it down to 2540×1634 (about 4MP) at any given focus distance. This was seen as absurdly low resolution when the camera came out, but given what most people do with photography today, it’s in the ballpark (and Adobe Super Resolution can increase the apparent resolution).
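
To make the tradeoff concrete, here is a back-of-the-envelope sketch; the ray-to-pixel ratio is inferred from the Illum’s published numbers rather than anything official:

```python
# Back-of-the-envelope: spatial resolution traded for angular (directional)
# sampling. Assumes each output pixel is synthesized from roughly the same
# number of sensor "rays" -- an inference from the Illum's specs.

def plenoptic_output_mp(sensor_megarays: float, rays_per_pixel: float) -> float:
    """Approximate 2D output resolution, in megapixels."""
    return sensor_megarays / rays_per_pixel

illum_rays_per_pixel = 40e6 / (2540 * 1634)  # ~9.6 rays per output pixel
print(f"{illum_rays_per_pixel:.1f} rays/pixel")
print(f"{plenoptic_output_mp(40, illum_rays_per_pixel):.1f} MP output")  # ~4.2 MP
```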

Below you can see how the system can change the focus, the depth of field, and even to a degree, the perspective.

Enhance 224 to 176. Enhance, stop. Move in, stop. Pull out, track right, stop. Center in, pull back. Stop. Track 45 right. Stop. Center and stop. Enhance 34 to 36. Pan right and pull back. Stop. Enhance 34 to 46. Pull back. Wait a minute, go right, stop. Enhance 57 to 19. Track 45 left. Stop. Enhance 15 to 23. Give me a hard copy right there.

Now for some real fun: the Lytro desktop software can output left-right stereo cards or red-cyan anaglyphs from a single exposure.
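
For the curious, a red-cyan anaglyph from a left/right pair is simple channel surgery. A minimal sketch with NumPy and Pillow (the file names are placeholders; the Lytro software does this internally):

```python
# Minimal red-cyan anaglyph from a stereo pair (illustrative; not Lytro's code).
# The red channel comes from the left view; green and blue from the right.
import numpy as np
from PIL import Image

left = np.asarray(Image.open("left.jpg").convert("RGB"))    # placeholder paths
right = np.asarray(Image.open("right.jpg").convert("RGB"))

anaglyph = right.copy()
anaglyph[..., 0] = left[..., 0]  # swap in the left view's red channel
Image.fromarray(anaglyph).save("anaglyph.jpg")
```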

Optics. The Illum lens is interesting – it’s a 30-250mm equivalent with power zoom and a fixed f/2 aperture. Contrary to what you might believe, it does not do all focusing with computational power. The lens essentially operates in four ways:

  • No focusing – works better at middle to long distances. Focus later.
  • Autofocus or manual focus – lock in at a certain point, which improves optical quality around that point but still allows plenty of margin.
  • Hyperfocal focusing – you can enable this in the menu, and the lens will focus to the point that maximizes the refocusable depth of field (a worked example follows this list).
  • Infinity focusing – when you need to make sure distant subjects are perfectly sharp.
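
As a refresher on what hyperfocal focusing buys you, here is a minimal sketch of the standard hyperfocal formula; the focal length and circle-of-confusion values are illustrative assumptions, not published Illum specs:

```python
# Standard hyperfocal formula: H = f^2 / (N * c) + f.
# Focused at H, everything from H/2 to infinity is acceptably sharp.
# The example values below are assumptions, not published Illum specs.

def hyperfocal_mm(focal_mm: float, f_number: float, coc_mm: float) -> float:
    """Hyperfocal distance in millimeters."""
    return focal_mm**2 / (f_number * coc_mm) + focal_mm

# e.g., ~9.5 mm actual focal length at the wide end, the fixed f/2,
# and an assumed 0.011 mm circle of confusion for a 1-inch-class sensor:
print(f"{hyperfocal_mm(9.5, 2.0, 0.011) / 1000:.1f} m")  # ~4.1 m
```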

The good and the bad. The Illum has some very positive features:

  • Sharp pictures of things at reasonable distances
  • Focusing pretty much as close as the surface of the lens
  • Arbitrary focusing after the fact (and enough onboard imaging horsepower to preview it on the camera)
  • A really intuitive touchscreen system – to review past pictures, you just swipe right
  • Plenty of depth mapping information
  • Decent ergonomics
  • Good battery life and the ability to charge over USB 3.0 (the wide, flat Micro-B connector)
  • Reasonable hard buttons for AE, AF, and hyperfocal
  • Control wheels front and back
  • A 72mm filter thread
  • Light weight
  • The flash system actually works. The dedicated unit is a Viltrox JY68DL; it is nominally TTL, though exactly how it meters exposure is unclear – it appears to be a scene-averaging system.

The software, for its part, has a lot of interesting capabilities besides the usual Lightroom develop-style basics. These include:

  • The ability to simulate apertures from f/1 (shallower DOF than the lens) to f/22 (see the sketch after this list)
  • Moving the focus point
  • Applying “tilt” (miniature effects)
  • Generating lenticular files, stereo pairs, and red-cyan anaglyphs from single frames
  • Exporting depth maps
  • Changing the perspective of pictures after the fact
  • Generating animations
  • Compatibility at least up to macOS 10.14 and Windows 11 (yes, the software runs in a VM, too)
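
To give a flavor of how depth-map-driven refocusing works in principle, here is a crude sketch (a generic toy, not Lytro’s actual pipeline; file names and values are placeholders) that blurs each pixel in proportion to its distance from a chosen focal plane:

```python
# Toy depth-driven synthetic defocus -- the general idea behind refocusing
# with a depth map, not Lytro's actual algorithm.
import numpy as np
from PIL import Image, ImageFilter

image = Image.open("photo.jpg").convert("RGB")                    # placeholder paths
depth = np.asarray(Image.open("depth.png").convert("L")) / 255.0  # 0=near, 1=far

focus_depth = 0.3  # chosen focal plane (placeholder value)
max_blur = 8       # blur radius at maximum defocus, in pixels

# Precompute blurred versions, then pick a blur level per pixel.
levels = [np.asarray(image.filter(ImageFilter.GaussianBlur(r)))
          for r in range(max_blur + 1)]
radius = np.clip(np.abs(depth - focus_depth) * 2 * max_blur, 0, max_blur).astype(int)

out = np.zeros_like(levels[0])
for r in range(max_blur + 1):
    out[radius == r] = levels[r][radius == r]
Image.fromarray(out).save("refocused.jpg")
```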

And the not super good:

  • Due to the very short effective base length, you will really want something within a meter or two of the camera if you are shooting for 3D output. Generally, with 3D, your base length should bear a relationship to the distance of the closest subject in frame (a common rule of thumb is sketched after this list).
  • It’s difficult to tell whether this camera has a low-pass filter, or whether the algorithms break down at long distance, but distant details can be underwhelming. This is also affected by focus. But the point of a camera with adjustable focus is not really to shoot things where the major interest is at infinity.
  • 2D resolution is not huge. This is not a big deal with Adobe magic, provided that your key details are not too small (e.g., limestone texture on a distant building). For anaglyph 3D, absolute resolution in the file is a lot less critical, and resolution is also not a big deal for most internet-related uses.
  • The camera’s zoom and focus rings’ functions can be switched, but their direction cannot be reversed.
  • Not easy to use the massive 4″ LCD in bright sun. Bring your Zacuto finder.
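
For reference, a common stereographers’ rule of thumb (the “1/30 rule,” a convention rather than anything Lytro published) ties the stereo base to the nearest-subject distance, and it illustrates why a tiny base confines you to close subjects:

```python
# The "1/30 rule": stereo base ~= nearest-subject distance / 30.
# A stereography convention, not a Lytro spec.

def max_near_distance_m(base_mm: float) -> float:
    """Farthest 'nearest subject' distance (m) a given stereo base supports."""
    return base_mm * 30 / 1000

# With an effective base of at most the entrance pupil diameter
# (assume ~40 mm here), comfortable 3D needs the nearest subject within:
print(f"{max_near_distance_m(40):.1f} m")  # ~1.2 m
```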

Conclusion: why did it fail?

I would joke that like Tinkerbell, it needed people to believe in it. And they ultimately did: computational photography (in a relatively “lite” form) lives on in cell phones (Portrait on an iPhone, for example, is a limited version of the Lytro focus manipulation that makes a depth map from two very conventional Bayer-array cameras).
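
Incidentally, the two-camera depth map those phones build is conceptually simple block matching. A toy sketch with OpenCV (illustrative only; certainly not Apple’s or Lytro’s pipeline, and the file names are placeholders):

```python
# Toy depth-from-stereo: the general idea behind dual-camera Portrait modes.
import cv2

left = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)    # placeholder paths
right = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# Block matching: larger disparity between views = closer subject.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right)

depth8 = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("depth.png", depth8)
```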

In a way, it did succeed. A 4MP final image is sufficient for almost any modern (read: screen-based) use, but to get there, you sacrifice dozens of megapixels of information that serve only to record light direction before you output a file.

This concept would have revolutionized photography but for one thing: it was never going to be scalable. Sensors in the 1″ form factor maxed out at 40 megapixels ten years ago, so getting higher 2D resolution would have required (and still requires) a base sensor that does not exist yet. And although there are 100MP+ sensors now, that does not translate into much more resolution post-computation – and those sensors would never fit in a camera this small (or large…). It also seems that you need a super-telecentric wide-angle lens for this type of photography, and since you are constrained to a single physical aperture, you want it to be big. If this were built around a 36×24 sensor, the lens would probably be the size of an M1 tank barrel.
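
A quick sanity check of that scaling claim, reusing the Illum’s apparent ray-to-pixel ratio (again, an inference from its specs, not a published figure):

```python
# If the Illum's ~10:1 ray-to-output-pixel ratio held for a bigger sensor
# (an assumption, not a spec), even 100 MP of rays yields only ~10 MP of 2D.
rays_per_pixel = 40e6 / (2540 * 1634)            # ~9.6 on the Illum
print(f"{100e6 / rays_per_pixel / 1e6:.1f} MP")  # ~10.4 MP
```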
