Photos: Eric Cheng/Lytro

Single Snapshot: With light-field photography, an image can be focused after it is taken, as shown here with one of Lytro’s “living pictures.”

Leonardo da Vinci sketched out tanks, helicopters, and mechanical calculators centuries before the first examples were built. Now another of his flights of imagination has finally been realized—an imaging device capable of capturing every optical aspect of the scene before it.


Lytro, a Silicon Valley start‑up, has just launched the world’s first consumer light-field camera, which shoots pictures that can be focused long after they’re captured, either on the camera itself or online. Lytro promises no more blurry subjects, and no shutter lag waiting for the camera’s lens to focus. A software update to the camera, coming soon, will even let you produce 3-D images.


Light-field technology heralds one of the biggest changes to imaging since 1826, when Joseph-Nicéphore Niépce made the first permanent photograph of a scene from nature. A single light-field snapshot can provide photos where focus, exposure, and even depth of field are adjustable after the picture is taken. And that’s just for starters. The next generation of light-field optical wizardry promises ultra-accurate facial-recognition systems, personalized 3-D televisions, and cameras that provide views of the world that are indistinguishable from what you’d see out a window.


But light-field cameras also demand serious computing power, challenge existing assumptions about resolution and image quality, and are forcing manufacturers to rethink standards and usability. Perhaps most important, these cameras require a fundamental shift in the way people think about the creative act of taking a photo.


In his manuscripts on painting, Leonardo wrote, “The air is full of an infinite number of radiant pyramids caused by the objects located in it. These pyramids intersect and interweave without interfering with each other.…The semblance of a body is carried by them as a whole into all parts of the air, and each smallest part receives into itself the image that has been caused.”


Nowadays, scientists and engineers prefer to think in terms of light rays rather than Leonardo’s more poetic “radiant pyramids.” But light-field photography is based precisely on his idea that the light arriving at any point—what he called the “smallest part” of the air—carries all the information necessary to reproduce any view that can be had from that position.


Doesn’t an ordinary camera do that? Not at all. In a conventional digital camera, the light rays hitting each point on the image sensor combine. The sensor records the total intensity of the light rays landing on each point, or photosite, but in the process loses directional information about where the different rays came from. So the best a typical camera can provide is the familiar two-dimensional photograph, which has a fixed point of view and a focus determined entirely by how the lens was set when the photo was snapped.
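To make that loss concrete, here is a minimal sketch in Python (NumPy), with purely illustrative array sizes: summing the light field over its directional axes is exactly what a conventional photosite does, and the direction information cannot be recovered from the result.

```python
import numpy as np

# Toy 4D light field inside a camera: L[y, x, v, u], where (y, x) is the
# position on the sensor and (v, u) is the direction a ray arrived from.
# All sizes here are illustrative.
rng = np.random.default_rng(0)
L = rng.random((64, 64, 9, 9))  # 64x64 photosites, 9x9 ray directions each

# A conventional photosite integrates over direction: one number per (y, x).
conventional_photo = L.sum(axis=(2, 3))
print(conventional_photo.shape)  # -> (64, 64); the directional axes are gone
```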


Light-field photography is far more ambitious. Instead of merely recording the sum of all the light rays falling on each photosite, a light-field camera aims to measure the intensity and direction of every incoming ray. With that information, you can generate not just one but every possible image of whatever is within the camera’s field of view at that moment. For example, a portrait photographer often adjusts the lens of the camera so that the subject’s face is in focus, leaving what’s behind purposefully blurry. Others might want to blur the face and make a tree in the background razor sharp. With light-field photography, you can attain either effect from the very same snapshot.


The information a light-field camera records is, mathematically speaking, part of something that optics specialists call the plenoptic function. This function describes the totality of light rays filling a given region of space at any one moment. It’s a function of five dimensions, because you need three (x, y, and z) to specify the position of each vantage point, plus two more (often denoted θ and φ) for the angle of every incoming ray. 


When measuring light in a region that’s free of any obstructions, you have to keep track of only four dimensions rather than five. Think about it: If you know that a ray isn’t blocked, it’s simple to follow where it goes. Record where it hits one plane (x and y) and the angle at which it hits (θ and φ) and you can work out where it came from and where it’s headed. The same is true for any other ray hitting that plane at any angle. So with just the knowledge of the light crossing a single plane, you can calculate the position and direction of the rays filling the surrounding region, so long as there are no obstructions present. This four-dimensional function is called the light field (hence the term light-field camera).
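The bookkeeping can be sketched in a few lines of Python (the function name and the millimeter units are mine, for illustration): four numbers pin down an unobstructed ray, and straight-line travel tells you where it crosses any other plane.

```python
import numpy as np

def propagate(x, y, theta, phi, d):
    """Where a ray parameterized on the plane z = 0 crosses the plane z = d.

    (x, y) is the crossing point on the reference plane, (theta, phi) are
    the ray's angles, and free space means straight-line travel.
    """
    return x + d * np.tan(theta), y + d * np.tan(phi)

# Example: a ray crossing the origin at 0.1 radian in x lands about 10 mm
# over after traveling 100 mm. Four numbers were enough to predict it,
# which is why the unobstructed light field is four-dimensional.
print(propagate(0.0, 0.0, 0.1, 0.0, 100.0))
```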

All this has been known for many years. Back in 1908—the same year he won the Nobel Prize in physics for color photography—the French scientist Gabriel Lippmann invented something he called “the integral camera.” His idea was to use an array of tiny lenses to project a scene onto a single sheet of film. The multiple views these lenses recorded could then be reconstituted into a 3-D image by viewing the processed film through an identical lens array. Three years later, Russian physicist P. P. Sokolov constructed the first integral camera using a pinhole array instead of the harder-to-fabricate lenses that Lippmann envisioned. Building the Lytro camera, however, required technologies that would not be realized for almost another century.


In place of film and pinholes, the Lytro camera uses a thin sheet containing thousands of microlenses, which are positioned between a main zoom lens and a standard 11-megapixel digital image sensor. The main lens focuses the subject onto the sheet of microlenses. Each microlens in turn focuses on the main lens, which from the perspective of a microlens is at optical infinity.


It’s not easy to visualize, but with the help of a diagram [see “Microlenses Galore”], you can see what’s going on. Light rays arriving at the main lens from different angles are focused onto different microlenses. Each microlens in turn projects a tiny blurred image onto the sensor behind it. The light contributing to different parts of those tiny blurred images comes from rays that pass through different parts of the main lens. In this way the sensor records both the position of each ray as it passes through the main lens (the x and y) and its angle (θ and φ) in a single exposure.
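In software terms, pulling the four-dimensional data out of such a readout is little more than a reshape. The sketch below is a deliberately simplified model, not Lytro’s actual pipeline: it assumes each microlens sits exactly over a p-by-p block of photosites, with no rotation, gaps, or resampling.

```python
import numpy as np

p = 9                  # photosites per microlens side (assumed)
ny, nx = 64, 64        # microlens grid (assumed)
raw = np.random.default_rng(1).random((ny * p, nx * p))  # stand-in readout

# Microlens index gives spatial position (y, x); a pixel's place within its
# p-by-p block tells which part of the main lens the ray passed through,
# i.e., its direction (v, u). Decoding is just a reshape and a transpose.
L = raw.reshape(ny, p, nx, p).transpose(0, 2, 1, 3)  # -> L[y, x, v, u]
print(L.shape)  # -> (64, 64, 9, 9)
```

To understand why this lets you focus your picture later, think about what it means to focus a regular camera.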


With a conventional camera, you have to adjust the focus so that all the light rays coming from one point on your subject converge at one point on the camera’s sensor. Depending on whether the subject is near or far, you move the lens in or out to achieve proper focus. A light-field camera doesn’t need to move its lens, because it can calculate the light field at any plane inside the camera. So it can generate images corresponding to various separations of lens and sensor, from close-ups to views of the distant horizon.
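Computationally, a standard way to do this is shift-and-add refocusing, a widely published technique (though not necessarily Lytro’s exact algorithm). Each (v, u) slice of the decoded array is a view of the scene through a different part of the main lens; shifting the views in proportion to their angular offset and averaging brings one depth plane into focus while everything else stays misaligned, and therefore blurred. A sketch in Python with NumPy and SciPy:

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def refocus(L, alpha):
    """Shift-and-add refocusing of a light field L[y, x, v, u]."""
    ny, nx, p, _ = L.shape
    c = (p - 1) / 2.0                      # index of the central view
    out = np.zeros((ny, nx))
    for v in range(p):
        for u in range(p):
            # Shift each view in proportion to its angular offset, then sum.
            dy, dx = alpha * (v - c), alpha * (u - c)
            out += subpixel_shift(L[:, :, v, u], (dy, dx), order=1)
    return out / p**2

L = np.random.default_rng(2).random((64, 64, 9, 9))  # stand-in light field
image = refocus(L, 0.5)  # alpha = 0 keeps the optical focus; others refocus
print(image.shape)       # -> (64, 64)
```

Sweeping alpha through a range of values produces the stack of differently focused pictures behind a refocusable image.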


But capturing the full light field has a price. First, there’s a vast increase in the amount of data the camera must acquire. Second, there is a significant loss in resolution of the final image, which is effectively limited to the number of microlenses rather than the resolution of the camera’s sensor.
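Some rough arithmetic shows the tradeoff. The numbers below are assumptions chosen to be plausible (the per-microlens figure is not a published specification), but they land near the 1.2-megapixel output discussed later in this article:

```python
# Back-of-the-envelope arithmetic with assumed round numbers:
sensor_pixels = 11_000_000   # photosites on the sensor
angular_samples = 9          # e.g., a 3x3 block of directions per microlens
microlenses = sensor_pixels // angular_samples
print(microlenses)           # ~1.2 million: the effective image resolution
```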


The transformation of the recorded four-dimensional light field into two-dimensional pictures requires computationally intensive Fourier-transform and ray-tracing algorithms. These in turn depend on processors that are powerful, compact, and, if the camera is to be a mass-market product, inexpensive.


The Lytro camera, which costs just US $400 or $500 (depending on the memory capacity), indeed packs some meaty processing hardware inside. And yet it has only three controls—an on-off switch, a shutter button, and a zoom for its eight-power main lens. There are no other settings to adjust, no lighting options to consider, and definitely no manual controls. When you shoot a picture, there’s no shutter lag, although the camera takes around 5 seconds to show you an image. During that interval, the device computes what would have been recorded on several different virtual cameras with the focus set at various depths. When you then tap on the touch screen, the focus appears to shift to where you want it in just a fraction of a second. 


“It’s our ethos to have very sophisticated technology and algorithms and to [package] them in a way that’s very easy to use,” says Kurt Akeley, Lytro’s chief technology officer. “Simpler is better.” Images can also be viewed in desktop software (Mac only) or uploaded to Lytro’s servers and shared on websites like Facebook, where viewers can click to refocus on whatever part of the image they select. But that’s only a sample of what’s possible with a light-field camera, and not everyone agrees with Lytro’s keep-it-simple strategy.


“Plenoptics is about way more than refocusing images,” says Winston Hendrickson, vice president of engineering for digital imaging at Adobe Systems, in San Jose, Calif. “In capturing all the spatial and angular information about a scene, you can do things like motion parallax, changing the perspective, and detecting objects: Single-lens stereo becomes easy.” A motion-parallax function would let the user get a sense of depth by shifting the vantage point slightly (the same depth cue some birds exploit when they bob their heads back and forth). “Lytro has a number of limitations, and I hope people see it as a work in progress,” Hendrickson says.


Lytro promises that parallax and 3-D imaging will soon be added to its desktop and online offerings. One limitation that the company can’t improve with software alone, however, is image resolution. At just 1.2 megapixels, Lytro’s images are dwarfed by the 10- to 16-megapixel photos people have come to expect from their digital cameras and even, increasingly, from their smartphones.


Lytro argues that limited resolution isn’t the drawback it appears to be. “There’s inertia in the marketplace to think about quality as it relates to resolution,” says Charles Chi, executive chairman at Lytro. “But when you think about how consumers use and share images today, it’s with cellphones and computers whose screens have less than 2 megapixels of resolution.”


Perhaps. But they also enjoy shooting and sharing video clips, another feature missing from the current Lytro camera. “If you don’t have video, it’s not going to get adopted,” insists Paul Gallagher of Pelican Imaging Corp., a start-up based just a few doors from Lytro in Mountain View, Calif. Pelican is betting that light-field photography will find a natural home in smartphones, which can leverage their powerful application processors for the necessary computations. 


“The generation of chips coming out in the first half of this year should be more than adequate for light-field photography,” says Gallagher, who is Pelican’s vice president of business development and marketing. Pelican’s system eschews a single, space-hogging main lens in favor of a bug-eye-like array of 16 to 25 microcameras, carefully aligned with a traditional (but higher-resolution) image sensor. The device will be capable of both 3-D and video imaging. Pelican is hoping it will appear in smartphones before the end of 2013.


While Lytro and Pelican test the consumer market, a German firm has for two years been quietly shipping light-field cameras for industrial use. Raytrix is targeting applications in research, microscopy, and optical inspection in manufacturing. Its cameras use yet another kind of optical setup, one championed by Todor Georgiev, an optics researcher at San Diego–based Qualcomm, who has over 50 patents in plenoptics. Georgiev has dubbed the new configuration Plenoptic 2.0.

“Light-field cameras are happening now because of graphics processing units—GPUs,” says Georgiev. “Each GPU has hundreds of processors packed into one little device, working in parallel. They are the supercomputers of today.”

With the Raytrix system, as with the Lytro camera, there is a main lens, an array of microlenses behind it, and then the image sensor behind the array. Here, however, the microlenses are focused not at infinity (as in the Lytro camera) but on the image formed by the main lens in a plane that’s some distance in front of the sensor. The main lens and each microlens act like the two lenses in a refracting telescope, creating an array of tiny, sharp, inverted images on the sensor. With this optical arrangement, the number of microlenses no longer limits the effective resolution of the final image. Indeed, the images produced can theoretically approach the full resolution of the sensor, although this has been tricky to achieve in practice.
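The rendering step can be sketched in a few lines, again as a simplified model of the published focused-plenoptic technique, with assumed sizes and an idealized layout: crop a central patch from each small, sharp, inverted microimage, flip it to undo the inversion, and tile the patches together; the patch size selects the plane of focus.

```python
import numpy as np

def render(microimages, k):
    """Tile a k-by-k central patch from each p-by-p microimage.

    microimages[j, i] is the tiny inverted image formed under microlens
    (j, i). Varying k changes which depth plane ends up in focus.
    """
    ny, nx, p, _ = microimages.shape
    lo = (p - k) // 2
    rows = []
    for j in range(ny):
        row = [microimages[j, i, lo:lo + k, lo:lo + k][::-1, ::-1]
               for i in range(nx)]
        rows.append(np.hstack(row))
    return np.vstack(rows)

mi = np.random.default_rng(3).random((50, 50, 15, 15))  # stand-in microimages
print(render(mi, 3).shape)  # -> (150, 150) from a 50x50 microlens grid
```

With k greater than 1, the output carries k-squared pixels per microlens, which is how this design escapes the one-pixel-per-microlens limit of the earlier arrangement.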


With a glass array of around 20 000 microlenses in its €20 000 (about $26 500) R11 camera, for example, Raytrix manages to produce 2.7-megapixel still images from an 11-megapixel sensor and video at up to six frames per second. Unlike with Lytro, though, the number-crunching hardware needed for Plenoptic 2.0 cannot fit into a stylish anodized case. A Raytrix camera must be linked through a gigabit Ethernet cable to a PC that contains a high-end Nvidia GeForce GTX 580 graphics card, which itself costs more than the entire Lytro camera. 


The Raytrix cameras are sophisticated enough to deliver on some of the more ambitious opportunities that light-field photography offers. For instance, the 20 000 microlenses in the R11 are not identical. Instead, they mix three different focal lengths. That design sacrifices some lateral resolution but provides greater depth of field—the range through which everything in an image is sharp—up to six times what you’d get with a conventional camera using the same main lens. That’s useful for optical inspection of components using macrophotography, gives striking all-in-focus results with telephoto pictures, and extends the distance over which Raytrix can calculate 3-D views.


And Georgiev isn’t done testing the waters of Plenoptic 2.0. He is now experimenting with microlenses that have different aperture sizes to give his cameras higher dynamic range, enabling them to simultaneously capture detail in both the darkest shadows and the brightest highlights. He also envisions adding more color filters, to make single-exposure multispectral imaging possible, or adding polarizing filters, which would attenuate reflections to whatever degree the user wants, allowing crystal clear photos through glass or water.


Creating groundbreaking technology is one thing; getting consumers and companies to buy it is another. Both Lytro and Pelican Imaging hope to harness viral marketing—the wow factor that gets people talking about their products. But they are up against a $40 billion imaging industry that remains fixated on high pixel counts, powerful zoom lenses, and an ever-growing variety of flashy features. For most camera companies, and for most consumers, light-field imaging isn’t on the agenda yet. “There’s definitely a lot of education that needs to be done,” says Lytro’s Chi.


Like Apple, Lytro emphasizes style, design, and ease of use. Also like Apple, Lytro favors a closed vertical ecosystem, from capture to playback. That closed ecosystem, however, prevents other companies from working with its light-field files. “It was the inability to get plenoptic data from Lytro that led us to do our own research,” says Adobe’s Hendrickson. “We will introduce light-field editing in Photoshop when the time is right, but it’s not ready for a broad scale right now.” Georgiev shares this frustration. “If you buy a Lytro camera and you want to do, say, 3-D video, you can’t, unless Lytro provides the software. I’m afraid Lytro may be shooting themselves in the foot,” he says. “At first it looks like they’re gaining competitive advantage, but it’s actually closing the door for collaboration and progress.” Lytro’s position is that it wants to develop the technology to a level of stability before releasing its file format for wider use.


Photos: Pelican Imaging

Smartphone Cam: Pelican Imaging hopes to bring light-field photography to smartphones with a bug-eye-like camera array. Images could then be focused at different depths after they are taken.

In this fast-moving field, though, such stability may be a long way off. Raytrix, for example, is already developing specialized microlenses to boost resolution for 3-D facial recognition in security systems, and the company promises to have video cameras capable of recording high-definition (1080p) plenoptic movies at 30 frames per second by the end of the year. These new cameras could revolutionize 3-D television and film, which currently require expensive stereo cameras and often time-consuming postproduction work. Not only could 3-D plenoptic movies be refocused after they were shot, they could also provide personalized experiences for different viewers, such as customizable stereo separation to give more natural 3-D effects (and fewer headaches for viewers).


Georgiev’s dream is to build a large plenoptic camera that can capture multiple 3-D views and generate an image that’s indistinguishable from what you get looking out a window. “Today, I can capture 10 views and interpolate the rest,” says Georgiev. “But give me a 200-megapixel sensor and it would be much, much better.”

While few photographers want or need the 40-megapixel sensors found in some of today’s top-end models, tomorrow’s plenoptic cameras will require all that resolution and more. “We can use as many pixels as we can get our hands on,” says Lytro’s Akeley. “There are huge opportunities to build cameras that capture many more rays and as a result produce sharper, bigger pictures, with greater features.”


Ultimately, though, the success of light-field imaging may hinge on the culture of photography rather than on its technology. Lytro’s Chi concedes that many professional photographers feel threatened by technology that hands creative control to the viewer. Come the next generation of plenoptic cameras—which will allow you to adjust the picture’s exposure, shift colors, remove reflections, and jump into 3-D at the click of a mouse—such reactions are bound to get even stronger.

How consumers will react is a big question mark too. Fixing a blurred image is one thing, but do we really want to spend hours tinkering with our pictures? Will light-field technology democratize photography or destroy it, splintering a form of art into billions of ultrapersonalized collections that mean little to anyone else?


Christopher Rauschenberg, owner of the Blue Sky photo gallery in Portland, Ore., offers a simple observation: “Viewers already decide what they see in a picture. Pick any artwork that speaks to you, think about what you see in it, and then ask someone else. They will see something different.” 


Beauty has always been in the eye of the beholder. But now, thanks to light-field photography, focus and perspective, among other attributes, will be too.


This article originally appeared in print as “Focusing On Everything.”

About the Author

Mark Harris, a Seattle-based freelancer, used to be the reviews editor for Digital Camera magazine in the United Kingdom, so he was quite eager to write about the new “light-field” or “plenoptic” cameras, which allow photos to be refocused after they’re taken. But Lytro’s new light-field camera was a letdown, he says. “After just a handful of shots, it started to feel gimmicky.” Harris expects the thrill to return when more sophisticated models emerge. “I can’t wait for the first consumer Plenoptic 2.0 cameras, with higher resolutions and 3-D imaging.”