Monday, December 20, 2010

Computational Photography
This abstract diptych consists of the right edge of two pictures of a white door, taken with my point-and-shoot.

The bowed lines on the right show the "barrel distortion" (only slightly exaggerated for illustrative effect in Photoshop) you normally get near the edge of the frame with most zoom lenses at the wide setting (at the tele setting they typically produce some "pincushion distortion" instead). The straight lines on the left came straight out of the camera: that photo was taken with distortion control switched on.

The distortion correction was performed not optically but by the camera's computer as it processed the image. The effect was to make the lens of my relatively inexpensive point-and-shoot the functional equivalent of a lens costing many times what the entire camera did.

It's a part of something called computational photography that constitutes what might be called the second phase of digital photography, one that is rapidly changing what we can do with our cameras. In the first phase, digital cameras used their computers to render the images produced by electronic sensors in a form that roughly mimicked what film could do. Now the computers in our cameras are adding a host of functions that could never be performed with film, or only performed awkwardly and expensively.

Computational photography is about replacing or supplementing optics with computers. Distortion control and perspective control (eliminating slanting lines when a camera is pointed upward) are two examples. Another is automatically compensating for lens flaws, which are always present to a greater or lesser degree. Designing a lens is always a matter of trade-offs among various kinds of aberrations and distortions; the better the trade-offs are managed, the more expensive the lens. But now manufacturers can optimize lenses for inexpensive production and correct the results in software, which is much cheaper.
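To give a feel for how such in-camera correction works, here is a minimal sketch of undoing barrel distortion with a simple one-parameter radial model. The function name, the coefficient k1, and the fixed-point inversion are my illustrative assumptions, not the actual firmware of any camera:

```python
import numpy as np

def undistort_points(pts, k1, center):
    """Invert a one-parameter radial distortion model.

    The forward model maps an undistorted radius r_u to a distorted
    radius r_d = r_u * (1 + k1 * r_u**2); k1 < 0 bows straight lines
    outward (barrel distortion). We recover r_u from r_d by fixed-point
    iteration, then rescale each point about the image center.

    pts:    (N, 2) array of distorted pixel coordinates
    k1:     radial distortion coefficient (assumed small)
    center: (2,) optical center in pixels
    """
    p = pts - center
    r_d = np.linalg.norm(p, axis=1, keepdims=True)
    r_u = r_d.copy()
    for _ in range(20):  # converges quickly for small |k1|
        r_u = r_d / (1 + k1 * r_u**2)
    # scale = r_u / r_d, leaving the center point (r_d == 0) untouched
    scale = np.divide(r_u, r_d, out=np.ones_like(r_d), where=r_d > 0)
    return p * scale + center
```

A real camera applies the same idea to every pixel of the image (remapping, with interpolation) rather than to a list of points, but the geometry is identical: push each pixel back along its radius until the lens's measured curvature is cancelled.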

Another example is the High Dynamic Range (HDR) capability that's being built into more and more cameras. By shooting two or more pictures in quick succession and combining them in software, the camera can render pleasing shadow detail without blowing out the highlights.
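The blending step can be sketched in a few lines. This is a toy version of exposure fusion on two frames already scaled to [0, 1]; the Gaussian weighting around mid-gray is my illustrative choice, not what any particular camera uses:

```python
import numpy as np

def fuse_exposures(short_exp, long_exp):
    """Blend a short (highlight-preserving) and a long
    (shadow-revealing) exposure into a single frame.

    Each pixel is weighted by how far its value sits from clipping:
    values near 0.5 get the highest weight, values crushed to black
    or blown to white get almost none.
    """
    def weight(img):
        # Gaussian centered on mid-gray; the epsilon avoids a
        # zero-division where both exposures are clipped.
        return np.exp(-((img - 0.5) ** 2) / (2 * 0.2 ** 2)) + 1e-6

    w_s, w_l = weight(short_exp), weight(long_exp)
    return (w_s * short_exp + w_l * long_exp) / (w_s + w_l)
```

In a shadow region, the short exposure is nearly black, so the long exposure's pixel dominates; in a bright window, the long exposure is blown out, so the short one wins. Production HDR adds alignment between frames and tone mapping, but the core is this per-pixel weighted average.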

This is just the beginning. As this article about computational photography in the NYT points out, experimenters are already doing things that seem straight out of science fiction, such as cameras without lenses and cameras that can shoot around corners using lasers.

In other words, a lot of today's equipment will become obsolete as we change our ideas of what digital photography can and should do -- just the way manufacturers like it.
