Here’s an introduction/overview of light-field photography I’ve written.
Light Field Photography
The purpose of light field photography is to capture the intensity of the rays of light travelling in every direction through every point in space of a scene. At first this may sound intimidating, but the concept is incredibly simple.
Conceptually the idea is similar to Ambisonics or wave field synthesis, in that it captures a representation of a scene that can be decoded into a multitude of formats, or, if accurately captured and reproduced, is indistinguishable when played back from viewing the scene with your own eyes (i.e. your eyes can focus at various depths within the scene, and each eye receives a slightly different image, just like in real life). Unfortunately only a few expensive display technologies currently exist that can reproduce a full light field.
The concept can be quite difficult to visualise because human eyes have built-in lenses that focus the rays of light in a scene onto a two-dimensional plane of intensity values, effectively reducing the amount of information in a scene that a human can “see” at any single point in time.
Light field photography can be split into two main areas: capture and reproduction. In neither area is a perfect solution available, but compromises can be made that produce interesting and potentially useful results.
The ideal way to capture a light field would be a very dense two-dimensional array of cameras, each capturing images with an infinite depth of field (everything in focus), and all capturing a frame at exactly the same time. The result would be a large number of images, all captured from slightly different positions, with every part in focus.
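A light field captured this way is commonly modelled as a four-dimensional array: two dimensions for the camera's position in the grid and two for the pixel within that camera's image. A minimal sketch in Python/NumPy, where the grid size and image resolution are illustrative assumptions, not values from any particular rig:

```python
import numpy as np

# L[v, u, y, x] = intensity seen by the camera at grid position (u, v)
# at pixel (x, y). Grid and image sizes below are arbitrary examples.
num_v, num_u = 8, 8          # 8x8 grid of cameras
height, width = 120, 160     # resolution of each camera's image

light_field = np.zeros((num_v, num_u, height, width), dtype=np.float32)

# Recovering the image from any single camera position is just a 2D slice:
centre_view = light_field[num_v // 2, num_u // 2]
print(centre_view.shape)  # (120, 160)
```

All the decoding techniques described later (stereo pairs, refocusing) amount to different ways of slicing or combining this 4D array.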
There are a number of ways to make this practical. The first is creating a lens similar in structure to the multi-lensed compound eye of a fly. Constructed correctly, such a lens can capture multiple images from slightly different positions on a single camera’s CCD at the same time. Although this method relies on the capturing camera having a very high resolution CCD sensor, it is probably the most elegant solution for capturing light-field video.
Another solution is to use a single camera on a platform that moves in one or two dimensions. To capture a light-field image the camera is moved mechanically to each required position, effectively emulating a dense multi-camera set-up. This has the downside of taking a great deal of time to capture a single light-field image, but the advantage of emulating an almost perfect light-field camera.
It is also possible to emulate an ideal multi-camera system fairly closely using a specially constructed rig holding multiple cameras, with some method of syncing the frame capture of each. The main downside is that cameras aren’t physically small enough to form a dense grid, so the quality of the resulting light-field representation suffers.
As stated previously, only a few expensive or impractical technologies currently exist that can reproduce a full light field, but because a light field (ideally) captures all the available light information in a scene, it can be “decoded” in a number of interesting ways, some of which I will describe here.
Because light-field images are captured by taking a number of “standard” images from slightly different positions, it is trivial to reproduce a stereoscopic image with an adjustable appearance of 3D depth, simply by selecting two of the “standard” images that are horizontally displaced (useful for calibrating 3D displays and standard 3D capture rigs). This also opens up the possibility of creating depth maps and 3D models from light-field images using standard stereoscopic processing techniques (potentially with high accuracy, given the number of images captured from different positions).
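Selecting a stereo pair from the camera grid can be sketched as follows. This assumes the 4D array layout from earlier (`L[v, u, y, x]`); the grid size, image size and the `stereo_pair` helper are all illustrative, not part of any standard API:

```python
import numpy as np

# Illustrative 8x8 grid of 120x160 views (random data stands in for a capture).
light_field = np.random.rand(8, 8, 120, 160).astype(np.float32)

def stereo_pair(lf, row, separation):
    """Return (left, right) views from one camera row, `separation` grid
    positions apart; a larger separation gives a wider baseline and a
    stronger apparent 3D depth."""
    num_u = lf.shape[1]
    left_u = num_u // 2 - separation // 2
    right_u = left_u + separation
    if left_u < 0 or right_u >= num_u:
        raise ValueError("separation too large for this camera grid")
    return lf[row, left_u], lf[row, right_u]

# A narrow and a wide baseline from the same capture:
left, right = stereo_pair(light_field, row=4, separation=4)
```

The same capture can thus be replayed at many different baselines after the fact, which is what makes the apparent depth adjustable.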
Light-field images can also be used to produce “standard” images that can be dynamically refocused in post-processing, bringing certain areas of the image into or out of focus. This could potentially be augmented with eye tracking, bringing into focus whichever part of the image the user is looking at. With further processing it is also possible to create images with an adjustable depth of field (i.e. selecting the depth range of the image that is in focus).
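One common way to implement this kind of refocusing is “shift and sum”: shift each sub-aperture image in proportion to its camera's offset from the grid centre, then average, so that objects at one chosen depth align (sharp) while everything else smears (blurred). A rough sketch, again assuming the illustrative `L[v, u, y, x]` layout; the parameter `alpha`, which selects the focal depth, and the integer-pixel shifts are simplifying assumptions:

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-sum refocusing: `alpha` scales each view's shift and so
    selects which depth plane ends up in focus."""
    num_v, num_u, height, width = lf.shape
    cv, cu = (num_v - 1) / 2.0, (num_u - 1) / 2.0
    out = np.zeros((height, width), dtype=np.float64)
    for v in range(num_v):
        for u in range(num_u):
            dy = int(round(alpha * (v - cv)))
            dx = int(round(alpha * (u - cu)))
            # np.roll is a crude integer shift; real implementations
            # interpolate to get sub-pixel shifts.
            out += np.roll(lf[v, u], shift=(dy, dx), axis=(0, 1))
    return out / (num_v * num_u)

# Random data stands in for a real capture; vary alpha to move the focal plane.
lf = np.random.rand(8, 8, 64, 64)
image = refocus(lf, alpha=1.5)
```

Sweeping `alpha` over a range of values produces a focal stack, which is the basis for the adjustable depth-of-field effect mentioned above.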
This is by no means a full list of the potential ways to capture and reproduce light-field images; there are no doubt numerous other techniques, either currently in existence or yet to be developed. Hopefully it provides an overview of some of the methods, techniques and potential of a field that is yet to be fully explored.
http://graphics.stanford.edu/projects/lightfield/ - Overview of some light-field technologies from Stanford.
http://www.youtube.com/watch?v=9H7yx31yslM - Demo video of dynamic refocusing by Stanford.
http://www.futurepicture.org/?p=47 - A multi-camera light-field set-up created by futurepicture.org.
http://www.notcot.com/archives/2008/02/adobe-lightfiel.php - Adobe multi-lens light-field lens on a single camera body.
http://www.youtube.com/watch?v=FF1vFTQOWN4 - Volumetric light-field display.
http://www.umiacs.umd.edu/~aagrawal/sig08/BuildingLightFieldCamera.html - Pinhole light-field lens on a single camera body.
http://en.wikipedia.org/wiki/Light_field - Wikipedia article.