Light Field Research
Also, on my way out the door to Montreal (with a quick stop in Cuba "on the way"), I've put some movies of my research results online. They're uncompressed .avi files, so my apologies for the size. You can check them out in this directory.
What is a light field?
A light field is a 4D data structure that records the set of light rays permeating a static 3D scene. Although a ray in 3D space nominally has both a position and a direction, its radiance remains constant along its direction of propagation (in the absence of occlusions), so four dimensions suffice to describe the whole set.
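The page doesn't commit to a particular parameterization, but a common choice is the two-plane ("light slab") form, where each ray is indexed by its intersections (u, v) and (s, t) with two parallel reference planes. A minimal sketch in NumPy, with illustrative resolutions:

```python
import numpy as np

# Hypothetical two-plane parameterization: each ray is indexed by its
# intersection (u, v) with a first reference plane and (s, t) with a
# second, parallel plane. The sampling rates here are illustrative.
U, V, S, T = 16, 16, 64, 64
CHANNELS = 3  # RGB radiance carried by each ray

# The light field: radiance L(u, v, s, t) for every sampled ray.
light_field = np.zeros((U, V, S, T, CHANNELS), dtype=np.float32)

def radiance(u, v, s, t):
    """Look up the radiance of the ray through (u, v) and (s, t)."""
    return light_field[u, v, s, t]
```

Fixing (u, v) and sweeping (s, t) gives one captured view of the scene; the full array is simply all such views stacked along the angular axes.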
Why are they useful?
Light fields were first used as a means to render computer graphics at a high speed independent of scene geometry. More recently, they have been explored as a kind of intermediary between the real world and computer vision techniques. Most computer vision techniques work frame by frame and begin by throwing out information, attempting to boil each captured image down to its bare essentials – through edge detection, for example. Light field approaches differ: they take a series of images, first storing all the information available in each frame, and then process the resulting light field to extract the desired information. Because more information is available to the decision-making process, more powerful results can be obtained.
Once a scene has been captured as a light field, it can be processed in all the same ways as a 1D sound clip, a 2D image, or a 3D video (video is 3D because it has two spatial dimensions and one temporal dimension). Because a light field contains all the information about the way a scene appears, including lighting, shadow, and surface behaviors such as diffuse and specular reflection, all within a single 4D array, there is a potential to accomplish very complex tasks using simple techniques.
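The "same ways as a sound clip or image" point can be made concrete: standard n-dimensional signal-processing tools apply to the 4D array unchanged. A small sketch using NumPy's n-dimensional FFT on toy data (the array contents here are random placeholders, not a real capture):

```python
import numpy as np

rng = np.random.default_rng(0)
lf = rng.random((8, 8, 32, 32))   # toy radiance samples L(u, v, s, t)

# The n-dimensional FFT treats the light field exactly like a sound
# clip (1D) or an image (2D): the same transform, just more axes.
spectrum = np.fft.fftn(lf)

# The inverse transform round-trips to the original signal.
recovered = np.fft.ifftn(spectrum).real
```

Any linear filter defined in this 4D frequency domain is then a pointwise multiply on `spectrum`, exactly as it would be for an image.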
What do I do?
The majority of my research has centered on the realization that a single point in 3D space maps to a plane in a light field, and that the orientation of this plane depends only on the depth of the point in the scene (relative to the first reference plane). This realization bears a striking resemblance to previous work by Dr. Bruton in video processing, based on the observation that a point moving along a linear trajectory traces a line in a 3D video signal.
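Under the two-plane parameterization (an assumption here, since the page doesn't spell out the geometry), this point-to-plane mapping falls out of similar triangles: every ray through a fixed scene point crosses the two reference planes at coordinates related linearly, with a slope set only by the point's depth. A sketch with illustrative plane spacing `D`:

```python
import numpy as np

# Hypothetical geometry: first reference plane at z = 0 (coords u, v),
# second parallel plane at z = D (coords s, t). D is illustrative.
D = 1.0

def rays_through_point(point, us, vs):
    """For each (u, v) on the first plane, return where the ray from
    `point` through (u, v) crosses the second plane."""
    px, py, pz = point
    slope = 1.0 - D / pz              # depends only on the depth pz
    s = slope * us + (D / pz) * px    # linear in u -> a plane in 4D
    t = slope * vs + (D / pz) * py    # linear in v
    return s, t

us = np.linspace(-1.0, 1.0, 5)
vs = np.zeros(5)
s, t = rays_through_point((0.2, 0.0, 2.0), us, vs)
# s varies linearly with u; the constant slope encodes the depth.
```

Because `s` and `t` are each affine in `(u, v)`, the set of rays through one point is a 2D plane in the 4D `(u, v, s, t)` space, and its orientation is fixed by `pz` alone.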
Here's a rendered view of a virtual scene modeled as a light field. This image was produced by measuring the required rays from a virtual scene model using a raytracer.
The resulting model can be viewed interactively - that is, you can control the position of the camera, and have the rendered scene respond in realtime.
Here's an image of the camera gantry - click the image for a larger version.
And a rendered view of a real scene modeled as a light field using the camera gantry.
Again, the resulting model can be viewed interactively in realtime.
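The real-time behavior described above is possible because rendering from a light field is a table lookup rather than a geometry computation. A minimal sketch of one common approach (an assumption on my part, not necessarily the viewer's exact method): blend the nearest captured views with bilinear weights. The array contents are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
lf = rng.random((8, 8, 32, 32, 3))   # toy L(u, v, s, t, rgb) samples

def render_view(lf, u, v):
    """Render the view at fractional camera position (u, v) by
    bilinearly blending the four nearest captured views. The cost is
    a fixed array blend, independent of scene complexity."""
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    u1 = min(u0 + 1, lf.shape[0] - 1)
    v1 = min(v0 + 1, lf.shape[1] - 1)
    fu, fv = u - u0, v - v0
    return ((1 - fu) * (1 - fv) * lf[u0, v0]
            + fu * (1 - fv) * lf[u1, v0]
            + (1 - fu) * fv * lf[u0, v1]
            + fu * fv * lf[u1, v1])

img = render_view(lf, 3.5, 2.25)     # a novel in-between viewpoint
```

Moving the camera just changes the blend weights, which is why the frame rate does not depend on what is in the scene.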
The frequency-planar filter extracts the objects in a scene that lie at a single, prescribed depth; all other scene elements are attenuated and blurred. The following images are a color version of the results shown in the paper: the first is the input light field, and the subsequent images have been filtered to extract the poster and the coaster, respectively.
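The idea behind such a filter can be sketched in a 2D (u, s) slice, an epipolar-plane image: content at one depth traces lines of a single slope m, so its spectrum concentrates on the line w_u + m·w_s = 0, and a soft passband around that line keeps one depth while attenuating the rest. This is my own simplified reconstruction of the technique, not the paper's implementation; the slope convention and Gaussian passband are assumptions.

```python
import numpy as np

def frequency_planar_filter(epi, m, width=0.05):
    """Pass energy near the spectral line for depth-slope m; attenuate
    the rest. `epi` is a 2D (u, s) slice of a light field."""
    nu, ns = epi.shape
    wu = np.fft.fftfreq(nu)[:, None]
    ws = np.fft.fftfreq(ns)[None, :]
    # Distance of each frequency sample from the target spectral line.
    dist = np.abs(wu + m * ws) / np.sqrt(1.0 + m ** 2)
    mask = np.exp(-(dist / width) ** 2)       # soft Gaussian passband
    return np.fft.ifft2(np.fft.fft2(epi) * mask).real

# Demo: a slice containing a single depth layer (slope m = 1).
uu, ss = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
epi = np.cos(2 * np.pi * 5 * (ss - uu) / 64)
matched = frequency_planar_filter(epi, 1.0)     # passed nearly intact
mismatched = frequency_planar_filter(epi, -1.0)  # strongly attenuated
```

In 4D the same construction becomes a passband around a plane through the origin, which is what selects one scene depth at a time.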