An Interactive Introduction to Lightfield Images
Imagine being able to take a photograph and then change the angle from which it was taken years later. Lightfield imaging allows you to do just this (amongst other magic-seeming things) and, better yet, it's a remarkably simple idea at heart.
In this article I will explain how lightfield imaging works using a series of interactive examples. Through these examples we'll see how regular pictures are taken, what a lightfield actually is, and finally how we can use one to change the perspective from which a picture was taken. This article is aimed at those who want to gain a feel for how lightfield imaging works without touching on the mathematics.
Flat Earth Theory
To keep things simple, we'll consider a 2D world and a camera which captures 1D photographs, though all the same principles apply to the real world. The figure above shows our 2D world which contains a number of colourful lines and a camera (represented here as an eye).
When the camera looks out at the scene, rays of light (shown as faint lines in the diagram) reflect off the lines in the scene and travel towards the 'focal point' of the camera.
The image captured by the camera is shown to the right of the scene. Images are created according to the colour and angle of the rays of light entering the camera. For example, if the left-most ray of light entering the camera is yellow, the left part of the image will also be yellow. Because our camera is one-dimensional, our picture is actually just a line but we've drawn it here stretched vertically to make it easier to see.
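The idea that an image is just "the colours of the rays entering the camera, ordered by angle" can be sketched in a few lines of code. This is not the article's actual implementation, just an illustration: rays are hypothetical (angle, colour) pairs, with one ray per pixel.

```python
# A minimal sketch (not the article's actual code) of how a 1D camera
# turns incoming rays into an image: each ray is an (angle, colour)
# pair, and pixels are filled left to right in order of ray angle.

def capture_image(rays):
    """Map the rays hitting the focal point to a 1D image of colours."""
    # Sort rays so the left-most ray fills the left-most pixel.
    ordered = sorted(rays, key=lambda ray: ray[0])
    # With one ray per pixel, the image is simply the ordered colours.
    return [colour for _, colour in ordered]

# Three rays entering the camera, in no particular order:
rays = [(0.3, "red"), (-0.2, "yellow"), (0.1, "blue")]
print(capture_image(rays))  # → ['yellow', 'blue', 'red']
```

The yellow ray has the smallest angle, so it ends up as the left-most pixel, exactly as described above.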
Try dragging the camera around with your mouse to see how the image changes according to the rays of light entering the camera. Make sure you've got a feel for how the image shown is being produced by the camera before moving on.
Notice that in this figure our camera only captures a few rays of light, producing a fairly low-resolution image. Try increasing the resolution by clicking the "Enable HD" button. You should notice the image now contains finer details than before. Once you're finished, turn HD mode off again before moving on: it will make the next diagram easier to understand!
Let's take a series of pictures from several positions along the bottom of our 2D world. Click the animation to watch (turn off HD mode to make things clearer). Each time the camera takes a new picture, the rays of light captured by the camera are retained and the captured image appears on the right.
At this point we have a collection of views of the scene from different angles. As we can see from the mess of light rays criss-crossing the scene, we've captured a whole range of light rays travelling in all sorts of directions. Together, all these light rays are known as a lightfield. We'll see in the next step how we can turn this lightfield into pictures of our scene from new camera angles.
By the way, real 'lightfield cameras' would take all of these pictures in one go, capturing all those rays of light (the whole lightfield) at a single moment in time. You can think of a lightfield camera as being a bit like a bunch of smaller cameras strapped next to each other.
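One way to picture what a captured lightfield actually stores: in our 2D world, every ray can be identified by the camera position it was captured from and the angle it arrived at. A rough sketch, where `scene_colour` is a hypothetical stand-in for tracing the real scene geometry:

```python
# A sketch of what "capturing a lightfield" records in the article's
# 2D setup. Each stored ray is keyed by (camera position, ray angle);
# scene_colour is a hypothetical placeholder for the real ray tracing.

def capture_lightfield(camera_positions, angles, scene_colour):
    """Record the colour of every ray seen from every camera position."""
    lightfield = {}
    for x in camera_positions:      # one 1D photo per camera position...
        for theta in angles:        # ...and one pixel per ray angle
            lightfield[(x, theta)] = scene_colour(x, theta)
    return lightfield

# Five camera positions, each capturing three ray angles,
# gives 15 stored rays in total:
field = capture_lightfield(
    [0, 1, 2, 3, 4],
    [-0.2, 0.0, 0.2],
    lambda x, theta: "grey",  # placeholder scene: everything is grey
)
print(len(field))  # → 15
```

A real lightfield camera would fill this table in a single exposure rather than one camera position at a time.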
Imagine a camera placed somewhere in a lightfield, as illustrated in the scene above. That camera would capture a set of rays of light which meet at its focal point. If you look closely, it can be seen that the rays captured by the camera (nearly) coincide with rays in the lightfield. Based on this observation, we can synthesise the view our camera would see by replacing each ray which would be captured by the camera with the nearest ray captured in the lightfield. By looking up the colour of the relevant rays in the lightfield, a new image can be synthesised.
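The "replace each ray with the nearest stored ray" step can be sketched as a simple nearest-neighbour lookup. The distance measure below (squared difference in position plus squared difference in angle) is an assumption for illustration; the article's diagrams match rays geometrically instead.

```python
# A minimal sketch of view synthesis from a lightfield: for each ray
# the new camera would capture, find the closest stored ray (by a
# simple position-and-angle distance, an assumption for illustration)
# and reuse its colour as the pixel value.

def synthesise_view(wanted_rays, lightfield):
    """Approximate each wanted (position, angle) ray with the colour
    of the nearest ray stored in the lightfield."""
    image = []
    for (x, theta) in wanted_rays:
        nearest = min(
            lightfield,  # iterate over the stored (position, angle) keys
            key=lambda ray: (ray[0] - x) ** 2 + (ray[1] - theta) ** 2,
        )
        image.append(lightfield[nearest])
    return image

# A tiny two-ray lightfield, then a camera whose rays fall close to
# (but not exactly on) the stored rays:
field = {(0, 0.0): "red", (1, 0.0): "blue"}
print(synthesise_view([(0.1, 0.0), (0.9, 0.0)], field))  # → ['red', 'blue']
```

Because each wanted ray is only approximated by its nearest stored ray, the synthesised image is close to the true view but not identical, which is exactly the source of the artefacts discussed below.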
Click the 'Play Animation' button to see how the rays captured by our camera can be replaced with nearby rays in the lightfield. As the animation plays, the pixels from our lightfield images are highlighted corresponding with the rays being used to approximate the camera's view.
Finally, the smaller image below the scene and lightfield images shows an image captured by an actual camera at the location shown (top) and the image synthesised from the lightfield (bottom).
Try moving the camera around the scene and watch how the set of rays used from the lightfield changes and how the synthesised image (mostly) matches the view captured by the camera.
As you can see, the synthesised images suffer from artefacts where rays in the lightfield don't exactly correspond with those for the camera. This effect becomes less pronounced the denser the lightfield is made. Click the 'Enable HD Mode' button to see how using a much denser lightfield improves the synthetic images.
There are some problems which increasing the density of rays in the lightfield cannot correct. For example, try tilting the camera so that it is looking horizontally. Many of the rays required will be at a very shallow angle and have no corresponding ray in the lightfield. Consequently, these rays cannot be approximated at all.
For another limitation, try moving the camera in amongst the lines in the scene. You'll find that garbage images are produced which don't remotely match the image a camera in this location would capture. This problem is caused by the fact that the rays in the lightfield do not extend to the focal point of the camera and therefore don't approximate the camera's view. Lightfield images might let you go back in time and change your pictures but they won't let you see through walls!
If you're interested in finding out more about lightfields, the paper 'Light Field Rendering' by Marc Levoy and Pat Hanrahan (from SIGGRAPH '96) (PDF) is a great start.
If you want to find out how the diagrams on this page work, take a look at the source code which contains a fair quantity of explanatory comments.