Lightfield Photography

It has become rarer lately for me to run into a new technology where I’m completely clueless about how it works and I have that “FM” moment. (“Friendly Magic” is the polite translation.)

In computer geek circles, when someone relatively clueless (not a pejorative, just a technically accurate description: they may be highly clueful, but relatively clueless on the point in question) asks in all innocence “How did you do that?”, often about a subject that would take a day or two to explain adequately, the answer may just be “F.M.” and a grin 8-)

I loved that “This is just F.M.” feeling when seeing some technology for the first time, and especially as a kid (and even well into my early professional career) I had it often. I would seek out tech magazines for the newest “way cool and WTF how did they do that?” F.M. 8-) moment.

But as the mind’s hoard grows, the F.M. 8-) moments become more rare. One of nature’s cruel ironies. Those folks who love tech the most for just that marveling at the F.M. 8-) moment are the ones doomed to have it fade the fastest, as they come to have more “Ah Hah!” moments, which eventually become “oh, yeah, that…” moments.

(The F.M. moments are not the same as the “Oh Bother” moments, where you know you can work out some technical issue and have a clue about the basic idea behind it, but also know it’s going to take you more time than it is worth and there will be no “Oh Boy” moment at the end. So, for example, figuring out how GIStemp was screwing up the temperature data was an “Oh Bother. I can do it, but it’s drudge work.” process, not a moment of joy…)

So after a fairly long drought of such a moment, I was just presented with one.

Listening to NPR, I heard them talking about a “new kind of camera” and a “new kind of photography”. “Oh, sure…” I think. I know more about photography than most professional photographers. I wonder what thing I already know that they think is new.

Well, turns out they were right. It really IS a whole new kind of photography, called “Light Field” photography. You can take a photo with, say, a flower in focus in the foreground and the background blurry. Then later click on the background and make IT the focused zone and see what’s happening there… “How does it work? F.M. GRIN!!!”

I am extraordinarily happy to tell you that I have no clue whatsoever how they do it. GRIN 8-)

I’ve found a web page at Stanford (where the early work was done) that summarizes it a bit and has links to the details on “how” and the physics. I’m going to put the link here. My guess is about a week to sort through it and turn the F.M. into “Ah Hah!”… but I’m not going to do that; at least not for a while. I’m going to cherish this F.M. moment for a while. Maybe even a few months. Each time I click on an image and it refocuses, it’s just a GRIN all over again. I’m not going to let that go easily, and certainly not for a week’s worth of brain work… (Hint: It looks like one of the techniques is to use an array of microlenses or micromirrors to present a lot of data to a more normal sensor, from which a “light field” is calculated. Yes, it’s F.M. all done with mirrors 8-) GRIN! )
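
For anyone less determined than I am to hang onto the mystery: the usual description of the trick is that the microlens data amounts to a 4-D “light field”, and refocusing after the fact is a shift-and-add over the sub-aperture views. Below is a minimal Python sketch of that idea; the array layout, the function name, and the slope parameter are my own illustration, not Lytro’s actual pipeline.

    import numpy as np

    def refocus(light_field, slope):
        """Synthetic refocusing of a 4D light field by shift-and-add.

        light_field : array of shape (U, V, S, T) -- one S x T sub-aperture
                      image per (u, v) aperture position (assumed layout).
        slope       : pixels of shift per unit of (u, v) offset; changing it
                      moves the synthetic focal plane.
        """
        U, V, S, T = light_field.shape
        u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0      # centre of the aperture
        out = np.zeros((S, T))
        for u in range(U):
            for v in range(V):
                # Shift each view in proportion to its distance from the
                # aperture centre, then accumulate.  np.roll stands in for
                # proper sub-pixel interpolation in this toy version.
                du = int(round(slope * (u - u0)))
                dv = int(round(slope * (v - v0)))
                out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
        return out / (U * V)

    # Toy usage: one random "capture", two different focal planes after the fact.
    lf = np.random.rand(9, 9, 64, 64)
    near = refocus(lf, slope=1.0)
    far = refocus(lf, slope=-1.0)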

Yes, you CAN click on online images and see the effect. They bundled the ‘recompute’ software into the web pictures…

So that’s all I’ve got for you on this from an “understanding” point of view. From here on out it’s just links. Well, that, and “I Want One!”…

The Links

A sample of images and sort of overview page:

http://dtechdaily.blogspot.com/2012/03/lytro-most-social-camera-ever-made.html

The Stanford Tech page:

http://graphics.stanford.edu/projects/lightfield/

The Lytro company page and the camera they are selling:

http://www.lytro.com/

A fairly productive web search on the topic (using a search engine that doesn’t spy on you):

http://duckduckgo.com/?t=ous&q=light+field+camera

A web search on the Lytro camera that put an explanation in the top panel that I’ve tried to avoid reading ;-)

http://duckduckgo.com/?t=ous&q=lytro+camera

A link to that story of “How the F.M. works” so that when I’m ready “For that day…”:

http://photo.stackexchange.com/questions/13378/what-are-the-basic-workings-of-the-lytro-light-field-camera#13429

The NPR story:

http://www.npr.org/2011/10/21/141597318/new-camera-focuses-shot-after-its-taken


20 Responses to Lightfield Photography

  1. agesilaus says:

    Nothing new about it. It’s focus stacking and has been around for quite a while. You can do it in Photoshop and there are standalone apps which also do it–Helicon has one. Microscope photographers are especially fond of it. This manufacturer has packaged it in a camera tho. I have heard hardly a comment about it in high end photo forums, mostly yawns. The implementation in this toy-like camera is crude.

  2. Jason Calley says:

    Yes, uber-cool! This is an example of what is called a “plenoptic” camera. http://en.wikipedia.org/wiki/Light-field_camera

    I may be mistaken, but I think that this is NOT the same as focus stacking — but will obviously bow to anyone who knows the subject better than I do.

    Here is my rather idiosyncratic way of looking at it:

    Consider an object in space. Shine a light at it, and the wave front of light hits the object and bounces off. As it does so, the wave front becomes enormously, complexly distorted. Some parts of the wave are now lagging behind, because the part of the object from which they bounced was slightly more distant from the light source. Other parts of the wave lead. Some parts of the wave bounce in one direction, others bounce slightly askew. Varying parts of the wave now interfere constructively, others destructively. You have a real mess of a wave, but that scrambling is what carries the information about the object from which the light bounced. Dennis Gabor realized this, and realized that if you could reconstruct an exact duplicate of this wave front, you could then observe the wave and see the object just as if it were there for the original illumination. Of course what I am describing is a hologram.

    That was over half a century ago, and Gabor’s method is most suited for implementation with monochromatic light and chemical films. How boring! How 20th Century! How…. how ANALOG!

    Enter plenoptic cameras.

    Digital cameras, of course, do not actually capture pictures. They only capture discrete pixels. (Of course the same is true for film — just the pixels are crystals of silver compounds, and are very fine.) Still, if the pixels are fine enough, no one seems to notice. Good enough for practical purposes. So… can we do the same sort of “close enough” digital version of holographic wave front reconstruction? You bet. That is what a plenoptic camera does.

    Consider a normal convex lens. It forms a projected image at its focus. How does it do that? By taking the wavefront present at its front surface, and dividing it into various components based on their directional attributes. In other words, light from THAT direction gets gathered together and sent to one particular spot at the prime focus. Light from that OTHER direction gets gathered together and sent to some other spot at the prime focus, etc. Of course a pin hole will do the same, but a lens will gather a larger wave front. OK, pretty straightforward so far.

    In the real world though, there is a bit more information encoded in the image at the prime focus. Because the lens itself subtends some finite diameter, and because the objects near the camera are not at an infinite distance, the directional information of light from a specific spot on the object actually covers a (relatively small but important) range of values. This circle of confusion, this smearing of directional information, means that the image of any given spot on the object will be slightly smeared at the prime focus.

    In a plenoptic camera the micro lens array at the prime focus allows us not only to capture the large scale directional information of the wave front, but also to examine the sensor data at adjacent microlenses and determine how much smearing (and hence distance to objects) we have measured. By combining all the data we can computationally reconstruct the large scale properties of the entire wave front.

    Assume for the sake of example that we have constructed our camera such that the central part of each individual microlens records the light beams coming through the central part of the primary lens. If we use those central pixels to reconstruct our picture, we get a crisp, clear picture. As we incorporate the pixel information that is increasingly off center of each microlens’ CCD array, we blur the image, but do so in a way which reconstructs distance information, not direction. If your software is good enough, you can choose which distance information to enhance and present, or which directional information to emphasize.

    What you have is an image which may be manipulated to give close focus, far focus, large or small depth of field, or even (if we separate left and right parts of the microarray) 3D images.

    Holograms — wonderfully elegant analog concept. Lightfield cameras — a pixelated and digital version of the same concept.

    It is a brilliant concept.
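
    A minimal sketch of the “central pixel of each microlens” idea above: if each microlens covers an n x n patch of sensor pixels, then picking the same pixel position under every microlens yields one viewpoint (sub-aperture) image. The patch size and mosaic layout here are assumptions for illustration, not the real geometry of any particular camera.

        import numpy as np

        def sub_aperture_views(raw, n):
            """Slice a raw plenoptic capture into per-viewpoint images.

            raw : 2-D array whose pixels fall into n x n patches, one patch
                  per microlens (assumed layout for illustration).
            n   : pixels per microlens along each axis.

            Returns an (n, n, H//n, W//n) array: views[i, j] is the image made
            from pixel (i, j) under every microlens -- the scene as seen
            through one small region of the main lens.
            """
            H, W = raw.shape
            assert H % n == 0 and W % n == 0
            # Make the within-microlens pixel index the leading axes.
            return raw.reshape(H // n, n, W // n, n).transpose(1, 3, 0, 2)

        # Toy usage: the central-pixel view is the crisp "small aperture" image,
        # off-centre views correspond to other parts of the main lens.
        raw = np.random.rand(640, 640)
        views = sub_aperture_views(raw, n=10)
        central = views[5, 5]
        left_edge = views[5, 0]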

  3. Agesilaus says:

    Well maybe. At the price point of this Lytro I don’t believe any heavy number crunching is going on. They aren’t saying what the exact technology is but it sounds like they are using a micro lens array to generate 11 different focus points and then stack those. As you scroll thru the focal planes it just shows you the stack one by one. So it isn’t even focus stacking which gives you one image with extreme depth of field.

    If you’ve looked at these images they have a very limited range of depth of field from the first plane to the last.

  4. Paul Hanlon says:

    Ok, now I’ll show my ignorance. Are these the same sort of photos we get when doing Google Streetview?

  5. Chuckles says:

    Photogrammetry gone mad. It’s bad enough working with one pair of stereo images :)

  6. adolfogiurfa says:

    I would prefer a digital camera with an extended spectrum of sensitivity. Are there any out there?

  7. Agesilaus says:

    Do you mean something like a camera that can use infra-red or ultraviolet? You can use many standard cameras with an infra-red filter but exposures are very long. And you can have many cameras converted to have greater infra-red sensitivity. Life Pixel is one company that does this.

    As for ultraviolet, camera sensors are not sensitive to UV. There are specialty cameras available for UV use tho. I think Fuji makes or made one. Optics are a problem with UV since some glass is not transparent to UV. Lenses can be a lesser problem with IR.

  8. Verity Jones says:

    I’m happy to be totally ignorant of the how and just admire what it can do. Very cool.

  9. Bruce Ryan says:

    I’m very interested in this as it recreates the notion of viewing pictures like Ford did in “Blade Runner”.

    From what I have read though, the only method of working the photos is with in-house software. And… it sounds like the promise is quicker than the updates.
    With the notoriety maybe that will change.

  10. Jason Calley says:

    @ Bruce Ryan “I’m very interested in this as it recreates the notion of viewing pictures like Ford did in “Blade Runner”.”

    Still my favorite science fiction movie. :) If you are also a fan, you might be interested in reading this extraordinary speech http://deoxy.org/pkd_how2build.htm given by author Philip K. Dick. All I can say is “He was not a normal person.”

    But to the photos — a plenoptic camera does allow you to take some liberties with focus, depth of field and even point of view just as in Blade Runner, but the limits are a bit more confining in the real world. No matter how good your electronics, your software and your optics, you still cannot see around corners. You are strictly limited to whatever you would have been able to see through the front lens. If you wish to change point of view, you can do so, but only as far as a view from the far left, far right, top and bottom of the lens.

    By the way, mentioning holograms again, a plenoptic camera would be almost ideal for taking photos which could be converted into holograms, a variety referred to as Leslie stereograms (named after the inventor’s girlfriend, a young woman named “Stereogram.” No, just a joke, she was named “Leslie.”) Again, the point of view of the hologram would duplicate that of a window positioned and sized like the camera lens.

  11. Chuckles says:

    AdolfoG, are you talking about systems like the Foveon sensors in some of the Sigma cameras, or about hyperspectral scanners?

  12. david says:

    When bringing the background into focus, is it necessary in this process to lose the foreground?

  13. Jason Calley says:

    @ david
    “When bringing the background into focus, is it necessary in this process to lose the foreground?”

    No, not necessary at all, which is one of the benefits. With conventional photography, the normal way to keep both foreground and background in sharp focus would be to use a high f ratio, that is to say, use a lens which has a small diameter compared to its focal length. The ultimate version of this is a pinhole camera, which has an f ratio in the thousands. It is so resistant to poor focus that even no lens at all still works. Unfortunately, high f number lenses produce an image which is not as bright as low f number ones, so the photographer has to increase the exposure time to compensate. The light field style camera can use a low f number (i.e., a short exposure which still gives adequate brightness) to gather the data, then use software to unscramble some of the blur that would otherwise be there. Since the “de-blurring” is taking place in a processor, you have greater choice on which part of the data gets de-blurred.
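
    To put rough numbers on that trade-off, here is a small sketch using the standard hyperfocal distance and exposure formulas; the 50 mm focal length and 0.03 mm circle of confusion are arbitrary example values, not tied to any particular camera.

        def hyperfocal_mm(focal_mm, f_number, coc_mm=0.03):
            """Hyperfocal distance: focus here and everything from roughly half
            this distance out to infinity is acceptably sharp."""
            return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

        for n in (2.0, 16.0):
            h_m = hyperfocal_mm(50.0, n) / 1000.0   # 50 mm lens, result in metres
            exposure = n ** 2                       # relative exposure time (f/1 = 1)
            print(f"f/{n:g}: hyperfocal ~{h_m:.1f} m, needs ~{exposure:g}x the light")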

    There is a common software tool called “unsharp masking” which is used for deblurring conventional digital photos. I do not know for certain, but I would speculate that the plenoptic software uses a similar technique, only the data is consolidated from adjacent microlens array elements instead of adjacent pixels.

    By the way, speaking of E.M. and his experience of “FM”, when I first heard of unsharp masking, I had that “Ah! FM!” moment. If you are not familiar with how it works, here is the (admittedly simple) explanation I got.

    Suppose you have a digital image, one which you wish to make sharper and bring out some details. Imagine the image from a pixel point of view — a big rectangle made up of millions of pixels, rows and rows, each containing some particular number representing the amount of light it has received. OK, if you had TWO such images and just added all the pixel values from one into the values of the other, you would have “ADDED” the pictures together and you would see both images together, overlapping each other. Seems obvious.

    Now, back to our blurry image. Take the value of each pixel in it, and average that value with its neighbors (and you do not have to do an arithmetic average — it could be some arbitrary function based on your particular circumstance and needs, but average is close enough). OK, what you have created is a blurrier image; you have smeared the data out and reduced the resolution. How does that help?! Easy. Don’t add, but rather SUBTRACT the new blurrier image from the original. (And yes, you will need to do some fairly obvious adjustments on the various values to keep the original brightness, color balance, etc.) So, you have subtracted a blurry image from your original image and you get a SHARPER image.

    Maybe I am simple minded, but to me that is FM! And yes, there are ways to do the same process with old fashioned photography and film. Ask the CIA boys who worked with the 1960 generation of spy satellites.
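
    A textbook sketch of the process just described, in Python (numpy and scipy are my choice of tools here; the blur radius and gain are arbitrary illustration values). Adding the kept detail back to the original is the same as subtracting the blurred copy from a brightened original, which is the “subtract and adjust” step described above.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def unsharp_mask(image, radius=2.0, amount=1.0):
            """Classic unsharp masking: blur a copy, keep the difference,
            add that difference back to the original."""
            blurred = gaussian_filter(image, sigma=radius)  # the deliberately blurry copy
            detail = image - blurred                        # what blurring threw away: the edges
            return image + amount * detail                  # original with the detail re-emphasised

        # Toy usage
        img = np.random.rand(256, 256)
        sharper = unsharp_mask(img, radius=2.0, amount=1.5)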

  14. Pascvaks says:

    The ‘spices of life’ come in many flavors, but there is a limit, and over time they become few and far between indeed. Savor every moment, make them last as long as possible. When you see the look of discovery in the eyes of another, or hear it in his voice, pause, be quiet, watch, remember, and envy. ‘This too will pass’, so the slave says to the victor in his moment of glory before the people of Rome. Quiet! (the old man said, as he finished typing, he was about to press the ‘post comment’ key when he reached for his tea cup and took another sip –a little bigger than normal sip– and swallowed.. the tea went straight into his windpipe and he exploded all over the keyboard and monitor, minutes later, after regaining the use of his lungs, cleaning himself, etc., etc., he finally typed this and pushed “Post Comment”… one thing at a time, old man, one thing at a time, life is a circle.;~(

  15. Pascvaks says:

    PS: cough cough.. I have to say something unrelated to the topic at hand but this IS the place for it. I wandered off and spent a number of days trying to unravel sunspots, cosmic rays, sea surface temperature anomalies, and weather variation. I got nowhere but did learn a number of things. One of them was that if you do that you get behind at Chiefio and it takes a long while to catch back up to you all. I finally made it. (Now if I can only learn, once again, how to push ‘Post Comment’ and swallow ;-)

  16. Chuckles says:

    DPReview gives some thoughts:

    http://www.dpreview.com/reviews/lytro

    Perhaps we could call it the point and crunch camera?

  17. Verity Jones says:

    Damn! I wrote a comment on March 1st, hit ‘post comment’ then switched off and it didn’t go through. I think I was only saying that I was happy to be overawed and ignorant about this technology and just say “very cool”.

  18. E.M.Smith says:

    @Pascvaks:

    You don’t have to go anywhere… I was right here, but putting together a daily posting, and then looked at this thread and found myself a dozen comments behind… It’s pretty bad when you can’t keep up with yourself ;-)

    Note to self: Push buttons, THEN tea… ;-)

    @Verity:

    For some reason several of yours were in the SPAM queue… I fished them out…

    The prior one is now ‘up thread’ a ways….

    Despite my best efforts, I seem to be figuring out how it works and some of the magic is fading… but it IS a joy to have a bit of it again, even if only for a while…

    One thing I’m wondering, now, though: Might insects be using something like this with their “compound eyes”? Maybe not as ‘primitive’ as we have thought…

    @chuckles:

    The review answered one of my questions. 1.2 Mpixels. I knew it would take a resolution hit…

    @Adolfo:

    There is a ‘black art’ of removing the IR filter from digital cameras to make DIY IR range. Don’t know about UV… You can find only conversion plans / steps…

    @David:

    I want to think that you could, by added processing, get it all in focus… but that would likely take some improved software (allowing you to mark areas, focus and freeze them, then repeat on the next area)… The information is all there, it’s just what processing you do…

    @Jason Calley:

    Oh No…. 2 FM moments in the same page…

    @Paul Hanlon:

    I think Google Street View is just regular lenses / cameras.

    But I’d bet someone is working on it now ;-)

    @Bruce Ryan:

    The difficulty for the Blade Runner aspect is that they had LOTS of resolution. This camera has very little. The ‘change focus’ feature comes at the cost of resolution…

    But by Blade Runner days…. it might just happen ;-)

    @All

    I wonder how hard it would be to make spy sats that took this kind of picture. Some kind of giant synthetic aperture thing… so a giant effective lens, long effective exposure time, and ‘any focus or POV you like’? Hmmmm….

  19. Jason Calley says:

    One thing I’m wondering, now, though: Might insects be using something like this with their “compound eyes”?

    Oh! E.M., that sort of wondering is exactly why I love hanging around here. Very good point — and so immediately done by fairly simple wiring changes (neurally speaking) that I would bet right now that there is some truth in it. Evolution sometimes misses a trick, but She catches many more than she drops.

    “The review answered one of my questions. 1.2 Mpixels. I knew it would take a resolution hit… ”

    That resolution hit is an artifact of the process which throws away most of the information when the 2D image is created. If the data were used to generate 3D images, it could all be used — though it is true that the number of pixels presented from any given viewing angle would still be 1.2 Mp.

    Compound eyes and microlens arrays are a very interesting subject. About 20 years ago I was very active in amateur astronomy and spent a lot of hours pondering the subject. I was trying to design a strictly optical way of making night-vision equipment, say something with a brightness increase of 1,000. Sounds simple; after all there are cheap ways of doing so with electronics. In fact, I saw a $49 toy — but still working — night vision scope in a Toys R Us. So, how to do the same thing in optics? My goal was a 1X magnification (i.e., no magnification at all) and 1,000 times the brightness. I never did find a practical, relatively cheap or easily built solution, but the mental exercise led me quite a ways down the micro-lens path.

  20. Jason Calley says:

    Looks like MIT has developed a camera that actually CAN see around corners.

    http://www.petapixel.com/2012/03/20/mit-unveils-camera-that-can-see-around-corners/

    Conceptually speaking, this is a small-scale, optical version of using echoes from explosives to chart underground geological discontinuities.
