> By placing an array of microlenses in front of a camera sensor...

The thesis seems to me to be remarkably lucid. Though I have had just a few minutes to scan the effort, I sense that a microlens array near the sensor plane can capture light ray directionality (which of course requires at least three sensor elements per pixel in each color), amounting to 5 dimensions of input in each color. This provides the wherewithal to compute an image for any desired optical plane. The cost is the reduced definition available at the selected focus - but with sensor arrays passing well beyond 15 Mpixels, this seems not to have been a stumbling block.
https://www.lytro.com/science_inside
Scroll down to the bottom for a link to the Stanford PhD thesis behind this.
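The "compute an image for any desired optical plane" step can be sketched numerically. What follows is a minimal shift-and-add refocusing toy in Python/NumPy, not code from the thesis: it assumes the light field has already been resampled into one small sub-aperture image per microlens direction (u, v), and that integer pixel shifts are good enough for illustration. Varying the `slope` parameter (a made-up name here) moves the synthetic focal plane.

```python
import numpy as np

def refocus(lightfield, slope):
    """Synthetic refocusing by shift-and-add (illustrative sketch).

    lightfield : array of shape (U, V, H, W), one H-by-W sub-aperture
                 view per lens direction (u, v).
    slope      : pixels of image shift per unit lens offset; changing
                 it selects a different synthetic focal plane.
    """
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the
            # array centre, then accumulate. Objects whose parallax
            # matches `slope` line up and come into focus; others blur.
            du = int(round(slope * (u - U // 2)))
            dv = int(round(slope * (v - V // 2)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# A point source seen at the same pixel in every view (i.e. lying on
# the focal plane) stays sharp at slope 0 and blurs at other slopes.
lf = np.zeros((3, 3, 8, 8))
lf[:, :, 4, 4] = 1.0
sharp = refocus(lf, 0.0)
blurred = refocus(lf, 1.0)
```

The averaging over U*V views is also where the resolution trade-off mentioned above shows up: angular samples are bought at the price of spatial samples on the sensor.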
Bob Sciamanda
Physics, Edinboro Univ of PA (Em)
treborsci@verizon.net
http://mysite.verizon.net/res12merh/
_______________________________________________
Forum for Physics Educators
Phys-l@carnot.physics.buffalo.edu
https://carnot.physics.buffalo.edu/mailman/listinfo/phys-l