
A Faster Single-Pixel Camera Opens Up New Possibilities for Industrial Lensless Imaging

CAMBRIDGE, Mass., April 3, 2017 — Compressed sensing is a new computational technique for extracting large amounts of information from relatively few measurements of a signal.

Examples of the compressive ultrafast imaging technique. Courtesy of the MIT Media Lab.

Researchers at Rice University built a camera that could produce 2D images using only a single light sensor rather than the millions of light sensors found in a commodity camera. However, that single-pixel camera needed thousands of exposures to produce a fairly clear image. Now, the Massachusetts Institute of Technology (MIT) Media Lab has improved on the Rice idea, developing a new technique that makes image acquisition using compressed sensing 50 times as efficient and reduces the number of exposures from thousands to only dozens.

Compressed-sensing imaging systems, unlike conventional cameras, don't require lenses, making them potentially useful in harsh environments or in applications that use wavelengths of light outside the visible spectrum. Getting rid of the lens opens new prospects for the design of imaging systems.

"Formerly, imaging required a lens, and the lens would map pixels in space to sensors in an array, with everything precisely structured and engineered," said Guy Satat, a graduate student at the Media Lab. "With computational imaging, we began to ask: Is a lens necessary? Does the sensor have to be a structured array? How many pixels should the sensor have? Is a single pixel sufficient? These questions essentially break down the fundamental idea of what a camera is. The fact that only a single pixel is required and a lens is no longer necessary relaxes major design constraints, and enables the development of novel imaging systems. Using ultrafast sensing makes the measurement significantly more efficient."


The new compressed-sensing technique depends on time-of-flight imaging, in which a short burst of light is projected into a scene, and ultrafast sensors measure how long the light takes to reflect back.
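The depth calculation behind time-of-flight imaging follows directly from the round-trip travel time of the light pulse. Below is a minimal sketch of that conversion; the numbers are illustrative assumptions, not values from the MIT system.

```python
# Minimal sketch of the distance calculation behind time-of-flight imaging.
# The round-trip time measured by an ultrafast sensor is converted to depth;
# the 10 ns example value is purely illustrative.
SPEED_OF_LIGHT = 2.998e8  # meters per second

def depth_from_round_trip(time_seconds: float) -> float:
    """Return the distance to a reflecting surface given the round-trip time."""
    # Light travels to the scene and back, so halve the total path length.
    return SPEED_OF_LIGHT * time_seconds / 2.0

# A pulse returning after 10 nanoseconds corresponds to a surface about 1.5 m away.
print(depth_from_round_trip(10e-9))
```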

While the technique uses time-of-flight imaging, one of its potential applications is improving the performance of time-of-flight cameras. It could have implications for a number of other projects such as a camera that can see around corners and visible-light imaging systems for medical diagnosis and vehicular navigation.

The reason the single-pixel camera can make do with one light sensor is that the light that strikes it is patterned. One way to pattern light is to put a filter in front of the flash illuminating the scene. Another way is to bounce the returning light off of an array of tiny micromirrors, some of which are aimed at the light sensor and some of which aren't.

The sensor makes only a single measurement — the cumulative intensity of the incoming light. If it repeats the measurement enough times, and if the light has a different pattern each time, software can deduce the intensities of the light reflected from individual points in the scene.
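The sketch below illustrates that idea under simplifying assumptions: random on/off illumination patterns, a small sparse scene, and a basic iterative soft-thresholding solver. It is a generic compressed-sensing toy example, not the Rice or MIT reconstruction pipeline.

```python
# A minimal sketch of single-pixel imaging with compressed sensing, assuming
# random binary illumination patterns and a sparse scene. Pattern counts and
# solver settings are illustrative assumptions, not the published method.
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 64          # unknowns: a tiny 8x8 "scene", flattened
n_measurements = 24    # far fewer exposures than pixels

# Sparse ground-truth scene: a few bright points on a dark background.
scene = np.zeros(n_pixels)
scene[rng.choice(n_pixels, size=4, replace=False)] = rng.uniform(0.5, 1.0, 4)

# Each exposure uses a different random on/off pattern (e.g., a micromirror mask).
patterns = rng.integers(0, 2, size=(n_measurements, n_pixels)).astype(float)

# The single sensor records only the cumulative intensity for each pattern.
measurements = patterns @ scene

# Recover the scene with iterative soft-thresholding (ISTA), a basic sparse solver.
step = 1.0 / np.linalg.norm(patterns, 2) ** 2
threshold = 0.01
estimate = np.zeros(n_pixels)
for _ in range(2000):
    residual = patterns @ estimate - measurements
    estimate = estimate - step * (patterns.T @ residual)
    estimate = np.sign(estimate) * np.maximum(np.abs(estimate) - step * threshold, 0.0)

print("reconstruction error:", np.linalg.norm(estimate - scene))
```

With enough differently patterned exposures, the solver pins down which scene points produced the light the sensor saw, even though each individual measurement is just one number.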

Compressed sensing works better the more pixels the sensor has: the farther apart the pixels are, the less redundancy there is in the measurements they make. And, of course, the more measurements the sensor performs, the higher the resolution of the reconstructed image.

The MIT research has been published in IEEE Transactions on Computational Imaging (doi: 10.1109/TCI.2017.2684624).

Published: April 2017
Tags: cameras, filters, lenses, mirrors, MIT Media Lab, Massachusetts Institute of Technology, lensless imaging, imaging, Research & Technology, education, optics, industrial, Guy Satat, single-pixel camera, Technology News
