Hyperspectral Stripe Projector Combines 3D, Spectroscopic Imaging
A projector that produces images beyond the capture capabilities of traditional cameras could, when paired with a monochrome sensor array and sophisticated programming, enable 3D spectroscopy on the fly. On its own, the compact Hyperspectral Stripe Projector (HSP) engineered by Rice University researchers supports a new method for collecting the spatial and spectral information needed for self-driving cars, machine vision applications, corrosion detection, and more.
In real time, the HSP delivers 4D information from an image: three spatial dimensions and one spectral. That information can currently be acquired using multiple modulators and bright light sources; with their new projector, the Rice University researchers obtained it using a light source of ordinary brightness and standard optics.
HSP compresses data from each of its pixels and reconstructs the data into a 3D map with spectral information. This information can incorporate hundreds of colors, ultimately revealing both shape and material composition of an object. RGB cameras commonly provide only three spectral channels; a hyperspectral camera delivers spectra in many channels.
The researchers captured red at around 700 nm and blue at around 400 nm, with spectral channels every few nanometers (or less) in between. The result was fine spectral resolution and a correspondingly fuller understanding of the imaged scene.
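The channel count implied by that sampling can be illustrated with a quick calculation (the 3 nm spacing below is an assumed, representative value, not the instrument's exact specification):

```python
import numpy as np

# Toy arithmetic: sampling the 400-700 nm visible range every 3 nm
# (assumed spacing) yields on the order of a hundred spectral channels,
# versus three for a conventional RGB camera.
wavelengths = np.arange(400, 700 + 1, 3)  # nm
print(len(wavelengths))  # 101
```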
Patterns adorn a static model used to test Rice University’s HSP, which combines spectroscopic and 3D imaging. Barcode-like black-and-white patterns are displayed on the DMD to generate the hyperspectral stripes. Courtesy of the Kelly Lab.
Because HSP encodes the depth and hyperspectral measurements at once in a single, efficient process, a system using HSP can use a monochrome camera instead of a hyperspectral camera — the latter of which is a more expensive (and previously necessary) component.
HSP uses an off-the-shelf digital micromirror device (DMD) to project patterned stripes onto a surface. Passing the white-light projection through a diffraction grating system separates the overlapping patterns into distinct colors. Each color reflects off the scene to the monochrome camera, which assigns a numerical gray level to each pixel. A pixel struck by multiple color stripes can register multiple levels, which are recombined into an overall spectral value for that part of the captured object.
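The recombination step can be sketched as a linear inverse problem. The following is a minimal toy model, not the authors' actual reconstruction code: each DMD pattern is assumed to send a known combination of spectral stripes to a pixel, and the camera records one gray level per pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy model: each row of `codes` is one DMD pattern, marking which
# spectral stripes it directs onto the pixel.
num_bands = 8                                 # spectral stripes per pixel
codes = np.vstack([
    np.eye(num_bands),                        # guarantees a solvable system
    rng.integers(0, 2, size=(4, num_bands)),  # extra binary stripe codes
])

true_spectrum = rng.random(num_bands)         # pixel's unknown reflectance
gray_levels = codes @ true_spectrum           # one gray level per pattern

# With enough known patterns, the per-pixel spectrum is recovered by
# solving the linear system (ordinary least squares in this noiseless toy).
recovered, *_ = np.linalg.lstsq(codes, gray_levels, rcond=None)
print(np.allclose(recovered, true_spectrum))  # True
```

In practice, compressive-sensing reconstructions recover the spectrum from fewer patterns than bands by exploiting sparsity; the least-squares solve here only illustrates the multiplexing principle.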
This all occurs in a compact configuration, said Yibo Xu, lead author of the study describing HSP. Folding the light path back through the same diffraction grating and lens keeps the optical design from growing to the point that the system loses light or performance.
“The single DMD allows us to keep the light we want and throw away the rest,” Xu said.
A 3D point cloud of objects reconstructed by the HSP-based imaging system. The monochrome camera also captures spectral data for each point to provide not only the target’s form, but also its material composition. Courtesy of the Kelly Lab.
Many of the applications HSP can support rely on visible light, though the finely tuned spectra can extend beyond the visible range. The multiplexed fine-band spectra that objects reflect to the sensor help identify a material’s chemical composition. Simultaneously, distortions in the projected patterns are reconstructed into 3D point clouds: in effect, a picture of the captured target that carries far more data than a plain camera snapshot.
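The depth side of that reconstruction works like conventional structured-light triangulation: a stripe landing on a raised surface appears shifted in the camera image, and the shift maps to depth. A minimal sketch, with an assumed projector-camera geometry (the baseline and focal length below are illustrative, not the system's values):

```python
# Toy structured-light triangulation: a stripe's apparent shift in the
# camera image (disparity, in pixels) maps to depth via
#     depth = baseline * focal_length / disparity
BASELINE_M = 0.10    # projector-camera separation in meters (assumed)
FOCAL_PX = 800.0     # camera focal length in pixels (assumed)

def depth_from_disparity(disparity_px: float) -> float:
    """Return depth in meters for an observed stripe shift in pixels."""
    return BASELINE_M * FOCAL_PX / disparity_px

# Larger stripe shifts correspond to nearer surface points.
for d in (40.0, 20.0, 10.0):
    print(depth_from_disparity(d))  # 2.0, 4.0, 8.0 meters
```

Applying this per pixel across the distorted stripe image yields the 3D point cloud; attaching each point's recovered spectrum gives the combined spatial-spectral output.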
“I can envision this technology in the hands of a farmer, or on a drone, to look at a field and see not only the nutrients and water content of plants but also, because of the 3D aspect, the height of the crops,” said Kevin Kelly, an associate professor of electrical and computer engineering at Rice’s Brown School of Engineering. “Or perhaps it can look at a painting and see the surface colors and texture in detail, but with near-infrared also see underneath to the canvas.”
In the automotive sector, Kelly said he envisioned building HSP into car headlights that could then differentiate between an object and a person.
Size reduction, as well as adaptation to enable compressive video capture, are the next steps Kelly identified for the Rice University research lab working with the device.
The National Science Foundation funded the research. The work by Kelly, Xu, and Anthony Giljum was published in Optics Express (www.doi.org/10.1364/OE.402812).