Optical Processor Captures Scenes in Spatially Incoherent Light
A research team led by Professor Aydogan Ozcan at the University of California, Los Angeles (UCLA), developed a deep-learning-based approach to designing spatially incoherent diffractive optical processors. The method provides a way to build all-optical visual processors that work under natural light. After training through deep learning, the diffractive optical processors can transform any input light intensity pattern into the desired output pattern.
The researchers believe that their design for diffractive optical processors will find broad application, in addition to contributing to the quest for a fast, energy-efficient alternative to electronic computing.
Since natural lighting conditions typically involve spatially incoherent light, the ability to process visual information under incoherent light is crucial for applications that require ultrafast processing of natural scenes, like autonomous vehicles. The capability to process information under incoherent light is also useful for high-resolution microscopy applications that include spatially incoherent processes such as fluorescence light emission from samples.
The diffractive optical processors are made from structurally engineered surfaces that can be fabricated using lithography or 3D-printing techniques. The structured surfaces harness the successive diffraction of light to perform linear transformations of the input light field without any external digital computing power.
Universal linear intensity transformations using spatially incoherent diffractive processors. Courtesy of the Ozcan Lab at UCLA.
The researchers used numerical simulations and deep learning, guided by examples of input-output intensity profiles, to demonstrate that, under spatially incoherent light, the diffractive optical processors can be trained to perform any arbitrary linear transformation of time-averaged intensities between the processor's input and output fields of view.
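To see why the transformation acts linearly on time-averaged intensities, note that a fixed diffractive processor applies some complex-valued field transform to the input light; under spatially incoherent illumination the input phases fluctuate randomly, the cross terms average out, and the time-averaged output intensity reduces to a linear function of the input intensities, governed by the element-wise squared magnitudes of that field transform. The following minimal sketch, written for this article in JAX (the random matrix A is only a stand-in for a real diffractive processor's field transform, not the team's simulation code), verifies this numerically:

```python
# Numerical check: with spatially incoherent light, the time-averaged output
# intensity equals a linear transform of the input intensities, given by the
# element-wise squared magnitudes of the coherent field transform A.
# (A is a random stand-in here, not a real diffractive processor model.)
import jax
import jax.numpy as jnp

n = 16  # pixels at the input/output fields of view in this toy example
kr, ki, kI, kphi = jax.random.split(jax.random.PRNGKey(0), 4)
A = jax.random.normal(kr, (n, n)) + 1j * jax.random.normal(ki, (n, n))
I_in = jax.random.uniform(kI, (n,))  # nonnegative input intensity pattern

def one_realization(k):
    # Incoherent input: fixed amplitudes, independent random per-pixel phases.
    phi = jax.random.uniform(k, (n,), minval=0.0, maxval=2 * jnp.pi)
    field = jnp.sqrt(I_in) * jnp.exp(1j * phi)
    return jnp.abs(A @ field) ** 2  # instantaneous output intensity

# Time average over many random-phase realizations...
I_avg = jax.vmap(one_realization)(jax.random.split(kphi, 20000)).mean(axis=0)
# ...matches the purely linear intensity transform (|A|^2) @ I_in.
I_lin = (jnp.abs(A) ** 2) @ I_in
print(float(jnp.max(jnp.abs(I_avg - I_lin) / I_lin)))  # small relative error
```

The Monte Carlo average over random-phase realizations converges to (|A|^2) applied to the input intensities, which is exactly the kind of linear intensity transformation the diffractive layers are trained to shape.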
The researchers also designed spatially incoherent diffractive processors that linearly process intensity information at multiple illumination wavelengths simultaneously. They demonstrated that, using spatially incoherent broadband light, it is possible to perform multiple linear intensity transformations at once, with a different transformation assigned to each illumination wavelength.
Additionally, the researchers numerically demonstrated a diffractive network design that performed all-optical classification of handwritten digits under spatially incoherent illumination, achieving a test accuracy greater than 95%.
The team's numerical analyses showed that phase-only diffractive optical processors with shallow architectures (for example, processors with only one trainable diffractive surface) are unable to accurately approximate an arbitrary intensity transformation, regardless of the total number of diffractive features available for optimization. In contrast, the researchers found that phase-only diffractive optical processors with deeper architectures (processors with multiple trainable diffractive surfaces, one following another) can perform an arbitrary linear intensity transformation under spatially incoherent illumination with negligible error.
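This depth dependence can be reproduced in a toy optimization. The sketch below, again a hypothetical model written for this article rather than the paper's code (a unitary DFT matrix stands in for free-space propagation, and the pixel count, layer count, and learning rate are illustrative assumptions), trains a stack of phase-only layers by gradient descent so that the squared magnitudes of the end-to-end field transform match an arbitrary nonnegative target; raising n_layers above 1 noticeably lowers the final approximation error:

```python
# Toy gradient-descent training of phase-only "diffractive layers" so that
# the intensity transform |A(theta)|^2 approximates an arbitrary nonnegative
# target matrix. All modeling choices (DFT propagation, sizes, learning
# rate) are illustrative assumptions, not the paper's parameters.
import jax
import jax.numpy as jnp

n, n_layers, steps, lr = 16, 4, 2000, 0.2
F = jnp.fft.fft(jnp.eye(n)) / jnp.sqrt(n)  # unitary stand-in for diffraction

def coherent_matrix(thetas):
    # Alternate phase masks (diag(exp(i*theta))) with propagation (F).
    A = F
    for theta in thetas:
        A = F @ (jnp.exp(1j * theta)[:, None] * A)
    return A

def loss(thetas, H_target):
    H = jnp.abs(coherent_matrix(thetas)) ** 2  # incoherent intensity transform
    return jnp.mean((H - H_target) ** 2)

k1, k2 = jax.random.split(jax.random.PRNGKey(1))
H_target = jax.random.uniform(k1, (n, n))   # arbitrary nonnegative target
H_target = H_target / H_target.sum(axis=0)  # roughly energy-conserving
thetas = jax.random.uniform(k2, (n_layers, n), maxval=2 * jnp.pi)

grad_fn = jax.jit(jax.grad(loss))
for _ in range(steps):
    thetas = thetas - lr * grad_fn(thetas, H_target)
print(f"final error: {float(loss(thetas, H_target)):.2e}")
```

In this toy setting, a single phase mask has too few degrees of freedom to shape an arbitrary matrix of squared magnitudes, whereas stacking several masks with propagation in between gives the optimizer enough freedom to drive the error down, mirroring the shallow-versus-deep finding reported by the team.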
These findings can be used to build all-optical information processing and visual computing systems that use spatially and temporally incoherent light, for example, to visualize natural scenes. Diffractive optical processors also have the potential to support applications in computational microscopy and incoherent imaging that feature spatially varying engineered point spread functions.
The research was published in Light: Science & Applications (www.doi.org/10.1038/s41377-023-01234-y).