Nanophotonic Processor with Optical Camera Could Improve AI Efficiency
Increasing demand for high-performance AI has engendered interest in using photonic processing instead of conventional electronic processing for AI computations. Optical computing has the potential to boost AI’s computational throughput, processing speed, and energy efficiency by orders of magnitude.
But first, optical neural networks must achieve recognition accuracy on par with that of electronic neural networks. A nanophotonic neural network developed by researchers at the University of Washington and Princeton University aims to close this gap.
The researchers embedded parallelized optical computation into flat camera optics just 4 mm in length. The camera performs neural network computations during image capture, before the light is recorded on the sensor. The team developed a spatially varying convolutional network, learned through a low-dimensional reparameterization, and implemented it inside the camera lens using a nanophotonic array with angle-dependent responses.
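The core architectural idea, a spatially varying convolution expressed through a low-dimensional reparameterization, can be pictured with a short sketch: instead of learning an independent kernel at every image location, each location mixes a small shared basis of kernels. The Python sketch below is a conceptual illustration of that reparameterization only, not the authors' implementation; all names and shapes are hypothetical.

```python
import numpy as np
from scipy.signal import convolve2d

def spatially_varying_conv(img, basis_kernels, weight_maps):
    """Spatially varying convolution via a low-dimensional
    reparameterization: every pixel mixes a small shared basis of
    kernels rather than learning its own independent kernel.

    img           : (H, W) input image
    basis_kernels : (K, k, k) shared basis kernels
    weight_maps   : (K, H, W) per-pixel mixing weights
    """
    out = np.zeros(img.shape, dtype=float)
    for kernel, weights in zip(basis_kernels, weight_maps):
        # Each basis kernel is applied globally; its weight map then
        # varies the kernel's contribution across the field of view.
        out += weights * convolve2d(img, kernel, mode="same", boundary="symm")
    return out

# Toy usage: three basis kernels mixed differently across a 64x64 image.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
basis = rng.standard_normal((3, 5, 5))
mix = rng.random((3, 64, 64))
features = spatially_varying_conv(img, basis, mix)  # shape (64, 64)
```

Because only a handful of basis kernels are learned, the parameter count stays small even though the effective kernel differs across the field of view, which is what makes a low-dimensional parameterization attractive for encoding into fixed optics.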
The nanophotonic neural network achieves image classification accuracies of 72.76% on the CIFAR-10 dataset and 48.64% on the 1000-class ImageNet dataset, shrinking the gap between photonic and electronic AI while generalizing to diverse vision tasks without requiring new optics to be fabricated.
“This is a completely new way of thinking about optics, which is very different from traditional optics,” professor Arka Majumdar said. “It’s an end-to-end design, where the optics are designed in conjunction with the computational block. Here, we replaced the camera lens with engineered optics, which allows us to put a lot of the computation into the optics.”
The compact camera prototype, shown here, uses optics for computing, significantly reducing power consumption and enabling the camera to identify objects at the speed of light. Courtesy of Ilya Chugunov/Princeton University.
The approach to computer vision demonstrated by the researchers’ prototype could be used, for example, in autonomous vehicles, robotics, medical devices, and smartphone applications. “Nowadays, every iPhone has AI or vision technology in it,” professor Felix Heide said.
With a compact footprint and CMOS sensor compatibility, the optical system is both a photonic accelerator and an ultracompact computational camera.
Instead of a traditional lens, the camera uses an array of 50 metalenses — flat, lightweight optical components that manipulate light — to pick up different features of the object. The metalenses also function as an optical neural network.
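One way to picture this, under the simplifying assumption that each metalens acts as a fixed linear filter, is to model every lens as convolving the scene with its own point spread function, so a single exposure produces 50 feature maps in parallel. This hypothetical Python sketch (names and shapes are illustrative, not from the paper) captures that intuition:

```python
import numpy as np
from scipy.signal import fftconvolve

def optical_frontend(scene, psfs):
    """Model each metalens as a fixed convolution channel: light
    passing through a lens blurs the scene with that lens's point
    spread function (PSF), so one exposure yields one feature map
    per lens with no electronic computation.
    """
    return np.stack([fftconvolve(scene, psf, mode="same") for psf in psfs])

# Toy usage: 50 random PSFs stand in for 50 engineered metalenses.
rng = np.random.default_rng(1)
scene = rng.random((128, 128))
psfs = rng.random((50, 9, 9))
feature_maps = optical_frontend(scene, psfs)  # shape (50, 128, 128)
```

In the real device the feature extraction happens in light propagation itself; only the final, much smaller classification stage needs to run electronically on the sensor readout.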
“Our idea was to use some of the work that Arka pioneered on metasurfaces to bring some of those computations that are traditionally done electronically into the optics at the speed of light,” Heide said. “By doing so, we produced a new computer vision system that performs a lot of the computation optically.”
The accuracy of the system is comparable to that of neural networks running on conventional electronic hardware. Because it performs many of its computations at the speed of light, it can identify and classify images more than 200 times faster than neural networks that use conventional computer hardware. And because the optics in the camera operate on light rather than electricity, power consumption is significantly reduced.
Instead of using a traditional camera lens made of glass or plastic, the optics in the camera rely on an array of 50 metalenses. These metalenses fit into a compact optical computing chip, shown here. Courtesy of Ilya Chugunov/Princeton University.
Heide and his students at Princeton provided the design for the optical chip-based camera prototype. Majumdar helped engineer the camera, and he and his students fabricated the chip in the Washington Nanofabrication Laboratory.
Majumdar and Heide said that they intend to continue their collaboration and are planning further iterations of the prototype to make it more relevant for autonomous navigation in self-driving vehicles. They also plan to work with more complex data sets and problems that require greater computing power to solve, such as object detection (i.e., locating specific objects within an image).
“Right now, this optical computing system is a research prototype, and it works for one particular application,” Majumdar said. “However, we see it eventually becoming broadly applicable to many technologies. That, of course, remains to be seen, but here, we demonstrated the first step. And it is a big step forward compared to all other existing optical implementations of neural networks.”
The nanophotonic processor and compact optical camera could improve the recognition performance of optical neural networks, expanding their capacity for deep learning tasks.
“There are really broad applications for this research, from self-driving cars, self-driving trucks, and other robotics to medical devices and smartphones,” Heide said. “This work is still at a very early stage, but all of these applications could someday benefit from what we are developing.”
The research was published in Science Advances (www.science.org/doi/10.1126/sciadv.adp0391).