Using a 3D printer, a research team at the UCLA Samueli School of Engineering has created an artificial neural network that can analyze large volumes of data and identify objects at the speed of light. Called a diffractive deep neural network (D2NN), the technology uses the light scattered from an object to identify it. The network is built from passive diffractive layers, designed through deep learning, that work collectively.

The team first designed the network in a computer simulation, then used a 3D printer to fabricate thin polymer wafers, each 8 centimeters square. Each wafer was created with an uneven surface to help diffract the light coming from an object.

The network, composed of a series of polymer layers, works using light that travels through it. Each layer is 8 centimeters square. Courtesy of UCLA Samueli/Ozcan Research Group.

Researchers used terahertz (THz) frequencies to penetrate the 3D-printed wafers. Each layer was composed of tens of thousands of pixels through which light could travel. Each type of object is assigned a pixel, and the light coming from an object is diffracted toward the pixel assigned to that object's type. This allows the D2NN, which comprises a series of pixelated layers, to identify an object in the same amount of time it would take a computer to "see" the object.

Researchers trained the network to learn the pattern of diffracted light that each object produced as the light from that object passed through the device. The training used a branch of artificial intelligence called deep learning, in which machines learn through repetition and over time as patterns emerge. (A simplified simulation sketch of this approach appears at the end of this article.)

"This is intuitively like a very complex maze of glass and mirrors. The light enters a diffractive network and bounces around the maze until it exits. The system determines what the object is by where most of the light ends up exiting," said UCLA professor Aydogan Ozcan.

In experiments, researchers placed images in front of a THz light source, and the D2NN viewed them through optical diffraction. Researchers found that the device could accurately identify handwritten numbers and items of clothing, both of which are image sets commonly used in artificial intelligence studies.

Schematic showing how the device identifies printed text. Courtesy of UCLA Samueli/Ozcan Research Group.

Researchers also trained the device to act as an imaging lens, much as a typical camera lens does.

Because its components can be created by a 3D printer, the D2NN can be made with larger layers, and with more of them, resulting in a device with hundreds of millions of artificial neurons (i.e., pixels). Such larger devices could identify many more objects at the same time or perform more complex data analysis.

The components for the D2NN can be made inexpensively; researchers said the device they created could be reproduced for less than $50.

While this study used light in the THz spectrum, Ozcan said it would be possible to create neural networks that use visible, IR, or other frequencies. A D2NN could also be made using lithography or other printing techniques, he said.

The team believes that its device could find applications in all-optical image analysis, feature detection, and object classification, and could also enable new camera designs and optical components that perform tasks using D2NNs.
For example, a driverless car using the technology could react instantaneously to a stop sign because it could read the sign as soon as it received the light diffracted from it. The technology could also be used to sort through large numbers of objects, such as millions of cell samples, to search for signs of disease.

"This work opens up fundamentally new opportunities to use an artificial intelligence-based passive device to instantaneously analyze data, images and classify objects," said Ozcan. "This optical artificial neural network device is intuitively modeled on how the brain processes information. It could be scaled up to enable new camera designs and unique optical components that work passively in medical technologies, robotics, security or any application where image and video data are essential."

The research was published in Science (doi: 10.1126/science.aat8084).
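To make the training scheme described above concrete, the following is a minimal simulation sketch, not the UCLA team's published code. It assumes a PyTorch implementation with a small 64 × 64 grid, phase-only diffractive layers, angular-spectrum free-space propagation between layers, roughly THz-scale wavelength, pitch, and spacing values chosen only for illustration, and ten detector regions on the output plane whose captured light energy serves as the class score; the detector layout and the random placeholder training data are likewise assumptions.

```python
# Minimal D2NN-style simulation sketch (illustrative; not the published UCLA code).
# Trainable phase-only layers, angular-spectrum propagation between layers, and ten
# detector regions on the output plane whose captured energy gives the class score.
import torch
import torch.nn as nn
import torch.nn.functional as F

N         = 64        # pixels per layer side (assumed, for illustration)
PIXEL     = 400e-6    # pixel pitch in metres (assumed THz-scale value)
WAVELEN   = 750e-6    # ~0.4 THz illumination wavelength (assumed)
DISTANCE  = 0.03      # layer-to-layer spacing in metres (assumed)
N_LAYERS  = 5
N_CLASSES = 10

def angular_spectrum(field, dist, wavelen, pitch):
    """Propagate a complex field over `dist` with the angular spectrum method."""
    n = field.shape[-1]
    fx = torch.fft.fftfreq(n, d=pitch)
    fxx, fyy = torch.meshgrid(fx, fx, indexing="ij")
    arg = torch.clamp(1.0 / wavelen**2 - fxx**2 - fyy**2, min=0.0)  # drop evanescent waves
    transfer = torch.exp(1j * 2 * torch.pi * torch.sqrt(arg) * dist)
    return torch.fft.ifft2(torch.fft.fft2(field) * transfer)

class D2NN(nn.Module):
    def __init__(self):
        super().__init__()
        # One learnable phase mask per 3D-printable layer.
        self.phases = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(N, N)) for _ in range(N_LAYERS)]
        )
        # Ten non-overlapping square detector regions, one per object class.
        det = torch.zeros(N_CLASSES, N, N)
        size = 8
        spots = [(16, 4 + 12 * k) for k in range(5)] + [(40, 4 + 12 * k) for k in range(5)]
        for cls, (r, c) in enumerate(spots):
            det[cls, r:r + size, c:c + size] = 1.0
        self.register_buffer("detectors", det)

    def forward(self, img):                          # img: (B, N, N) input amplitude
        field = img.to(torch.complex64)              # coherent plane-wave illumination
        for phase in self.phases:
            field = angular_spectrum(field, DISTANCE, WAVELEN, PIXEL)
            field = field * torch.exp(1j * phase)    # phase-only diffractive layer
        field = angular_spectrum(field, DISTANCE, WAVELEN, PIXEL)
        intensity = field.abs() ** 2
        # Light collected by each detector region is that class's score.
        return torch.einsum("bij,kij->bk", intensity, self.detectors)

# Training sketch: random placeholder data stands in for upsampled digit images.
model = D2NN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
images = torch.rand(16, N, N)
labels = torch.randint(0, N_CLASSES, (16,))
for step in range(200):
    scores = model(images)
    loss = F.cross_entropy(scores, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print("predicted classes:", scores.argmax(dim=1).tolist())
```

In a physical device of this kind, only the optimized phase patterns would be 3D printed; once fabricated, the classification is carried out entirely by the light propagating through the layers, with no electronic computation in the loop.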