A retinomorphic optical sensor built on ultrathin layers of perovskite semiconductors has demonstrated the ability to perceive changes in its visual field in much the same way as the human eye. When exposed to light, the perovskite layers change from strong electrical insulators to strong conductors. The sensor's ability to mimic the eye makes it potentially amenable to the types of neuromorphic computers expected to power AI in self-driving cars and other advanced image recognition applications, said John Labram, a researcher at Oregon State University.

As opposed to traditional computers that process information sequentially as a series of instructions, neuromorphic computers are designed to emulate the human brain's parallel networks. "People have tried to replicate this in hardware and have been reasonably successful," Labram said. "However, even though the algorithms and architecture designed to process information are becoming more and more like a human brain, the information these systems receive is still decidedly designed for traditional computers." In other words, a computer designed to emulate the human brain would need an image sensor designed to see like a human eye.

The human eye contains about 100 million photoreceptors, yet the optic nerve connects to the brain through only about 1 million connections. Consequently, a good deal of preprocessing and dynamic compression must take place in the retina before the image is transmitted to the brain. The eye's optical circuitry gives greater priority to objects in motion, favoring signals from photoreceptors that detect a change in light intensity. Sensing technologies such as the chips found in digital cameras and smartphones, by contrast, are suited to sequential processing, Labram said; images are scanned across a two-dimensional array of sensors, pixel by pixel, at a set frequency.
Each of those sensors generates a signal whose amplitude varies directly with the intensity of the light it receives, meaning a static image results in a more or less constant output voltage. The retinomorphic sensor, on the other hand, stays relatively quiet under static conditions, registering a short, sharp signal when it senses a change in illumination before reverting to its baseline state. This behavior stems directly from the perovskite layers that make up the sensor. The perovskite is applied in ultrathin layers, only a few hundred nanometers thick, and functions largely as a capacitor that varies its capacitance under illumination. (Capacitors store energy in an electric field.)

"The way we test is, basically, we leave it in the dark for a second. Then we turn the lights on and just leave them on," Labram said. "As soon as the light goes on, you get this big voltage spike. Then the voltage quickly decays, even though the intensity of the light is constant. And that's what we want."

Because the team could not test multiple sensors simultaneously, they measured several individual devices and developed a numerical model that reproduced the sensor's behavior, allowing the researchers to simulate an array of retinomorphic sensors and estimate how a retinomorphic video camera would respond to input stimuli. "We can convert video to a set of light intensities and then put that into our simulation," Labram said. "Regions where a higher voltage output is predicted from the sensor light up, while the lower voltage regions remain dark. If the camera is relatively static, you can clearly see all the things that are moving respond strongly. This stays reasonably true to the paradigm of optical sensing in mammals."

In testing, the researchers fed the simulated sensor array footage of a baseball practice.
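The paper's numerical model is not reproduced here, but the qualitative behavior described, a voltage spike on any change in illumination followed by a decay back to baseline under constant light, can be sketched as a per-pixel leaky temporal high-pass filter. This is a minimal illustration under that assumption; the function name and the decay parameter `tau` are hypothetical, not taken from the published model.

```python
import numpy as np

def retinomorphic_response(frames, tau=0.9):
    """Toy model of a retinomorphic sensor array.

    frames : array of shape (T, H, W), per-pixel light intensity over time.
    tau    : per-frame decay factor (0 < tau < 1), an assumed parameter.

    Each pixel spikes in proportion to the change in its light intensity,
    then decays geometrically toward zero while the intensity is constant,
    so static regions of the scene produce little to no output.
    """
    frames = np.asarray(frames, dtype=float)
    out = np.zeros_like(frames)
    v = np.zeros(frames.shape[1:])  # current "voltage" of each pixel
    for t in range(1, frames.shape[0]):
        # Spike on intensity change; leak toward baseline otherwise.
        v = tau * v + (frames[t] - frames[t - 1])
        out[t] = v
    return out
```

Feeding a video (converted to grayscale intensities) through such a filter makes moving objects light up while static backgrounds stay dark, matching the behavior Labram describes.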
As expected, players in the infield showed up as bright, visible, moving objects, whereas static objects such as the diamond, the bleachers, and even the outfielders faded into darkness. A different test used footage of a bird in flight. The bird flew into view, then disappeared as it stopped at a bird feeder that was otherwise invisible; when the bird flew away, it set the feeder in motion, briefly bringing it into view.

"The good thing is that with this simulation we can input any video into one of these arrays and process that information in essentially the same way the human eye would," Labram said. "For example, you can imagine these sensors being used by a robot tracking the motion of objects. Anything static in its field of view would not elicit a response, but a moving object would register a high voltage. This would tell the robot immediately where the object was without any complex image processing."

The research was published in Applied Physics Letters (www.doi.org/10.1063/5.0030097).
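Labram's robot-tracking example amounts to locating pixels with a high output voltage, since only moving objects produce one. A minimal sketch of that idea, assuming a simulated output frame like the one described above (the helper name and threshold value are illustrative, not from the paper):

```python
import numpy as np

def locate_motion(response_frame, thresh=0.5):
    """Return the centroid (row, col) of pixels whose simulated sensor
    voltage exceeds a threshold, or None when the scene is static.

    response_frame : 2D array of per-pixel retinomorphic output.
    thresh         : assumed detection threshold, in output-voltage units.
    """
    ys, xs = np.nonzero(np.abs(response_frame) > thresh)
    if ys.size == 0:
        return None  # static scene: output stays near baseline
    # Mean position of responding pixels locates the moving object
    # with no further image processing.
    return (ys.mean(), xs.mean())
```

This is the sense in which the sensor would tell a robot "immediately where the object was": the heavy lifting of separating motion from background happens in the sensor itself, leaving only a cheap thresholding step in software.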