Researchers at Fudan University have developed an ionic-electronic photodetector capable of simultaneously detecting light and performing in-sensor image processing. According to the researchers, the device has the potential to surpass certain limitations of human vision, including color vision deficiencies.

At the core of the innovation is a layered photodetector based on copper indium phosphorus sulfide (CIPS), a van der Waals ferroelectric material that supports both ionic and electronic conduction. By leveraging the motion of mobile Cu+ ions, the device exhibits nonlinear, history-dependent photoresponses, enabling it to dynamically tune its sensitivity to light. This reconfigurability allows the device to selectively enhance weak signals or suppress overexposed regions, functionality that may help address perceptual gaps in human vision, such as poor contrast adaptation or limited color discrimination.

In-sensor image transformations using the copper indium phosphorus sulfide photodetector developed by Fudan University researchers. Courtesy of Fudan University.

“This is a step toward in-sensor computing, a paradigm in which part of the computation is physically embedded within the sensor,” said lead author Hai Huang. “Instead of simply converting light into electrical signals, our device can process information as it captures it. This not only reduces power consumption but also enables fast, adaptive vision responses.”

One promising direction for the technology lies in its potential to assist people with color vision deficiencies. The device’s ability to reweight spectral contrast and modulate responsivity in real time could enable adaptive visual preprocessing, for example by enhancing the color contrast between red and green. The work opens the possibility of chip-based visual aids or prosthetic components that improve color separability and object recognition in complex visual scenes.

The photodetector can also perform basic image operations such as noise removal, contrast enhancement, and image inversion filtering, all in situ, within the device structure. Because these functions are carried out without external circuitry, the design avoids the data bottlenecks and energy overhead of conventional image sensors and processors, making it well suited to low-power edge AI applications.

The team further demonstrated that the detector exhibits programmable photoresponse behaviors that vary with light exposure history, such as switching between positive and negative response modes depending on the illumination environment. These dynamics are reminiscent of biological visual adaptation, but the artificial system offers greater tunability and speed.

“The future of artificial vision isn’t just about copying biology; it’s about pushing beyond it,” said Huang. “With ionic-electronic materials, we can embed intelligence at the material level. This opens up possibilities for real-time adaptation, including potential benefits for people with impaired vision.”

Looking ahead, the researchers plan to scale the technology toward two-dimensional sensor arrays and explore its integration into neuromorphic imaging systems. Although vision enhancement for color blindness remains a long-term goal, this work lays the foundation for a new class of smart pixels that blur the line between sensing and thinking, offering not only better machines but potentially better vision itself.

The research was published in Nature Communications (www.doi.org/10.1038/s41467-025-62563-7).
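The in-sensor operations described above (noise removal, contrast enhancement, and inversion) can be pictured with a minimal software sketch. The snippet below is purely illustrative and is not the authors' implementation: in the reported device these transformations occur physically, through responsivity tuned by Cu+ ion migration at each pixel, whereas here NumPy simply applies the equivalent image math to a synthetic noisy grayscale frame. All function names and parameters are assumptions chosen for clarity.

```python
import numpy as np

# Illustrative only: the CIPS device performs these transformations in the sensor
# itself, by tuning per-pixel responsivity through Cu+ ion migration. Here we mimic
# the resulting image math in software on a synthetic noisy grayscale frame.

rng = np.random.default_rng(0)
gradient = np.tile(np.linspace(40, 200, 64), (64, 1))             # smooth ramp scene
frame = np.clip(gradient + rng.normal(0, 15, (64, 64)), 0, 255)   # add sensor noise

def denoise(img, k=3):
    """Box-filter smoothing; a stand-in for the device's noise suppression."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def enhance_contrast(img):
    """Linear stretch to the full 0-255 range (contrast enhancement)."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-9) * 255.0

def invert(img):
    """Image inversion, loosely analogous to the negative-photoresponse mode."""
    return 255.0 - img

processed = invert(enhance_contrast(denoise(frame)))
print("input  range:", float(frame.min()), float(frame.max()))
print("output range:", float(processed.min()), float(processed.max()))
```

According to the article, the device realizes comparable operations directly during capture, without external circuitry, so the processed frame would emerge from the sensor readout rather than from a downstream processor as in this software stand-in.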