Depth sensing plays a crucial role in applications including robotics, augmented reality, and autonomous driving. Monocular passive depth sensing techniques are valued for their cost-effectiveness and compact design, which make them an alternative to expensive, bulky active depth sensors and stereo vision systems. While light-field cameras can resolve the defocus ambiguity inherent in 2D cameras and achieve unambiguous depth perception, they compromise spatial resolution and often struggle with optical aberrations.

Researchers led by Hui Qiao of the Institute for Brain and Cognitive Sciences and the Department of Automation at Tsinghua University have presented a compact meta-imaging camera, together with an analytical framework that quantifies monocular depth sensing precision by calculating the Cramér-Rao lower bound of depth estimation. Quantitative evaluations reveal that the meta-imaging camera achieves not only higher precision over a broader depth range than the light-field camera but also superior robustness against changes in signal-background ratio.

(a) The meta-imaging camera integrates a microlens array, CMOS sensor, and piezo stage. (b) Different views of the meta-imaging camera's 4D point spread function. (c) A 2D camera and a meta-imaging camera with identical optical parameters. (d) A board with the letter "H." (e) Experimental images of the board captured by the meta-imaging camera at different distances, with the focus set at 2.48 m. (f) Estimated depth of the board using deconvolution and the point spread function model; "geo error" refers to the theoretical depth estimation error based on geometric optics. Courtesy of Cao, Z., Li, N., Zhu, L. et al., Tsinghua University.

Moreover, both simulation and experimental results demonstrate that the meta-imaging camera maintains the capability to provide precise depth information even in the presence of aberrations.
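The Cramér-Rao lower bound used to quantify depth precision can be made concrete with a short numerical sketch. The toy model below is not the paper's model: it assumes a simple defocus-dependent Gaussian blur under Poisson shot noise, with illustrative photon and background levels. The bound on depth variance is computed as the inverse of the Fisher information, and the sketch reproduces the qualitative trends described in the article: precision falls off away from focus and degrades as background light increases.

```python
import numpy as np

def point_source_image(z, z_focus=2.48, n=41, photons=1e4, bg=5.0):
    """Expected pixel counts for a point source at depth z (toy model).

    The blur width grows with defocus |z - z_focus|; photon count and
    background level are illustrative, not values from the paper.
    """
    x = np.arange(n) - n // 2
    xx, yy = np.meshgrid(x, x)
    sigma = 1.0 + 4.0 * abs(z - z_focus)          # toy defocus model
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    psf /= psf.sum()
    return photons * psf + bg                      # signal plus background

def depth_crlb(z, dz=1e-4, **kw):
    """Cramér-Rao lower bound on depth variance under Poisson noise.

    Fisher information: I(z) = sum_i (d mu_i / dz)^2 / mu_i, and the
    CRLB is 1 / I(z). The derivative is taken by central differences.
    """
    mu = point_source_image(z, **kw)
    dmu = (point_source_image(z + dz, **kw)
           - point_source_image(z - dz, **kw)) / (2 * dz)
    fisher = np.sum(dmu**2 / mu)
    return 1.0 / fisher

# Precision is best near focus and worsens with more background light.
print(depth_crlb(2.6))             # near the 2.48 m focus
print(depth_crlb(3.5))             # far from focus: larger bound
print(depth_crlb(2.6, bg=50.0))    # stronger background: larger bound
```

The same machinery extends to any parametric point spread function model, which is why the bound lends itself to comparing camera architectures such as 2D, light-field, and meta-imaging cameras.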
Because the approach shows promising compatibility with other point spread function engineering methods, the researchers anticipate that the meta-imaging camera may facilitate the advancement of monocular passive depth sensing in various applications.

The meta-imaging camera integrates the main lens, microlens array, CMOS sensor, and piezo stage. By incorporating a scanning mechanism, it overcomes the trade-off between spatial and angular resolution and achieves multisite aberration correction through digital adaptive optics. Hence, it can optically capture depth information even in the presence of aberrations, ensuring accurate and robust depth sensing.

The work presents a compact meta-imaging camera and an analytical framework for quantifying monocular depth sensing precision. The results showed that the meta-imaging camera outperforms the traditional light-field camera, exhibiting superior depth sensing capability and enhanced robustness against changes in signal-background ratio. Simulation and experimental depth estimation results further confirm the robustness and high precision of meta-imaging cameras under the challenging conditions caused by optical aberrations.

Further, the meta-imaging camera complements rather than contradicts stereo vision: replacing the 2D cameras in current stereo vision systems with meta-imaging cameras can enhance their depth sensing performance.

The researchers believe the technique could significantly expand the utility of passive depth sensing in challenging scenarios such as autonomous driving, unmanned drones, and robotics, where accurate and robust depth sensing is crucial. The work also opens new avenues for future advancements in long-range passive depth sensing, overcoming the limitations previously imposed by optical aberrations.

The research was published in Light: Science & Applications (www.doi.org/10.1038/s41377-024-01609-9).