A New Use for Deep Learning — Hologram Reconstruction
Researchers have used a deep learning-based computational approach to reconstruct a hologram and form a microscopic image of an object. The technique rapidly eliminates twin-image and self-interference-related artifacts using only a single hologram intensity measurement, and it reconstructs improved phase and amplitude images of objects from fewer measurements.
According to the research team, the neural network framework is significantly faster to compute than existing holographic phase recovery methods and could provide a new framework for holographic image reconstruction.
The team from the University of California, Los Angeles (UCLA) validated its method by reconstructing the phase and amplitude images of various samples, including blood smears, Pap smears, and tissue sections. The reconstructions all demonstrated successful elimination of spatial artifacts: once trained, the neural network had learned to extract and separate the spatial features of the true image of the object from undesired light interference and related artifacts.
Researchers achieved hologram recovery without any modeling of light-matter interaction or a solution to the wave equation.
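The article does not detail the processing pipeline. As a rough illustration of the kind of single-shot input such a network could operate on, the sketch below back-propagates one recorded hologram intensity to the sample plane with the standard angular-spectrum method, producing the twin-image-contaminated amplitude and phase that a trained network would then clean up. The wavelength, pixel size, propagation distance, and `artifact_removal_net` are hypothetical placeholders, not details taken from the paper.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a 2-D complex optical field over a distance z in free space
    using the angular-spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)   # spatial frequencies along x (1/m)
    fy = np.fft.fftfreq(ny, d=dx)   # spatial frequencies along y (1/m)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    # Free-space transfer function; evanescent components are dropped.
    H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative (hypothetical) values: a 512 x 512 in-line hologram recorded with a
# green source and ~1.12 um pixels, back-propagated 300 um to the sample plane.
hologram_intensity = np.random.rand(512, 512)   # stand-in for the single measured hologram
measured_field = np.sqrt(hologram_intensity).astype(complex)  # amplitude from intensity, zero phase
object_plane = angular_spectrum_propagate(
    measured_field, wavelength=532e-9, dx=1.12e-6, z=-300e-6)
amplitude, phase = np.abs(object_plane), np.angle(object_plane)
# `amplitude` and `phase` still carry twin-image and self-interference artifacts;
# a trained network (call it `artifact_removal_net`, hypothetical here) would map
# them to the cleaned phase and amplitude images described in the article.
```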
According to researchers, the results are applicable to any phase recovery and holographic imaging problem. Professor Aydogan Ozcan (UCLA and Howard Hughes Medical Institute) said that the deep learning-based framework could open up a myriad of opportunities to design new coherent imaging systems spanning different parts of the electromagnetic spectrum, including visible wavelengths and the x-ray regime.
The research was published in Light: Science & Applications (doi: 10.1038/lsa.2017.141).