Holographic Technique Uses Deep Learning to Increase Accuracy, Improve Microscopy
Deep learning, one of the key technologies behind advances in real-time speech recognition and automated image and video labeling, is being used to reconstruct holograms to form microscopic images of samples. Researchers are using a convolutional neural network-based method that is trained through deep learning to rapidly perform phase recovery and holographic image reconstruction.
An artificial neural network is used to transform low-resolution microscopic images of samples into high-resolution images, revealing more details of the sample, which could be crucial for pathology and medical diagnostics. Courtesy of Ozcan Research Group/UCLA.
According to researchers, their holographic imaging technique, which uses only one hologram, produces better images than existing methods that use multiple holograms, and it is easier to implement because it requires fewer measurements. The process is also fast: on a laptop computer equipped with a graphics processing unit (GPU), it takes approximately 3.11 seconds to recover the phase and amplitude images of a specimen over a field of view of about 1 millimeter, with approximately 7.3 megapixels in each image channel.
The first step in the deep-learning-based phase retrieval and holographic image reconstruction framework is to "train" the neural network. During training, the network learns the statistical transformation between two images of the same object: the complex-valued image obtained by back-propagating a single intensity-only hologram of the object, and the object’s image reconstructed with a multi-height phase retrieval algorithm, which is treated as the "gold standard" for the training phase. This training/learning process, performed only once, yields a fixed deep neural network that can blindly reconstruct the phase and amplitude images of any object from a single hologram intensity, free from twin-image and other undesired interference-related artifacts.
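The article does not give implementation details, but the "back-propagation" it refers to is typically a free-space (angular spectrum) numerical propagation of the measured hologram intensity back to the object plane. The Python sketch below illustrates that step under assumed, illustrative parameters (the wavelength, pixel pitch, and sample-to-sensor distance are placeholders, not values from the studies); the complex field it returns, still contaminated by twin-image artifacts, is the kind of input the trained network is meant to clean up.

```python
import numpy as np

def angular_spectrum_backpropagate(hologram, wavelength, pixel_size, z):
    """Numerically back-propagate a hologram intensity toward the object plane
    using the angular spectrum (free-space propagation) method."""
    ny, nx = hologram.shape
    # Treat the square root of the measured intensity as the field amplitude
    # at the sensor plane (the phase is unknown, so it starts at zero).
    field = np.sqrt(hologram).astype(np.complex128)

    # Spatial-frequency grid of the sensor array
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)

    # Free-space transfer function for propagation over distance z
    k_sq = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(k_sq, 0.0))
    H = np.exp(-1j * kz * z)   # negative sign: propagate back toward the object
    H[k_sq < 0] = 0.0          # suppress evanescent components

    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative values only (not from the article): 532-nm illumination,
# 1.12-um pixel pitch, 300-um sample-to-sensor distance.
hologram = np.random.rand(512, 512)   # placeholder for a measured hologram
obj_field = angular_spectrum_backpropagate(hologram, 532e-9, 1.12e-6, 300e-6)
amplitude, phase = np.abs(obj_field), np.angle(obj_field)
```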
UCLA researchers validated their technique by reconstructing the phase and amplitude images of three different types of samples: blood smears, Pap smears, and breast tissue sections. They trained a separate convolutional neural network for each sample type. In each case, the neural network learned to extract and separate the features of the true image of the object from light interference and from other physical byproducts of the image reconstruction process.
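Neither the network architecture nor the training procedure is specified in the article; as a purely illustrative sketch, the PyTorch snippet below shows the general shape of such image-to-image training, with the back-propagated field (real and imaginary parts as two channels) as input, the multi-height reconstruction as the target, and one network trained per sample type. The network, loss, and data here are hypothetical placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn

class HologramCleanupCNN(nn.Module):
    """Toy convolutional network mapping a back-propagated hologram
    (real + imaginary channels) to an artifact-free reconstruction."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, kernel_size=3, padding=1),  # output: real + imaginary
        )

    def forward(self, x):
        return self.net(x)

def train_one_network(inputs, targets, epochs=10):
    """Train one network for a single sample type (e.g., blood smears).
    inputs/targets: tensors of shape (N, 2, H, W)."""
    model = HologramCleanupCNN()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()  # pixel-wise loss against the multi-height "gold standard"
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()
    return model

# Placeholder tensors standing in for back-propagated holograms and their
# multi-height phase-retrieval reconstructions.
inputs = torch.randn(8, 2, 128, 128)
targets = torch.randn(8, 2, 128, 128)
blood_smear_model = train_one_network(inputs, targets)
```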
“These results are broadly applicable to any phase recovery and holographic imaging problem, and this deep-learning-based framework opens up myriad opportunities to design fundamentally new coherent imaging systems, spanning different parts of the electromagnetic spectrum, including visible wavelengths and even X-rays,” said Aydogan Ozcan, associate director of the UCLA California NanoSystems Institute and a Howard Hughes Medical Institute professor.
Because the holographic imaging technique was developed without any modeling of light-matter interaction or of the wave equation, there is no need to build a physical model or perform wave calculations for each individual sample. Instead, the physics of light-matter interaction and holographic imaging is statistically inferred by the convolutional neural network through deep learning, using a large number of microscopic images as the "gold standard" during the training phase.
The experimental results indicate that challenging problems in imaging science could be overcome through machine learning, providing new avenues to design powerful computational imaging systems.
“This is an exciting achievement since traditional physics-based hologram reconstruction methods have been replaced by a deep-learning-based computational approach,” researcher Yair Rivenson said.
In a second, separate study, published in the journal Optica, the researchers used the same deep-learning framework to improve the resolution and quality of optical microscopic images, demonstrating that a deep neural network could be used to enhance the spatial resolution of optical microscopy over a large field of view and depth of field.
Such an advance could help diagnosticians and pathologists identify extremely small-scale abnormalities in a large blood or tissue sample. Ozcan said it was an example of how deep learning techniques could be used to improve optical microscopy for medical diagnostics and other fields in engineering and the sciences.
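The Optica paper's network and training details are not described in this article; the sketch below only illustrates, with assumed names and parameters, how a trained resolution-enhancement network of this general kind could be applied tile by tile so that a very large field of view can be processed on modest hardware.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a trained resolution-enhancement network:
# a small residual CNN that predicts a detail correction added to its input.
class ResolutionEnhancer(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual: low-resolution image plus learned detail

def enhance_large_field_of_view(model, image, tile=256):
    """Apply the network tile by tile so an arbitrarily large
    field of view fits in memory. image: tensor of shape (1, 1, H, W)."""
    _, _, h, w = image.shape
    output = torch.zeros_like(image)
    with torch.no_grad():
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                patch = image[:, :, y:y + tile, x:x + tile]
                output[:, :, y:y + tile, x:x + tile] = model(patch)
    return output

model = ResolutionEnhancer().eval()
wide_fov_image = torch.rand(1, 1, 1024, 1024)  # placeholder microscope image
enhanced = enhance_large_field_of_view(model, wide_fov_image)
```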
The research on holographic imaging has been accepted for publication in Light: Science & Applications (doi: 10.1038/lsa.2017.141).
The research on deep learning microscopy has been published by Optica, a publication of OSA, The Optical Society (doi: 10.1364/OPTICA.4.001437).