Stanford University researchers have developed an approach to improve the image quality and contrast of holographic displays. The technology may help improve near-eye displays for virtual and augmented reality applications. The approach, called Michelson holography, combines an optical setup inspired by Michelson interferometry with a recent software development to produce the interference patterns necessary to generate digital holograms.

In holographic displays, image quality is limited by optical components known as phase-only spatial light modulators (SLMs). SLMs create the diffracted light that forms the interference pattern necessary for visible 3D images. The problem is that SLMs used for holography tend to exhibit low diffraction efficiency, which significantly reduces image quality and, in particular, contrast.

[Image: Michelson holography shows significant improvements in image quality, contrast, and speckle reduction compared with other conventional methods, such as naïve SGD, shown left. Courtesy of Jonghyun Kim, NVIDIA/Stanford University.]

“Although we’ve recently seen tremendous progress in machine-learning-driven computer-generated holography, these algorithms are fundamentally limited by the underlying hardware,” said Jonghyun Kim, a research team member from NVIDIA and Stanford. “We co-designed a new hardware configuration and a new algorithm to overcome some of these limitations and demonstrate state-of-the-art results.”

Instead of attempting to increase the diffraction efficiency of SLMs, a decidedly difficult task, the researchers designed an entirely new optical architecture. While most setups use a single phase-only SLM, the researchers’ approach uses two.

“The core idea of Michelson holography is to destructively interfere with the diffracted light of one SLM using the undiffracted light of the other,” Kim said. “This allows the undiffracted light to contribute to forming the image rather than creating speckle and other artifacts.”

The researchers paired the new setup with a camera-in-the-loop (CITL) optimization procedure modified specifically for their setup. CITL optimization is a computational method that can be used to optimize a hologram directly or to train a computer model based on a neural network. The procedure enabled the researchers to use a camera to capture a series of displayed images, allowing them to correct small misalignments of the optical system without precise measuring devices.

“Once the computer model is trained, it can be used to precisely figure out what a captured image would look like without physically capturing it,” Kim said. “This means that the entire optical setup can be simulated in the cloud to perform real-time inference of computationally heavy problems with parallel computing. This could be useful for calculating a computer-generated hologram for a complicated 3D scene, for example.”

The system was tested in the lab on a benchtop optical setup, where it was used to display multiple 2D and 3D holographic images that the researchers recorded with a conventional camera. In testing, the display provided significantly better image quality than existing computer-generated hologram approaches. The setup, however, is not yet practical for many settings; it would need to be miniaturized from the benchtop to something small enough for wearable augmented and virtual reality systems.
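To make the dual-SLM idea and the optimization concrete, here is a minimal, purely illustrative sketch in PyTorch. It is not the authors’ code: the resolution N, the diffraction-efficiency constant ETA, the single Fourier-lens propagation step, and the loss function are all assumptions, and the camera capture of a real CITL loop is replaced by an idealized simulation. Each SLM is modeled as modulating only a fraction ETA of the light, with the identical undiffracted remainders canceling between the interferometer’s two arms.

```python
# Illustrative sketch of dual-SLM "Michelson holography" image formation plus
# naive SGD hologram optimization. NOT the authors' implementation: the
# Fourier-plane propagation, ETA, and the loss are placeholder assumptions,
# and the camera-in-the-loop capture is replaced by a pure simulation.
import torch

N = 256                 # SLM / image resolution (assumed)
ETA = 0.85              # assumed diffraction efficiency of each SLM
STEPS, LR = 500, 0.05

# Target amplitude: a bright square on a dark background (stand-in image).
target = torch.zeros(N, N)
target[96:160, 96:160] = 1.0

def forward(phi1, phi2):
    """Field at the image plane for the two-SLM Michelson arrangement.

    Each SLM modulates only a fraction ETA of the light; the rest is modeled
    as an unmodulated plane-wave (undiffracted) term. The beamsplitter
    recombines the arms with a relative pi phase shift, so the identical
    undiffracted terms cancel while both diffracted fields remain available
    to form the image.
    """
    u1 = ETA ** 0.5 * torch.exp(1j * phi1) + (1 - ETA) ** 0.5
    u2 = ETA ** 0.5 * torch.exp(1j * phi2) + (1 - ETA) ** 0.5
    u = (u1 - u2) / 2 ** 0.5          # destructive port: DC terms cancel
    # Fraunhofer (Fourier-lens) propagation to the image plane (assumed model).
    return torch.fft.fftshift(torch.fft.fft2(u, norm="ortho"))

# Random initial phases so the two arms do not cancel everywhere at step 0.
phi1 = (torch.rand(N, N) * 6.28).requires_grad_()
phi2 = (torch.rand(N, N) * 6.28).requires_grad_()
opt = torch.optim.Adam([phi1, phi2], lr=LR)

for step in range(STEPS):
    opt.zero_grad()
    amp = forward(phi1, phi2).abs()
    # Scale-invariant amplitude loss; a real CITL loop would compare against
    # a camera capture of the physically displayed hologram instead.
    s = (amp * target).sum() / (amp * amp).sum()
    loss = torch.nn.functional.mse_loss(s * amp, target)
    loss.backward()
    opt.step()
```

In the paper’s actual camera-in-the-loop procedure, the simulated amplitude inside this loop would be compared against, or corrected by, real camera captures of the displayed hologram, which lets the optimization absorb aberrations and the small misalignments the idealized model cannot predict.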
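Kim’s remark about a trained computer model points to the second use of CITL mentioned above: fitting a neural network that predicts what the camera would capture for a given phase pattern. The sketch below is a hypothetical toy version of that idea; the network architecture, the sin/cos phase encoding, and the synthetic “capture” (an ideal far-field amplitude plus a fixed blur) are all stand-ins for the real hardware data such a model would be trained on.

```python
# Toy sketch of a learned camera proxy: phase pattern in, predicted captured
# amplitude out. Architecture and training data are illustrative assumptions;
# in a real CITL setup the synthetic captures below would be real photographs.
import torch
import torch.nn as nn

N = 64  # reduced resolution for the sketch

class CapturePredictor(nn.Module):
    """Small convolutional proxy for the physical display-plus-camera chain."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, phi):
        # Feed sin/cos of the phase so the network sees a wrap-free encoding.
        x = torch.stack([torch.sin(phi), torch.cos(phi)], dim=1)
        return self.net(x).squeeze(1)

def synthetic_capture(phi):
    """Stand-in for a real camera capture: ideal far-field amplitude plus a
    fixed blur, mimicking an imperfect optical system."""
    amp = torch.fft.fft2(torch.exp(1j * phi), norm="ortho").abs()
    kernel = torch.ones(1, 1, 3, 3) / 9.0
    return nn.functional.conv2d(amp.unsqueeze(1), kernel, padding=1).squeeze(1)

model = CapturePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    phi = torch.rand(8, N, N) * 6.28           # batch of random phase patterns
    with torch.no_grad():
        captured = synthetic_capture(phi)      # would be real captures in CITL
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(phi), captured)
    loss.backward()
    opt.step()
```

Once fit, such a proxy is fast and differentiable, so it could stand in for the physical capture step entirely, which is what makes the cloud-based simulation Kim describes plausible.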
The researchers note that co-designing hardware and software may prove useful for improving other computational displays and for computational imaging more broadly. The research was published in Optica (www.doi.org/10.1364/optica.410622).