
AI Aids in Real-Time Generation of 3D Holograms

A method for generating holograms, introduced by a team at MIT, uses an artificial intelligence program that a consumer-grade laptop can run, producing holograms almost instantly and carrying implications for VR and 3D printing.

Generating holograms via computer typically requires a supercomputer to run the necessary physics simulations; even then, the process is slow and often delivers subpar results. By comparison, the new method enables a consumer-grade computer to generate 3D holographic images in real time, in milliseconds.
This figure shows the experimental demonstration of 2D and 3D holographic projection. The left photograph is focused on the mouse toy (in yellow box) closer to the camera, and the right photograph is focused on the perpetual desk calendar (in blue box). Courtesy of Liang Shi, Wojciech Matusik, et al.

“People previously thought that with existing consumer-grade hardware it was impossible to do real-time 3D holography computations,” said lead author Liang Shi, a Ph.D. student in MIT’s Department of Electrical Engineering and Computer Science. “It’s often been said that commercially available holographic displays will be around in 10 years, yet this statement has been around for decades.”

Shi believes the new approach, “tensor holography,” will bring the goal within reach.

Fundamentally, the difference between a photograph and a hologram lies in the hologram's encoding of both the brightness and the phase of each lightwave, which lets it portray a more lifelike representation of a scene's parallax and depth. To optically capture a hologram, a laser beam is split: half illuminates the subject, while the other half serves as a reference for the lightwaves' phase, generating the sense of depth. Such optically recorded holograms, developed in the mid-20th century, were static and therefore unable to capture motion, and the method produced only a single hard copy.

Computer-generated holography is designed to bypass these challenges by simulating the optical setup digitally. Because each point in the scene sits at a different depth, however, the same operations cannot be applied to every point.


“That increases the complexity significantly,” Shi said.

A supercomputer running these simulations could take up to several minutes to generate a single holographic image. Existing algorithms also do not model occlusion with photorealistic precision.
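The cost of that brute-force simulation is easy to see in a rough sketch (an illustration only, not the researchers' code): every scene point contributes a spherical wave to every hologram pixel, so the work grows with the number of points times the number of pixels. The wavelength, pixel pitch, resolution, and scene points below are assumed values.

```python
import numpy as np

# Illustrative brute-force point-source hologram computation (a sketch,
# not the MIT method): every scene point contributes a spherical wave to
# every hologram pixel, so cost scales with points x pixels.
wavelength = 532e-9                       # green laser, meters (assumed)
k = 2 * np.pi / wavelength                # wavenumber
pitch = 8e-6                              # hologram pixel pitch (assumed)
res = 512                                 # hologram is res x res pixels

# Hologram-plane coordinates at z = 0
xs = (np.arange(res) - res / 2) * pitch
X, Y = np.meshgrid(xs, xs)

# A few example scene points: (x, y, depth, amplitude) -- all assumed
points = [(0.0, 0.0, 0.10, 1.0),
          (1e-3, -5e-4, 0.12, 0.8)]

field = np.zeros((res, res), dtype=np.complex128)
for px, py, pz, amp in points:
    r = np.sqrt((X - px)**2 + (Y - py)**2 + pz**2)   # distance to each pixel
    field += amp * np.exp(1j * k * r) / r            # spherical wave term

# Interference with a unit plane reference wave gives the recorded pattern
hologram = np.abs(field + 1.0)**2
```

A realistic scene has millions of points rather than two, and each added point repeats the full per-pixel calculation, which is why physics-based simulation can take minutes even on powerful hardware.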

The MIT team turned to deep learning, designing a convolutional neural network that uses a series of trainable tensors to roughly mimic the way humans process visual information. Training a neural network typically requires a large, high-quality data set, which the team had to assemble on its own.

The custom database contained 4,000 pairs of computer-generated images, each pairing a picture — including color and depth information for each pixel — with its corresponding hologram. To create the holograms, the researchers used scenes with complex and variable shapes and colors, with pixel depths distributed evenly from the background to the foreground.
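As a hedged illustration of what such a network might look like (the layer count, widths, and input/output conventions here are assumptions, not the published tensor-holography architecture), a small fully convolutional model can map an RGB-D image — three color channels plus one depth channel — to a two-channel output holding the hologram's amplitude and phase.

```python
import torch
import torch.nn as nn

# Illustrative fully convolutional sketch (not the published architecture):
# input is a 4-channel RGB-D image, output is a 2-channel hologram
# (amplitude and phase) at the same resolution.
class HologramNet(nn.Module):
    def __init__(self, width=24, layers=6):
        super().__init__()
        blocks = [nn.Conv2d(4, width, 3, padding=1), nn.ReLU()]
        for _ in range(layers - 2):
            blocks += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
        blocks += [nn.Conv2d(width, 2, 3, padding=1)]
        self.net = nn.Sequential(*blocks)

    def forward(self, rgbd):               # rgbd: (batch, 4, H, W)
        return self.net(rgbd)              # (batch, 2, H, W) amplitude/phase

model = HologramNet()
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params * 4 / 1e6:.2f} MB at 32-bit precision")
```

Even this toy configuration weighs in at well under a megabyte of parameters, which makes the sub-1-MB footprint reported for the compact tensor network plausible.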

To address occlusion, they also provided a new set of physics-based calculations.

Trained on this photorealistic data set, the algorithm optimized its own calculations, steadily improving its ability to generate holograms. The resulting network operated orders of magnitude faster than physics-based calculations.

The method generates holograms in milliseconds from images with depth information, which is provided by typical computer-generated imagery and can also be captured with a multicamera setup or a lidar sensor. The compact tensor network requires less than 1 MB of memory.

“It’s negligible, considering the tens and hundreds of gigabytes available on the latest cellphone,” researcher Wojciech Matusik said.

In VR, the team believes the technology could provide more realistic scenery and eliminate eyestrain and other side effects of long-term VR use. The technology could also see use in displays capable of modulating the phase of lightwaves.

“It’s a considerable leap that could completely change people’s attitudes toward holography,” Matusik said. “We feel like neural networks were born for this task.”

The work was published in Nature (www.doi.org/10.1038/s41586-020-03152-0).

 


Published: March 2021
