3D imaging has taken off, with applications in cellphones, cars, robots, and more. Now, companies and researchers are looking to what's next: 4D. While 3D imaging captures information about width, height, and depth (x, y, and z), 4D imaging adds another dimension by incorporating time, velocity, or light-matter interactions.

Going 4D also makes economic sense, because the pulses that leave a laser system are expensive. Each laser pulse sends out many photons, but only a fraction return to be detected, and more of them return than are needed for 3D imaging. These extra photons could provide additional information. "You've already invested energy in the laser pulse. So, not recording it is leading to an inefficient system," said George Williams, president of Voxtel Inc., a Beaverton, Ore.-based lidar maker.

Voxtel's offerings illustrate how to extract the most from each pulse. The company's lidar products operate in the near-IR at 905 nm and in the shortwave IR at 1550 nm. As is the case with many lidar systems, Voxtel's systems extract 3D information by measuring angle-angle-range information of the returning pulse. The angle-angle vector defines width and height (x and y), while pulse time of flight (TOF) determines distance (z).

To these measurements, Voxtel adds intensity. The intensity of the returning signal is set, to a degree, by the reflectivity of the object being imaged. A car's metal body, for instance, reflects significantly more light than its windows do. White lane markings on a road return more photons from a pulse than the surrounding black asphalt does. Intensity differences in these reflectivity-driven returns could be used to detect colors or even read lettering on a sign.

Figure 1. With 4D imaging, a camera can capture a scene (a) and determine both distance (b) and velocity (c) at the same time. Courtesy of SiLC Technologies.
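To make the angle-angle-range arithmetic concrete, the sketch below converts one return's time of flight and two beam-steering angles into an (x, y, z) point and carries the measured intensity along as a fourth value. It is a minimal, idealized model; the function name, the spherical-coordinate convention, and the sample numbers are illustrative assumptions, not details of Voxtel's systems.

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def pulse_to_point(tof_s, azimuth_rad, elevation_rad, intensity):
    """Convert one lidar return into an (x, y, z, intensity) point.

    tof_s: round-trip time of flight in seconds
    azimuth_rad, elevation_rad: beam-steering angles (the angle-angle vector)
    intensity: measured return intensity (reflectivity-driven)
    """
    r = C * tof_s / 2.0  # halve the round trip: light travels out and back
    x = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    y = r * math.sin(elevation_rad)
    z = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    return (x, y, z, intensity)

# A return after 400 ns, near boresight, from a bright (high-reflectivity) target:
print(pulse_to_point(400e-9, 0.01, 0.005, intensity=0.82))
# The range works out to ~60 m, so the point lies ~60 m down the z-axis.
```

Dividing the round-trip time by two is what turns TOF into range; everything else is trigonometry that places the point in the sensor's frame.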
Williams said he has recently seen increasing interest from autonomous vehicle makers in this form of intensity-based 4D imaging. Sensors and systems can use the extra information that intensity carries to improve imaging for tasks such as navigation.

As for the photonics, eye-safety requirements limit outgoing optical pulse energy in the near-IR, so inexpensive silicon detectors with limited sensitivity are used there. These detectors, in combination with the reduced pulse power, make it difficult to obtain the high-quality data needed for 4D imaging. At 1550 nm, where pulses of much higher power can be used while still satisfying eye-safety requirements, indium gallium arsenide detectors rule. For many applications, however, the cost of those detectors must come down, Williams said.

Figure 2. A 4D image (HSI point cloud) created by merging lidar and hyperspectral data. A laser (red line, upper left) generates high-spatial-resolution imaging at a single wavelength. This data is merged with a low-spatial-resolution, high-spectral-resolution hyperspectral image extracted from ambient light (yellow line, upper right) reflecting off objects. The combined high-spectral-resolution, high-spatial-resolution HSI point cloud enhances remote sensing. Courtesy of Maximilian Brell/Helmholtz Centre Potsdam.

While TOF is the basis for most lidar, other approaches can enable 4D imaging. For instance, startup SiLC Technologies Inc. of Monrovia, Calif., offers a chip-integrated frequency-modulated continuous-wave (FMCW) lidar. Rather than pulses, these chips send out a frequency-modulated continuous wave of coherent photons. The photons bounce off an object, return, and are detected. Using interference, the lidar chip measures the frequency shift in the returning signal and, from that, derives both the distance of the object and its velocity, yielding a 4D image.

According to Ralf Muenster, SiLC's vice president of marketing and business development, frequency modulation offers advantages over TOF. It is largely immune to ambient light and to other lidars operating nearby, he said, which is typically not the case with TOF techniques. And peak power can be up to 1000× lower than with TOF methods, so less is demanded of the laser.

Even so, TOF has been used more often because frequency modulation has been more expensive to implement. SiLC overcame this issue by putting everything on a single piece of silicon. "We integrated all of the optical functions to do a full-fledged coherent lidar system. So, basically, [the chip includes] the light source, the laser, the detectors, as well as all of the photonic circuitry to do the optical mixing," Muenster said.
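How a frequency shift encodes both range and velocity follows from the standard triangular-chirp FMCW relations: the beat frequency measured during the rising chirp is the range term minus the Doppler term, and during the falling chirp it is their sum, so the two measurements separate cleanly. The sketch below applies that textbook math; the chirp slope, wavelength, and beat frequencies are invented for illustration and do not describe SiLC's chip.

```python
C = 299_792_458.0  # speed of light (m/s)

def fmcw_range_velocity(f_beat_up, f_beat_down, chirp_slope_hz_per_s, wavelength_m):
    """Recover range and radial velocity from triangular-chirp FMCW beats.

    For a triangular chirp, the range term f_r and Doppler term f_d mix as
        f_beat_up   = f_r - f_d
        f_beat_down = f_r + f_d
    so averaging and differencing the two beats separates the terms.
    """
    f_r = (f_beat_up + f_beat_down) / 2.0  # range-induced beat frequency
    f_d = (f_beat_down - f_beat_up) / 2.0  # Doppler shift
    distance = C * f_r / (2.0 * chirp_slope_hz_per_s)  # from f_r = 2*R*slope/c
    velocity = wavelength_m * f_d / 2.0                # from f_d = 2*v/lambda (v > 0: approaching)
    return distance, velocity

# Illustrative numbers: 1550-nm laser, 1-GHz chirp over 10 us (slope = 1e14 Hz/s)
d, v = fmcw_range_velocity(f_beat_up=66.0e6, f_beat_down=67.3e6,
                           chirp_slope_hz_per_s=1e14, wavelength_m=1550e-9)
print(f"range = {d:.1f} m, radial velocity = {v:.2f} m/s")  # ~100 m, ~0.5 m/s
```

A single snapshot of the two beats thus delivers the distance-plus-velocity pairing shown in Figure 1, which is why no separate velocity sensor is needed.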
Figure 3. Self-assembled microlenses for 4D imaging. Objects at various distances, represented by butterflies (a), polarize light in different orientations. Microlenses can provide both 3D and polarization information at the same time by determining which image in the array is the clearest. Details of the array (b) and of how information is extracted (c). Courtesy of Yan-qing Lu and Wei Hu/Nanjing University. Adapted with permission from Reference 2/American Chemical Society.

For detectors, SiLC's chip incorporates germanium into the silicon substrate. The laser subsystem uses indium phosphide flip-chipped onto the silicon as the gain medium for a 1550-nm output. This wavelength is eye safe, allowing the company's products to run at a higher average power and thereby extend the range for sensing low-reflectivity objects beyond 200 m.

In addition to measuring the velocity and reflectivity of an object, SiLC's lidar can also determine polarization, Muenster said. The outgoing beam is linearly polarized, and the chip captures the polarization of the returning photons, enabling the extraction of more information about the object.

SiLC is working with strategic partners to deploy its chips in products for automotive and outdoor applications. The chips could also be used for facial recognition, in biometrics, or in military applications. And although the current chip emits at 1550 nm, Muenster said, an array of chips could be assembled, each with a different resonant cavity geometry and hence a different center wavelength. "The chip is tiny, so you can have multiple channels," he said. "There are a lot of things you can do to get multiple wavelengths."

Another way to achieve 4D imaging is to interrogate objects at various wavelengths and tie that spectral information to specific points. While SiLC has yet to build a device with these capabilities, researchers have demonstrated how this could be accomplished. One method uses 3D imaging to anchor hyperspectral data, or pixel-by-pixel readings across the electromagnetic spectrum. An example of this approach appeared in a March 2019 paper (Reference 1) by a team from Helmholtz Centre Potsdam and the University of Potsdam in Germany. According to Maximilian Brell, a research associate at Helmholtz Centre Potsdam and the paper's lead author, high spectral resolution in hyperspectral imaging (HSI) always comes at the expense of spatial discrimination.

So the team merged HSI with the distinctly different technique of lidar scanning to enhance remote sensing. The hyperspectral sensor's resolution, Brell said, "can be sharpened with the laser system, which has high spatial resolution in one single channel, and a meaningful spectral signal can be assigned to each lidar point."

The lidar scanner used a 1550-nm laser to determine xyz information. The hyperspectral imager consisted of two cameras, one covering the visible and near-IR from 400 to 1000 nm and the other covering the shortwave IR from 1000 to 2500 nm. The researchers mounted both instruments on an airplane and flew the package over terrain, collecting data.

Using software, the team matched the two data sets. Because the lidar data has significantly higher spatial resolution, the researchers assigned an unmixed hyperspectral signal to each of the several lidar points that fell within a single hyperspectral pixel. This approach requires significant computational power and memory, perhaps three or four times as much as is needed for lidar and HSI alone, Brell said. The increase in computation is set by the ratio between the hyperspectral spatial resolution and the lidar point density. Improvements to software and algorithms, he said, could make the calculations more efficient and reduce the computational burden.

The payoff for this type of 4D imaging shows up in remote sensing of terrain with a lot of variation. According to Brell, promising applications include city mapping, forest monitoring, and environmental monitoring. Urban settings, for example, contain structures, vegetation, roadways, and more, and fusing HSI with 3D imaging can reveal details such as the state of vegetation or other aspects of the landscape. The sensor combination is poorly suited, however, to detecting small moving objects, such as the animals found in a city's green spaces.

An added advantage is that the system can compare the lidar return at 1550 nm with the hyperspectral readings, which allows the readings to be adjusted to provide a true surface reflectance measurement under various lighting conditions. Because this reflectance information is independent of ambient lighting, it brings a considerable advantage, Brell said.
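The heart of that matching step is to hand every high-resolution lidar point the spectrum of the coarse hyperspectral pixel it lands in. The sketch below shows a deliberately simplified, nearest-pixel version of this assignment for gridded, georeferenced data; the function name and array layout are assumptions, and the unmixing and calibration steps of the published pipeline are omitted.

```python
import numpy as np

def fuse_lidar_hsi(points_xyz, hsi_cube, origin_xy, pixel_size_m):
    """Attach a hyperspectral signature to every lidar point.

    points_xyz:   (N, 3) array of lidar points in map coordinates
    hsi_cube:     (rows, cols, bands) georeferenced hyperspectral image
    origin_xy:    map coordinates of the cube's (0, 0) pixel corner
    pixel_size_m: ground sampling distance of the hyperspectral pixels

    Returns an (N, 3 + bands) "HSI point cloud": several lidar points
    typically fall inside one coarse hyperspectral pixel and share its spectrum.
    """
    rows, cols, bands = hsi_cube.shape
    col_idx = ((points_xyz[:, 0] - origin_xy[0]) / pixel_size_m).astype(int)
    row_idx = ((points_xyz[:, 1] - origin_xy[1]) / pixel_size_m).astype(int)
    col_idx = np.clip(col_idx, 0, cols - 1)  # keep edge points inside the cube
    row_idx = np.clip(row_idx, 0, rows - 1)
    spectra = hsi_cube[row_idx, col_idx, :]  # one spectrum per lidar point
    return np.hstack([points_xyz, spectra])

# Toy example: 1000 lidar points over a 100 x 100 pixel, 50-band cube
pts = np.random.rand(1000, 3) * [300.0, 300.0, 30.0]
cube = np.random.rand(100, 100, 50)
cloud = fuse_lidar_hsi(pts, cube, origin_xy=(0.0, 0.0), pixel_size_m=3.0)
print(cloud.shape)  # (1000, 53): xyz plus 50 spectral bands per point
```

The output is the kind of HSI point cloud shown in Figure 2: xyz structure from the lidar, spectral identity from the hyperspectral cube.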
Finally, while many 4D imaging approaches involve some form of lidar, imaging in more than three dimensions doesn't require a laser. In a multi-institutional November 2019 paper (Reference 2), Chinese researchers reported an approach that used liquid crystal microlenses to capture 3D images of objects, as well as polarization information about the light interacting with those objects.

Normally, this type of 4D imaging would require expensive equipment and sophisticated operations, said Yan-qing Lu, a professor of engineering and applied science at Nanjing University in Nanjing, China, and a co-author of the paper. Instead, the researchers achieved the imaging using patterned arrays of liquid crystals arranged in concentric circles. They created microlenses whose size, and therefore focal length, increased with each concentric circle from the center of the array to the edge. Liquid crystal microlenses are polarization dependent: The clearest image forms when the polarization of the incoming light aligns with the orientation of the microlens. The investigators took advantage of this dependency by varying the lens orientations around the concentric rings.

With this arrangement, the researchers determined the distance to an object and the polarization of the light coming from it by taking a snapshot of the object through the array and finding the clearest resulting image. In a proof of principle, they imaged various millimeter-size objects while simultaneously achieving a polarization resolution of ~3°.

The fabrication technique is fairly simple and could therefore support high-volume production, according to Lu. When asked about potential applications, he listed remote sensing and communications as two possibilities. He added, however, that further research and development is needed. For instance, the resolution of microlenses is usually lower than that of commercial imaging systems, and this deficit poses a potential problem for applications. But material characteristics may help overcome the drawback, according to Wei Hu, an optical engineering professor at Nanjing University and a co-author of the paper. "Thanks to the responsivity of liquid crystals, the phase profiles of superstructures can be precisely tuned by applying an external field, such as electric and light fields, which will facilitate improving the resolution," he said.

As this work and the other examples show, 4D imaging is under active development. The efforts are bringing such imaging, and the applications it may enable, into clearer focus.

References

1. M. Brell et al. (2019). 3D hyperspectral point cloud generation: fusing airborne laser scanning and hyperspectral imaging sensors for improved object-based information extraction. ISPRS J Photogramm Remote Sens, Vol. 149, pp. 200-214, www.doi.org/10.1016/j.isprsjprs.2019.01.022.

2. L.-L. Ma et al. (2019). Self-assembled asymmetric microlenses for four-dimensional visual imaging. ACS Nano, Vol. 13, Issue 12, pp. 13709-13715, www.doi.org/10.1021/acsnano.9b07104.