Heat-Assisted Detection and Ranging (HADAR), a patent-pending thermal imaging technology from Purdue University, combines infrared (IR) imaging, machine learning, and thermal physics to visualize target objects in the dark as if it were broad daylight. According to its developers, the technology could have an impact on par with lidar, sonar, and radar by enabling fully passive, physics-aware machine perception.

Traditional sensors that emit signals, such as lidar, radar, and sonar, can encounter signal interference and pose eye-safety risks when they are scaled up. “Each of these agents will collect information about its surrounding scene through advanced sensors to make decisions without human intervention,” professor Zubin Jacob said. “However, simultaneous perception of the scene by numerous agents is fundamentally prohibitive.”

Video cameras designed to work in sunlight or with other sources of illumination are impractical in low-light conditions. Traditional thermal imaging can sense through darkness, inclement weather, and solar glare. However, ghosting — a thermal imaging effect that causes hazy images lacking material specificity, depth, and texture — makes traditional thermal imaging difficult to use for object detection.

“Objects and their environment constantly emit and scatter thermal radiation, leading to textureless images famously known as the ‘ghosting effect,’” researcher Fanglin Bao said. “Thermal pictures of a person’s face show only contours and some temperature contrast; there are no features, making it seem like you have seen a ghost. This loss of information, texture, and features is a roadblock for machine perception using heat radiation.”

Using a computational approach and machine learning, HADAR reconstructs the target’s temperature, emissivity, and texture (TeX), even in total darkness.
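The disentanglement described above builds on the standard thermal-radiation forward model: at each infrared band, the radiance a sensor records mixes the object's own blackbody emission, scaled by its emissivity, with environmental radiation reflected off the surface. Below is a minimal sketch of that forward model using Planck's law; the function names and the single-bounce environment term are illustrative, not Purdue's implementation.

```python
import numpy as np

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance B(lambda, T), in W * sr^-1 * m^-3."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = H * C / (wavelength_m * KB * temp_k)
    return a / (np.exp(b) - 1.0)

def heat_signal(wavelength_m, temp_k, emissivity, env_radiance):
    """Recorded radiance = emitted term + reflected environment term.
    Kirchhoff's law: reflectivity of an opaque surface = 1 - emissivity."""
    return (emissivity * planck_radiance(wavelength_m, temp_k)
            + (1.0 - emissivity) * env_radiance)

# Example: a 300 K object with emissivity 0.9, observed at 10 micrometers,
# bathed in environment radiation equivalent to a 290 K blackbody.
lam = 10e-6
env = planck_radiance(lam, 290.0)
sig = heat_signal(lam, 300.0, 0.9, env)
```

The recovery problem HADAR solves is the inverse of this model: given many spectral samples of the mixed signal, separate the temperature, emissivity, and reflected-environment contributions.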
The TeX information is displayed together in the HSV color space to form the TeX view for the artificial intelligence (AI) model, which is called Textnet. Textnet is a deep neural network designed to perform inverse TeX decomposition: given a hyperspectral cube of data, it decomposes the cube into three maps — a temperature map, an emissivity map drawn from a materials library, and thermal lighting factors. Textnet is trained with a physics-based data reconstruction loss, and it can also be trained with direct supervision when a ground-truth TeX decomposition is available.

The biggest challenge for the researchers was the limited availability of high-quality training data. However, the physics-based loss function compensated for the limited data and enabled effective learning for Textnet.

By disentangling the information within the cluttered heat signal, HADAR sees through pitch darkness as if it were broad daylight. “HADAR vividly recovers the texture from the cluttered heat signal and accurately disentangles temperature, emissivity, and texture, or TeX, of all objects in a scene,” Bao said. “It sees texture and depth through the darkness as if it were day, and also perceives physical attributes beyond RGB, or red, green, and blue, visible imaging, or conventional thermal sensing.”

The team tested HADAR TeX vision on an off-road nighttime scene. HADAR ranging at night was found to outperform thermal ranging, and in daylight it showed accuracy comparable with RGB imaging. Automated HADAR thermography reached the Cramér-Rao bound on temperature accuracy, surpassing existing thermography techniques. “HADAR TeX vision recovered textures and overcame the ghosting effect,” Bao said.
“It recovered fine textures such as water ripples, bark wrinkles, and culverts, in addition to details about the grassy land.”

The researchers also developed an estimation theory for HADAR and derived photonic shot-noise limits that set information-theoretic bounds on the performance of HADAR-based AI.

Planned enhancements to HADAR include shrinking the hardware and speeding up data collection, the researchers said. “The current sensor is large and heavy, since HADAR algorithms require many colors of invisible infrared radiation,” Bao said. “To apply it to self-driving cars or robots, we need to bring down the size and price while also making the cameras faster. The current sensor takes around one second to create one image, but for autonomous cars we need around 30- to 60-Hz frame rate, or frames per second.”

Initially, HADAR TeX vision will be used in automated vehicles and in robots that interact with humans in complex environments. The technology could be further developed for applications in agriculture, defense, geosciences, health care, and wildlife monitoring.

“Our work builds the information theoretic foundations of thermal perception to show that pitch darkness carries the same amount of information as broad daylight,” Jacob said. “Evolution has made human beings biased toward the daytime. Machine perception of the future will overcome this long-standing dichotomy between day and night.”

The research was published in Nature (www.doi.org/10.1038/s41586-023-06174-6).
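The TeX view described earlier is, in effect, a false-color rendering that loads the three recovered maps into the hue, saturation, and value channels of an HSV image. Below is a toy per-pixel sketch; the specific channel assignment (material identity to hue, thermal lighting to saturation, temperature to value) and the normalization ranges are illustrative guesses, not the published mapping.

```python
import colorsys

def tex_pixel_to_rgb(temp_k, material_idx, lighting,
                     t_min=270.0, t_max=320.0, n_materials=10):
    """Render one pixel of a TeX-style view as RGB.

    Channel assignment (illustrative, not the published mapping):
      hue        <- material identity (index into the emissivity library)
      saturation <- thermal-lighting / texture factor, already in [0, 1]
      value      <- temperature, normalized to the scene's range
    """
    hue = material_idx / n_materials
    val = (temp_k - t_min) / (t_max - t_min)
    val = min(max(val, 0.0), 1.0)  # clamp to [0, 1]
    return colorsys.hsv_to_rgb(hue, lighting, val)

# A warm, strongly lit pixel made of material #3 in the library:
r, g, b = tex_pixel_to_rgb(temp_k=300.0, material_idx=3, lighting=0.8)
```

Rendering all three quantities into one image is what lets a viewer (or a downstream model) read material, texture, and temperature at a glance instead of the flat intensity map of conventional thermal imaging.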
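The shot-noise limits the researchers reference can be made concrete with a back-of-the-envelope calculation: for Poisson (photon-counting) noise, the Cramér-Rao bound says the variance of any unbiased temperature estimate is at least mu / (d mu/dT)^2, where mu is the expected photon count. Below is a hedged numerical sketch; the `gain` factor lumping together optics, exposure, and detector efficiency is an arbitrary illustrative value, not a number from the paper.

```python
import math

# Physical constants (SI units)
H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def mean_counts(temp_k, wavelength_m=10e-6, gain=1e-23):
    """Expected photon count in one spectral band for a blackbody at
    temp_k; 'gain' lumps optics/exposure factors (illustrative value)."""
    x = H * C / (wavelength_m * KB * temp_k)
    return gain * (2.0 * C / wavelength_m**4) / (math.exp(x) - 1.0)

def temperature_crb(temp_k, dT=1e-3):
    """Cramer-Rao bound on temperature variance under Poisson
    (shot-noise) statistics: var(T_hat) >= mu / (d mu / dT)^2."""
    mu = mean_counts(temp_k)
    dmu = (mean_counts(temp_k + dT) - mean_counts(temp_k - dT)) / (2 * dT)
    return mu / dmu**2

# Best-case temperature standard deviation (kelvin) for a 300 K scene:
sigma_T = math.sqrt(temperature_crb(300.0))
```

Reaching this bound, as the automated HADAR thermography reportedly does, means no unbiased estimator could extract temperature more precisely from the same photon budget.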