IR Imager Merges AI, Thermal Physics to See in the Dark

Heat-Assisted Detection and Ranging (HADAR), a patent-pending thermal imaging technology from Purdue University, combines infrared (IR) imaging, machine learning, and thermal physics to visualize target objects in the dark as if it were broad daylight. According to its developers, the technology could have an impact on par with lidar, sonar, and radar, by enabling fully passive, physics-aware machine perception.

Traditional sensors that emit signals, such as lidar, radar, and sonar, can encounter signal interference and pose risks to eye safety when they are scaled up across large numbers of autonomous agents, such as self-driving vehicles and robots. “Each of these agents will collect information about its surrounding scene through advanced sensors to make decisions without human intervention,” professor Zubin Jacob said. “However, simultaneous perception of the scene by numerous agents is fundamentally prohibitive.”

Video cameras designed to work in sunlight or with other sources of illumination are impractical in low-light conditions. Traditional thermal imaging can sense through darkness, inclement weather, and solar glare. However, ghosting, a thermal imaging effect that produces hazy images lacking material specificity, depth, and texture, makes such imagers difficult to use for object detection.

“Objects and their environment constantly emit and scatter thermal radiation, leading to textureless images famously known as the ‘ghosting effect,’” researcher Fanglin Bao said. “Thermal pictures of a person’s face show only contours and some temperature contrast; there are no features, making it seem like you have seen a ghost. This loss of information, texture, and features is a roadblock for machine perception using heat radiation.”

Using a computational approach and machine learning, HADAR reconstructs a target’s temperature, emissivity, and texture (TeX), even in total darkness. The TeX information is combined in the HSV color space to form the TeX view presented to the artificial intelligence (AI) model, which is called Textnet.
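
The article does not spell out how the three quantities are merged into a single image, but a minimal sketch in Python might look like the following. The assignment of material to hue, texture to saturation, and temperature to value, along with the function and variable names, is an assumption made here for illustration, not the published HADAR mapping.

# Illustrative sketch only: build an HSV composite from per-pixel
# temperature, a material index derived from emissivity, and texture.
# The channel assignment (material -> hue, texture -> saturation,
# temperature -> value) is assumed for illustration.
import numpy as np
from matplotlib.colors import hsv_to_rgb

def tex_view(temperature, material_index, texture, n_materials):
    # temperature, material_index, texture: 2D arrays of equal shape
    h = material_index.astype(float) / max(n_materials - 1, 1)        # hue: material
    s = (texture - texture.min()) / (np.ptp(texture) + 1e-12)         # saturation: texture
    v = (temperature - temperature.min()) / (np.ptp(temperature) + 1e-12)  # value: temperature
    return hsv_to_rgb(np.stack([h, s, v], axis=-1))                   # RGB image in [0, 1]

Normalizing each channel to the unit interval is only a convenience here; any monotonic mapping that keeps the three quantities in separate channels would serve the same illustrative purpose.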

Textnet is a deep neural network designed to perform the inverse TeX decomposition. Given a hyperspectral cube of data, Textnet decomposes it into a temperature map, an emissivity map drawn from a library of material spectra, and a set of thermal lighting factors. Textnet is trained with a physics-based data reconstruction loss, and it can also be trained with direct supervision when ground-truth TeX decompositions are available.
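
The article describes the physics-based loss only at a high level, but the idea can be sketched: the predicted temperature, material weights, and thermal lighting factors are pushed through a simple emission-plus-reflection forward model (emitted radiance weighted by emissivity, plus ambient radiance weighted by one minus emissivity), and the re-synthesized spectral cube is compared with the measured one. The forward model and every name in the snippet, including planck_radiance and reconstruction_loss, are assumptions for illustration rather than the published implementation.

# Sketch of an assumed physics-based reconstruction loss (not the published
# HADAR code). Per pixel and spectral band nu:
#   S_nu ~ e_nu * B_nu(T) + (1 - e_nu) * X_nu
import torch

def planck_radiance(temperature, frequencies):
    # Blackbody spectral radiance B_nu(T); SI constants, frequencies in Hz.
    h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23
    T = temperature.unsqueeze(-1)                        # (H, W, 1) broadcasts over bands
    return (2 * h * frequencies**3 / c**2) / (torch.exp(h * frequencies / (k_B * T)) - 1)

def reconstruction_loss(cube, temperature, mat_weights, light_factors,
                        material_library, env_spectra, frequencies):
    # cube:             (H, W, N) measured hyperspectral radiance
    # temperature:      (H, W)    predicted temperature map
    # mat_weights:      (H, W, M) softmax weights over M library materials
    # light_factors:    (H, W, K) weights over K ambient illumination spectra
    # material_library: (M, N) emissivity spectra; env_spectra: (K, N)
    e = torch.einsum('hwm,mn->hwn', mat_weights, material_library)    # per-band emissivity
    X = torch.einsum('hwk,kn->hwn', light_factors, env_spectra)       # ambient thermal lighting
    S_hat = e * planck_radiance(temperature, frequencies) + (1 - e) * X
    return torch.mean((S_hat - cube) ** 2)               # compare against the measurement

Because a loss of this kind compares the network’s output against the raw measurement itself, it can be evaluated without labeled examples, which is one way a physics-based loss can stretch a small training set.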

The biggest challenge for the researchers was the limited availability of high-quality training data. The physics-based loss function, however, allowed them to compensate for the limited data and train Textnet effectively.

By disentangling information within the cluttered heat signal, HADAR sees through pitch darkness as if it were broad daylight.

“HADAR vividly recovers the texture from the cluttered heat signal and accurately disentangles temperature, emissivity, and texture, or TeX, of all objects in a scene,” Bao said. “It sees texture and depth through the darkness as if it were day, and also perceives physical attributes beyond RGB, or red, green, and blue, visible imaging, or conventional thermal sensing.”

The team tested HADAR TeX vision using an off-road nighttime scene. HADAR ranging at night was found to outperform thermal ranging. In daylight, it showed an accuracy comparable with RGB. Automated HADAR thermography reached the Cramér-Rao bound on temperature accuracy, surpassing existing thermography techniques.

“HADAR TeX vision recovered textures and overcame the ghosting effect,” Bao said. “It recovered fine textures such as water ripples, bark wrinkles, and culverts, in addition to details about the grassy land.”

The researchers also developed an estimation theory for HADAR and derived the photonic shot-noise limits that set information-theoretic bounds on the performance of HADAR-based AI.
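
For readers unfamiliar with the Cramér-Rao bound mentioned above, the generic textbook form of the bound for an unbiased temperature estimate from shot-noise-limited (Poisson-distributed) photon counts is shown below; the HADAR-specific bound derived in the Nature paper rests on the team’s full spectral measurement model, which this expression does not capture.

\operatorname{Var}(\hat{T}) \;\ge\; \frac{1}{I(T)},
\qquad
I(T) \;=\; \sum_{\nu} \frac{1}{\bar{N}_{\nu}(T)}
\left( \frac{\partial \bar{N}_{\nu}(T)}{\partial T} \right)^{2}

where \bar{N}_{\nu}(T) is the expected photon count in spectral band \nu. Reaching this bound means the thermography extracts essentially all of the temperature information that the photon statistics allow.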

Planned enhancements to HADAR include reducing the size of the hardware and increasing the data collection speed, the researchers said.

“The current sensor is large and heavy, since HADAR algorithms require many colors of invisible infrared radiation,” Bao said. “To apply it to self-driving cars or robots, we need to bring down the size and price while also making the cameras faster. The current sensor takes around one second to create one image, but for autonomous cars we need around 30- to 60-Hz frame rate, or frames per second.”

Initially, HADAR TeX vision will be used in automated vehicles and robots that interact with humans in complex environments. The technology could be further developed for agriculture, defense, geosciences, health care, and wildlife monitoring applications.

“Our work builds the information theoretic foundations of thermal perception to show that pitch darkness carries the same amount of information as broad daylight,” Jacob said. “Evolution has made human beings biased toward the daytime. Machine perception of the future will overcome this long-standing dichotomy between day and night.”

The research was published in Nature (www.doi.org/10.1038/s41586-023-06174-6).

Published: August 2023