
AI Tool Uses Aerial Imaging to Scan Structures for Wildfire Damage

The DamageMap application identifies buildings as damaged in red or not damaged in green. Researchers developed the platform to provide immediate information about structural damage following wildfires. Courtesy of Galanis et al.
A system developed by researchers from Stanford University and California Polytechnic State University (Cal Poly) called DamageMap uses aerial photography and artificial intelligence to assess wildfire damage to buildings. Rather than comparing before and after photos, the system is able to use machine learning to consider only post-fire images.
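As a rough illustration of that post-fire-only approach, the sketch below, which is not the published DamageMap code, fine-tunes a pretrained ResNet-18 from torchvision as a binary damaged/undamaged classifier on post-fire image patches. The folder layout, hyperparameters, and choice of backbone are illustrative assumptions.

```python
# A minimal sketch (not the published DamageMap pipeline): fine-tune a
# pretrained CNN as a binary damaged/undamaged classifier using only
# post-fire image patches. Paths, hyperparameters, and the ResNet-18
# backbone are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Assumed folder layout: post_fire_patches/{damaged,undamaged}/*.png
train_set = datasets.ImageFolder("post_fire_patches", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: damaged, undamaged

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```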

The current method of assessing damage involves people going door-to-door to check every building. Though the Stanford system is not intended to replace in-person damage classification, it could serve as a scalable supplementary tool by offering immediate results and providing the exact locations of the buildings it identifies.

“We wanted to automate the process and make it much faster for first responders or even for citizens that might want to know what happened to their house after a wildfire,” said lead author and graduate student Marios Galanis. “Our model results are on par with human accuracy.”

The team tested the technique using a variety of satellite, aerial, and drone photography, with results of at least 92% accuracy.

“With this application, you could probably scan the whole town of Paradise in a few hours,” said senior author G. Andrew Fricker, an assistant professor at Cal Poly, referencing the California town destroyed by the 2018 Camp Fire. “I hope this can bring more information to the decision-making process for firefighters and emergency responders, and also assist fire victims by getting information to help them file insurance claims and get their lives back on track.”

Most computational systems cannot efficiently classify building damage because they compare post-disaster photos with pre-disaster images that must come from the same satellite, at the same camera angle, and under the same lighting conditions, imagery that can be expensive to obtain or simply unavailable. Current hardware cannot record high-resolution imagery of every area daily, so such systems are unable to rely on consistent photos, the researchers said.


From post-wildfire imagery alone, DamageMap identifies damage through features such as blackened surfaces, crumbled roofs, and the absence of structures.
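The snippet below sketches how such per-building scoring could work, labeling each footprint red (damaged) or green (undamaged) as in the image above. It reuses the classifier and preprocessing from the earlier sketch; the helper function, footprint format, and class ordering are hypothetical.

```python
# A hypothetical helper showing how per-building labels could be produced
# from a single post-fire scene: crop each known building footprint, score
# it with the classifier from the sketch above, and tag it red or green.
# The footprint format and class ordering (0 = damaged) are assumptions.
import torch
from PIL import Image

def classify_buildings(scene_path, footprints, model, transform):
    """Return {building_id: 'red' | 'green'} for one post-fire image."""
    scene = Image.open(scene_path).convert("RGB")
    model.eval()
    results = {}
    with torch.no_grad():
        for building_id, (left, top, right, bottom) in footprints.items():
            patch = scene.crop((left, top, right, bottom))
            logits = model(transform(patch).unsqueeze(0))
            damaged = logits.argmax(dim=1).item() == 0
            results[building_id] = "red" if damaged else "green"
    return results
```

A caller would pass a dictionary mapping building IDs to pixel bounding boxes, which in practice might come from an existing footprint dataset or a separate building-detection step.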

“People can tell if a building is damaged or not — we don’t need the before picture — so we tested that hypothesis with machine learning,” said co-author and graduate student Krishna Rao. “This can be a powerful tool for rapidly assessing damage and planning disaster recovery efforts.”

Because the team used supervised learning to train DamageMap, the system can be further improved by feeding it more data. The scientists tested the application using damage assessments from Paradise, Calif., after the Camp Fire, and from the Whiskeytown-Shasta-Trinity National Recreation Area after the 2018 Carr Fire.
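To illustrate how additional labeled data could keep improving a supervised model, here is a minimal sketch, again not the authors' code, that resumes training from saved weights on a newly labeled set of assessments. The checkpoint names, folder layout, and preprocessing are assumptions.

```python
# A minimal sketch of the "more data keeps improving it" idea: resume
# supervised training from saved weights on newly labeled assessments.
# The checkpoint names, folder layout, and preprocessing are assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, 2)
model.load_state_dict(torch.load("damage_classifier.pt"))

# New field assessments labeled damaged/undamaged by inspectors
new_set = datasets.ImageFolder("new_assessments", transform=transform)
loader = torch.utils.data.DataLoader(new_set, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    criterion(model(images), labels).backward()
    optimizer.step()

torch.save(model.state_dict(), "damage_classifier_updated.pt")
```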

The researchers said the open-source platform can be applied to any area prone to wildfires, and they hope it could also be trained to classify damages from other disasters, such as floods or hurricanes.

“So far our results suggest that this can be generalized, and if people are interested in using it in real cases, then we can keep improving it,” Galanis said.

Galanis and Rao developed the project during Stanford’s 2020 “Big Earth Hackathon: Wildland Fire Challenge.” They then collaborated with Cal Poly researchers to refine the platform.

The research was published in the International Journal of Disaster Risk Reduction (www.doi.org/10.1016/j.ijdrr.2021.102540).

Published: September 2021