
Vision Assesses Damaged Utility Poles

HANK HOGAN, CONTRIBUTING EDITOR

After a hurricane or tornado outbreak flattens buildings and destroys infrastructure, crews roll in to assess and repair the damage. Determining whether a wooden utility pole has been destroyed can be difficult: The assessment crew may be from out of the area, and there is no national map of utility pole locations. What’s more, windstorms can wipe out landmarks and disable cellphone towers over a wide area, so a rapid assessment requires covering a lot of ground quickly and reporting back over limited communication channels.

Researchers at Oak Ridge National Laboratory developed a solution that pairs a drone with machine vision, photogrammetry, machine learning, and edge processing technologies to conduct rapid assessments. The result is a means to determine which utility poles need repair and where they are located.

A drone equipped with a high-resolution camera and an onboard processor. This prototype demonstrated the ability to classify utility poles as damaged or undamaged after a natural disaster. Courtesy of Oak Ridge National Laboratory.

“This capability provides beyond rapid line-of-sight airborne assessment of damaged utility poles through constrained comms and onboard, edge processing,” said David Hughes, an Oak Ridge remote sensing scientist and member of the development team.

The goal of the project was to detect, assess, and communicate energy infrastructure problems within 72 hours of a natural disaster, he said. To develop a prototype, the team first investigated available drones and found that none fit the requirements, so they crafted their own solution. They started with a quadcopter Turbo Ace Matrix drone, then added a commercially available 1920- × 1080-pixel high-resolution camera and a PixC4-Jetson, which provided computing power for an Oak Ridge-developed module that acted as flight controller and processor. The group developed the required assessment software in-house, using images of destroyed and damaged utility poles collected in the aftermath of Hurricane Ida and a tornado outbreak in Mississippi. To this collection, they added images captured with their drone while flying it over broken and intact utility infrastructure.
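The article does not spell out the training code, but the approach it describes, training an image classifier on collected pole imagery, can be sketched in a few lines. In the sketch below, the backbone choice, folder layout, and hyperparameters are illustrative assumptions rather than the team's actual configuration:

```python
# A minimal transfer-learning sketch (assumed setup, not the team's code):
# fine-tune a pretrained backbone to label poles as damaged or undamaged.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: pole_images/damaged, pole_images/undamaged
dataset = datasets.ImageFolder("pole_images", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

# Replace the final layer with a two-class head: damaged vs. undamaged
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```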

“These two data collections increased the robustness of the model and improved the overall classification performance,” said Lexie Yang, a research scientist and team member.

The researchers developed classification models based on the collected images and fine-tuned them to optimize accuracy and other performance measures. Then, they deployed the models on the drone for onboard processing. The models could categorize a pole as undamaged or damaged no matter the orientation of the structure. If the wind had snapped a pole in two and put part of it upside down next to a stub sticking out of the ground, the models would properly classify the pole as damaged.
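One common way to make a classifier indifferent to a pole's orientation is to randomize orientation during training. The snippet below is a minimal sketch of that idea using standard augmentation transforms; the specific transforms and parameters are assumptions, not the team's published pipeline:

```python
# Orientation augmentation sketch: random flips and rotations during
# training encourage the model to classify a pole as damaged or undamaged
# regardless of how the structure lies in the frame.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),    # e.g., upside-down pole fragments
    transforms.RandomRotation(degrees=180),  # arbitrary in-plane orientation
    transforms.ToTensor(),
])
```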


Drone images enable onboard assessment of utility poles (blue bounding boxes) as damaged or not, speeding up recovery. Courtesy of Oak Ridge National Laboratory.

The inference model ran on the prototype drone while in flight. Thus, the system could report the location of damage, a few bits at most, rather than transmit an entire image, which would have taken too much bandwidth. Geolocation accuracy for damaged poles was ~8 m at the bottom of the image frame, a consequence of the viewing angle of those pixels. Yang noted that using multiple images would improve geolocation. Even at the largest location uncertainty, the system provides a nearly real-time assessment for repair crews, disaster recovery managers, and others.
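The bandwidth argument is easy to quantify. As a rough illustration, and assuming a hypothetical message format rather than the team's actual protocol, a damage report fits in a handful of bytes, while a single uncompressed 1920 × 1080 frame runs to roughly 6 MB:

```python
# Sketch of a compact damage report (message format is an assumption).
import struct

def pack_report(lat: float, lon: float, damaged: bool) -> bytes:
    # Two 4-byte floats plus a 1-byte flag -> 9 bytes per detection.
    # float32 resolves latitude/longitude to well under the ~8 m
    # geolocation uncertainty reported for the prototype.
    return struct.pack("<ffB", lat, lon, int(damaged))

msg = pack_report(35.9606, -83.9207, True)
print(len(msg))  # 9 bytes, vs. ~6.2 MB for a raw 1920 x 1080 RGB frame
```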

During testing, as described in a paper published in the February 2023 issue of Photogrammetric Engineering & Remote Sensing, the researchers achieved a precision of 89%. That is, when the system flagged a pole as damaged, it was right roughly nine times out of 10, and it worked fast enough to cover more than 5 miles of terrain in the 30 to 40 min that the prototype could stay aloft.
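For readers unfamiliar with the metric, precision is the fraction of "damaged" calls that are correct. The counts below are hypothetical, chosen only to illustrate the calculation:

```python
# Precision illustrated with made-up counts (not the study's raw data).
true_positives = 89   # flagged damaged, actually damaged
false_positives = 11  # flagged damaged, actually intact

precision = true_positives / (true_positives + false_positives)
print(f"precision = {precision:.2f}")  # 0.89: ~nine of ten flags correct
```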

The group plans to move to fixed-wing drones that can stay airborne for hours. Such platforms also look directly down on the landscape, which helps when geolocating a damaged pole. Other plans call for expanding the system’s capabilities so that it can identify other damaged infrastructure, such as substations. Collecting images from different cameras, as well as over various locations containing different types of infrastructure, will improve classification model robustness, Yang said.
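The geolocation advantage of a nadir view can be illustrated with a simplified model: for a camera looking straight down over flat terrain, a pixel's offset from the image center maps directly to a ground offset. The field of view, coordinates, and flat-terrain assumption below are illustrative; the real system's photogrammetry is more involved:

```python
# Simplified nadir geolocation sketch (assumes level camera, flat terrain,
# and an illustrative field of view -- not the team's published method).
import math

def pixel_to_ground(lat, lon, alt_m, px, py,
                    width=1920, height=1080, hfov_deg=78.0):
    # Ground footprint of the frame from altitude and horizontal FOV
    half_w = alt_m * math.tan(math.radians(hfov_deg / 2))
    half_h = half_w * height / width
    # Pixel offset from image center, scaled to meters on the ground
    dx = (px - width / 2) / (width / 2) * half_w    # east-west offset, m
    dy = (height / 2 - py) / (height / 2) * half_h  # north-south offset, m
    # Convert meter offsets to degrees (small-distance approximation)
    dlat = dy / 111_320.0
    dlon = dx / (111_320.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

# A detection 1500 px right and 300 px up from top-left at 60 m altitude
print(pixel_to_ground(35.9606, -83.9207, 60.0, 1500, 300))
```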

Such enhancements, along with improved classification model performance, will make deployment easier and less expensive. Those are important considerations for a system intended for use by local, state, and federal government agencies, as well as other organizations that need an easy-to-implement tool.

The goal of this project, like others at the lab, is to create a demonstrator that solves a problem and, in the process, to do the research that makes it possible for businesses to produce a commercial solution.

“It’s our job to explore areas of national need on difficult problems,” Hughes said.

Published: September 2023
