Pixel-by-Pixel, Detector Captures Far-Away Targets in All Conditions
The capability to detect and track small, moving objects remotely, in all conditions, is crucial to many surveillance and monitoring missions. It is also an ongoing challenge.
According to Tian Ma, a computer scientist at Sandia National Laboratories, the greatest challenge in physical security surveillance often comes from objects located far from the sensor, which appear much smaller than their actual physical dimensions. Sensor sensitivity diminishes as the distance to the target increases; a target far from the sensor produces a low signal-to-noise ratio (SNR), making it more difficult to detect. The detection limit of remote sensors is bounded by what can be observed in a single image frame.
To fill this technology gap in remote sensing surveillance for small target detection, Sandia scientists led by Ma codeveloped a multiframe detection system that identifies and tracks slow- and fast-moving targets as small as a single pixel, regardless of the object’s distance from the sensor, even when visibility is low. The Multiframe Moving Object Detection System (MMODS) finds curves of motion in streaming video and imagery from satellites, drones, and far-range security cameras and turns them into signals used to detect and track small targets. Image streams from the sensors flow into a computer station, where MMODS processes the data with an image filter, frame by frame, in real time.
Sandia National Laboratories’ Multiframe Moving Object Detection System (MMODS) enables remote sensors to detect small moving objects that would normally be undetectable both to sensors and the human eye. Courtesy of Eric Lundin.
The system combines object detection processing with a dynamic motion estimation algorithm to enhance the target’s SNR, finding, matching, and integrating the target’s signal across a temporal sequence of frames.
As a result, SNR and overall image quality improve: the moving target’s signal is correlated over time, so it steadily accumulates, while background motion, such as that caused by wind, is filtered out because it is random and uncorrelated across frames.
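The general principle can be illustrated with a minimal sketch in Python with NumPy. This is not Sandia’s MMODS code, and the trajectory, frame counts, and noise levels are assumptions chosen for illustration: summing frames along a hypothesized motion path reinforces a correlated single-pixel target while uncorrelated noise averages out, improving SNR roughly with the square root of the number of frames.

import numpy as np

# Illustrative sketch only -- not the MMODS implementation.
rng = np.random.default_rng(0)
num_frames, height, width = 25, 64, 64
noise_sigma = 1.0
target_amplitude = 1.0            # SNR ~ 1:1 in any single frame
velocity = (1, 1)                 # assumed motion, pixels per frame (dy, dx)
start = (10, 10)                  # assumed starting pixel

# Build synthetic frames: Gaussian noise plus a one-pixel moving target.
frames = rng.normal(0.0, noise_sigma, size=(num_frames, height, width))
for t in range(num_frames):
    frames[t, start[0] + velocity[0] * t, start[1] + velocity[1] * t] += target_amplitude

# Shift-and-add: align each frame to the candidate trajectory, then average.
# A real system would not know the velocity and would search many candidate
# motions; here we integrate along the known true path for clarity.
aligned = np.stack([
    np.roll(frames[t], shift=(-velocity[0] * t, -velocity[1] * t), axis=(0, 1))
    for t in range(num_frames)
])
integrated = aligned.mean(axis=0)

single_frame_snr = frames[0, start[0], start[1]] / noise_sigma
integrated_snr = integrated[start[0], start[1]] / (noise_sigma / np.sqrt(num_frames))
print(f"single-frame SNR ~ {single_frame_snr:.1f}")
print(f"integrated SNR   ~ {integrated_snr:.1f}")  # roughly sqrt(25) = 5x gain

In this toy setup, the target is indistinguishable from noise in any one frame but stands out clearly after integrating 25 aligned frames, which mirrors the SNR-boosting behavior the researchers describe.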
The researchers demonstrated MMODS on simulated data containing single-pixel-sized objects with an SNR close to 1:1, meaning there was virtually no distinction between signal and noise. Objects with such a low SNR normally cannot be detected by sensors or the human eye.
The baseline detector system demonstrated a 30% chance of detecting a moving object. When MMODS was added to the system, the chance of detection increased to 90%, without increasing the rate of false alarms.
The researchers further demonstrated that their technique could achieve a similar improvement in real-world conditions. Using live data collected with a remote camera at the peak of Sandia Mountain, they showed that MMODS could detect vehicles moving throughout the city of Albuquerque without being given any information about the city’s roads.
MMODS does not require prior knowledge about the environment, pre-labeled targets, or training data to detect and track moving targets of any size, from any distance. It can be added as a modularized component to serve as an SNR booster in existing detection systems, improving the probability of detection while reducing false alarm rates.
“Given that a modern video camera has about 10 million pixels, being able to detect and track one pixel at a time is a major advance in computer vision technology,” Ma said.
The researchers plan to extend MMODS to handle nonlinear target movements. A patent has been issued for the technology.
The research was published in Sensors (www.doi.org/10.3390/s23063314).