
Cameras with Facial Recognition Detect Driver Impairment

In 2021, 13,348 deaths occurred on U.S. roadways due to drunk driving — an increase of 14% compared to the previous year, according to a report from the National Highway Traffic Safety Administration (NHTSA). The NHTSA also estimates that, on average, 37 people die every day due to drunk driving.

In response, as part of its 2021 Infrastructure Investment and Jobs Act, Congress implemented a federal requirement that all new passenger vehicles produced for the U.S. have a system in place that can read the “tells” of impaired driving. The deadline could arrive as soon as 2026, which has spurred car manufacturers, their partners, and research institutions to seek innovative, cost-effective ways to detect signs of impairment in drivers without a major overhaul of their systems.

University of Michigan professor of electrical engineering and computer science Mohammed Islam holds up a direct time-of-flight (dToF) sensor (left) and a prototype development kit of a hybrid camera (right). Courtesy of University of Michigan.


A research team led by Mohammed Islam, professor of electrical engineering and computer science at the University of Michigan, developed a system that relies on 3D cameras that can be cost-effectively added to advanced driver-assistance systems (ADAS), which have been used in cars for years. These systems include antilock brakes, traction control, and cruise control, and will eventually provide lane detection, backup monitors, and interior cameras to track driver alertness.

Altering ADAS cameras

Because native interior cameras have already been widely implemented and have proved useful for monitoring alertness in vehicles, the researchers decided that this would be the most viable technology to implement in their own system. The team initially planned to use indirect time-of-flight (ToF) cameras but pivoted after finding that this technology would be impractical for their system.

“When talking with Tier 1 partners, such as Valeo and Denso, they commented that indirect time-of-flight cameras had been used in earlier models of BMWs for gesture control and were deemed to be too expensive,” Islam said.

Instead, they looked toward other areas for their solution.

“We saw that 1-MP and higher resolution IR cameras were already being used in vision-based ADAS systems or driver monitoring systems,” he said. “Since we use the depth to compensate for motion artifacts, we thought we could just get a skeleton or frame from the 3D camera, and then co-register this with the existing ADAS IR camera.”
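To make this co-registration concrete, the sketch below maps a depth frame onto an IR image using a standard pinhole camera model. All calibration values, function names, and image dimensions here are hypothetical assumptions for illustration, not details of the team's system.

import numpy as np

def coregister_depth_to_ir(depth_m, K_depth, K_ir, R, t):
    """Map each depth pixel (in meters) to IR image coordinates.

    K_depth, K_ir: 3x3 intrinsics; R, t: rotation and translation from
    the depth sensor's frame to the IR camera's frame.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])  # homogeneous pixels
    rays = np.linalg.inv(K_depth) @ pix           # rays at unit depth
    pts = rays * depth_m.ravel()                  # 3D points in the depth frame
    pts_ir = R @ pts + t[:, None]                 # move into the IR camera frame
    proj = K_ir @ pts_ir
    return (proj[:2] / proj[2]).reshape(2, h, w)  # perspective divide

# Hypothetical calibration: shared intrinsics, sensors 2 cm apart.
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
depth = np.full((480, 640), 0.5)                  # driver about 50 cm away
uv = coregister_depth_to_ir(depth, K, K, np.eye(3), np.array([0.02, 0.0, 0.0]))
print(uv[:, 240, 320])                            # where that depth pixel lands in IR

In a real vehicle, the rotation and translation would come from a one-time calibration between the added 3D sensor and the existing ADAS IR camera.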

A graphic outlining the variables that the researchers were able to detect through testing their augmented advanced driver-assistance system (ADAS) cameras. Courtesy of University of Michigan and iStock.com/hallojulie.


This augmentation would take the form of either multizone direct time-of-flight (dToF) lidar sensors or a dot projector for structured-light 3D cameras, which, much like a smartphone’s facial recognition application, work with the existing camera to recognize the subtle fluctuations in a driver’s face.
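For a sense of how a dot projector yields depth, structured light works by triangulation: a projected dot shifts sideways in the camera image in inverse proportion to distance. The numbers in the sketch below are made up for illustration.

# Structured-light triangulation with made-up numbers: depth follows from
# depth = focal_length * baseline / disparity, where disparity is how far
# a projected dot shifts in the camera image.
F_PX = 600.0        # focal length in pixels (hypothetical)
BASELINE_M = 0.05   # projector-to-camera separation (hypothetical)

def depth_from_disparity(disparity_px):
    return F_PX * BASELINE_M / disparity_px

for disp in (100.0, 60.0, 43.0):
    print(f"disparity {disp:5.1f} px -> depth {depth_from_disparity(disp):.2f} m")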

dToF versus structured light

While both technologies are viable options, certain considerations must be weighed when choosing between dToF and structured light.

According to Islam, one of these considerations is distance.

“In general, lidar technology, such as dToF, operates out to 4- to 5-m distances, while structured light works typically within about 1 m,” he said.

This difference may be negligible in practice, because the driver typically sits between 30 and 70 cm from the camera, within reach of either technology.

Additional considerations include processing power, electronics, and market size. For example, dToF lidar does not require significant processing power to work properly, so if structured light would force a car manufacturer to draw processing power from an additional source, the manufacturer might forgo it.


Mohammed Islam compares the size of a direct time-of-flight (dToF) sensor to the tip of a pen. Similar to their use in the researchers’ augmentation, dToF sensors are embedded in smartphones for proximity sensing and facial recognition applications. Courtesy of University of Michigan.


On the other hand, if the constraint lies in how quickly the detector electronics can process information, structured light would be the better option, because it can operate much more slowly than dToF, which relies on nanosecond pulses.
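Some back-of-the-envelope arithmetic shows why: at cabin distances, a dToF photon's round trip lasts only a few nanoseconds, so the timing electronics directly set the depth resolution. The numbers below are illustrative only.

# dToF timing arithmetic: round trip t = 2d / c, and a timing uncertainty
# dt maps to a range uncertainty of c * dt / 2.
C = 299_792_458.0  # speed of light, m/s

def round_trip_ns(distance_m):
    return 2.0 * distance_m / C * 1e9

def depth_resolution_cm(timing_resolution_ps):
    return C * timing_resolution_ps * 1e-12 / 2.0 * 100.0

for d in (0.3, 0.7, 5.0):  # driver near/far, and lidar's longer reach
    print(f"{d:>3} m -> {round_trip_ns(d):5.2f} ns round trip")
print(f"100 ps timing -> {depth_resolution_cm(100.0):.1f} cm depth resolution")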

Because a primary goal is to provide a low-cost system, Islam recognized that the preference will most likely depend on which technology has the larger market. Regardless of market size, however, the team’s research projected that the proposed augmentations would cost $5 to $10 each.

Test results

Once the augmentations were made and a pool of participants was assembled, the team was able to start testing. The team implemented AI and machine learning (AI/ML) to acquire data points that would help the ADAS cameras recognize impaired driving.

“The wonderful thing is that everyone is different,” Islam said. “But, for a measurement system, the problem is that everyone is different. Hence, we need[ed] to establish personalized baselines for every driver.”

Over time, the team used AI/ML to collect data on its participants and establish regions of interest from which measurements could be drawn for analysis. These regions of interest included facial landmarks where the system could identify abnormalities in variables such as blood flow.
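A minimal sketch of the region-of-interest step might look like the following; the landmark indices and region choices are invented for illustration, and the landmarks themselves would come from an off-the-shelf facial-landmark detector running on the IR frames.

import numpy as np

ROI_DEFS = {
    # region name -> indices of the landmarks bounding it (hypothetical)
    "forehead": [10, 11, 12, 13],
    "left_cheek": [31, 32, 33],
    "right_cheek": [45, 46, 47],
}

def roi_boxes(landmarks, pad=5):
    """landmarks: (N, 2) array of (x, y) pixel positions from a detector."""
    boxes = {}
    for name, idx in ROI_DEFS.items():
        pts = landmarks[idx]
        x0, y0 = pts.min(axis=0) - pad
        x1, y1 = pts.max(axis=0) + pad
        boxes[name] = (int(x0), int(y0), int(x1), int(y1))
    return boxes

def roi_mean_intensity(ir_frame, box):
    x0, y0, x1, y1 = box
    return float(ir_frame[y0:y1, x0:x1].mean())  # crude blood-flow proxy

# Demo with fake landmarks and a simulated IR frame.
rng = np.random.default_rng(1)
landmarks = rng.uniform(100, 400, (68, 2))       # fake 68-point landmark set
frame = rng.normal(100, 2, (480, 640))           # simulated IR image
boxes = roi_boxes(landmarks)
print({k: round(roi_mean_intensity(frame, v), 1) for k, v in boxes.items()})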

Islam likens the anomaly detection of AI/ML to fraud detection by a credit card company: the system flags variables such as increased blood flow or abnormal eye behavior, just as a bank flags an odd transaction.
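The analogy maps naturally onto a simple per-driver anomaly test. The sketch below, a stand-in for whatever the team's actual models do, keeps running statistics of one signal and flags readings that stray too far from that driver's personal baseline; the threshold and warm-up period are assumptions.

class BaselineFlagger:
    def __init__(self, threshold=3.0):
        self.n, self.mean, self.m2, self.threshold = 0, 0.0, 0.0, threshold

    def update(self, x):
        # Welford's online mean/variance, so the baseline is personalized
        # and adapts as more frames of normal driving are observed.
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)

    def is_anomalous(self, x):
        if self.n < 30:          # need enough history before flagging
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return std > 0 and abs(x - self.mean) / std > self.threshold

flagger = BaselineFlagger()
for v in [100.1, 99.8, 100.3] * 20:   # normal blood-flow-proxy readings
    flagger.update(v)
print(flagger.is_anomalous(100.2), flagger.is_anomalous(106.0))  # False True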

Through this testing, Islam and his team were able to clearly identify five variables closely related to symptoms of driving under the influence: increased blood flow to the face, eye behavior, changes in the driver’s heart rate, head position and body posture, and respiratory rate.

While eye behavior and posture can be measured using a normal 3D camera, and respiratory rate can be gauged by analyzing the rise and fall of the chest, the variables pertaining to vitals could be measured only because of the infrared light supplied by the dot-projection and lidar augmentations.
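As an example of the depth-only measurements, respiratory rate can be estimated from the chest's periodic motion; the sketch below takes the dominant frequency of a chest-region depth signal within a plausible breathing band. The window length, band limits, and simulated signal are all assumptions.

import numpy as np

def respiratory_rate_bpm(chest_depth_mm, fps):
    """Estimate breaths/min from a 1D series of mean chest-ROI depth."""
    x = chest_depth_mm - chest_depth_mm.mean()     # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.1) & (freqs <= 0.7)         # roughly 6 to 42 breaths/min
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Simulated 32 s at 30 fps: 15 breaths/min (0.25 Hz), 4 mm chest excursion.
fps = 30
t = np.arange(32 * fps) / fps
depth = 500 + 4 * np.sin(2 * np.pi * 0.25 * t)
depth += np.random.default_rng(2).normal(0, 0.5, t.size)  # sensor noise
print(f"{respiratory_rate_bpm(depth, fps):.1f} breaths/min")  # about 15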

Different forms of impairment

The researchers are working with Tier 1 partners on research and development and on commercialization of the technology. Denso sponsors part of the work through contracts with the University of Michigan and, in return, receives early test results from the research team.

The support of these partners, along with a push for institutional review board approval of expanded human studies, not only enables a better understanding of the technology’s current capabilities and how it can immediately affect the roadways, but also facilitates further augmentation in the near future.

“The [NHTSA] has said that the three main things to focus on are drunk, drowsy, and distracted driving,” Islam said.

Because of this, the team originally focused on these areas, but Islam anticipates an emerging need to detect drivers who are under the influence of substances other than alcohol, and perhaps even altered states of mind.

“Since cannabis also affects the autonomic nervous system, which controls the facial blood flow and vital signs, we think there may be telltale signs using the vision-based approach,” he said. “Beyond this, many studies have shown that facial expressions as well as facial blood flow can be used to detect emotions of a person. So, down the road, we may also be able to detect driver sentiments, perhaps flagging things like road rage.”


Published: July 2024
Vision in Action
