

For Self-Driving Cars, Sensors Galore

HANK HOGAN, CONTRIBUTING EDITOR

Self-driving cars bring both opportunities and challenges to photonics-based sensors. Autonomous vehicles will need dozens of sensors of many types — lidar, camera, radar, and ultrasonic — and many sensors on millions of vehicles make the potential opportunity significant.

The challenges are many as well. Cameras and lidars must operate for years across widely varying temperatures, illumination levels, and target reflectivities, while reliably producing the data needed for safe navigation. This performance has to be delivered at low cost, a feat that could be likened to providing aerospace-quality technology at consumer prices.



Lidar image, color coded to reflect intensity, with the matte black tires of the vehicles easily seen. Such capability is important because pieces of rubber tires are often present on roadways and self-driving cars need to avoid them. Courtesy of Quanergy Systems.

So, lidar and camera vendors are innovating. If self-driving cars make their initial mass-market debut in a few years, it will be in part because of such advances.

The requirement for many types of sensors has its roots in the basic capabilities of the various technologies, noted Alexis Debray, technology and market analyst at Yole Développement. “Cameras have a good spatial resolution but lack depth information,” he said. “Radars have a good sensing of depth but are limited in lateral spatial resolution. Lidar can achieve both with medium resolution.”

According to Debray, cameras offer 100× the spatial resolution of radar. At about 20× the resolution of radar and a fifth that of cameras, lidar falls in between the other two, filling a gap in coverage.

There’s another reason to use multiple sensor types. “For safety reasons, redundancy of information from several sensors is needed,” Debray said.

Thus, both lidar and radar are used to provide information about distance to an object, along with the rate at which the gap changes. Cameras capture color and other characteristics. This sensor data is used to classify objects as benign, such as a blowing plastic bag, or critical to avoid, such as a pedestrian or another vehicle. Cameras also enable reading of street signs and detection of lane markings.



Raw lidar sensor data as viewed through software from VeloVision of San Jose, Calif. This type of information will be used by self-driving cars for navigation. Courtesy of Quanergy Systems.

Lidar is being installed in mass-market cars for the first time, although some of these systems offer only limited coverage and range. Systems intended for self-driving cars, however, need to see all the way around the vehicle, at high enough resolution and far enough out. These requirements can pose a challenge, said Anand Gopalan, chief technical officer at Velodyne LiDAR of San Jose, Calif. The company is involved in many autonomous vehicle projects.

Peak power

Suppose a self-driving car is traveling at 75 mph, and a couple hundred meters (less than 1000 ft) ahead, black tire fragments litter the road. Tire fragments have about 10 percent reflectivity, according to Gopalan. In this scenario, he said, billions of photons will go out but only a few hundred may come back.

“We need high peak power on the laser,” Gopalan said, because this gets as many photons as possible into a distance-determining pulse. “The detector needs high sensitivity and low noise.”
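How few photons survive the round trip can be estimated with a simplified lidar link budget. The short Python sketch below assumes a diffuse (Lambertian) target and illustrative values for aperture size, optical efficiency, and transmitted photon count; it is not based on any particular vendor's design.

import math

def returned_photons(tx_photons, reflectivity, range_m,
                     aperture_diameter_m=0.025, optics_efficiency=0.5):
    # Simplified link budget for a diffuse (Lambertian) target:
    #   N_rx ~= N_tx * rho * eta * A_rx / (pi * R^2)
    # All numeric values here are illustrative assumptions.
    aperture_area = math.pi * (aperture_diameter_m / 2) ** 2
    geometric_fraction = aperture_area / (math.pi * range_m ** 2)
    return tx_photons * reflectivity * optics_efficiency * geometric_fraction

# A microjoule-class pulse at 905 nm carries on the order of 10^12 photons;
# a 10 percent reflective tire fragment 200 m away sends only a tiny fraction back.
n_rx = returned_photons(tx_photons=1e12, reflectivity=0.10, range_m=200.0)
print(f"Photons reaching the detector: roughly {n_rx:.0f}")

For these assumed values the answer lands around 200 photons, which is why peak pulse power and detector sensitivity are pushed so hard.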



Lidar image, color coded to reflect intensity. The lane markings “pop” when viewed by intensity. Courtesy of Quanergy Systems.

The information that comes back must be assembled into a point cloud of distance data, which then needs to be interpreted. The system may have only a second to do this and take some sort of action, because the stopping distance of a car traveling at highway speeds can be a hundred meters or more.
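A rough calculation shows why the time budget is so tight. The reaction time and braking deceleration below are generic assumptions, not figures from the article's sources:

MPH_TO_MPS = 0.44704

def stopping_distance(speed_mph, reaction_time_s=1.0, decel_mps2=7.0):
    # Reaction distance plus braking distance for a straight-line stop.
    # reaction_time_s and decel_mps2 are generic assumptions (roughly a
    # passenger car braking hard on dry pavement).
    v = speed_mph * MPH_TO_MPS
    reaction = v * reaction_time_s       # ground covered before braking begins
    braking = v ** 2 / (2 * decel_mps2)  # from v^2 = 2 * a * d
    return reaction + braking

v75 = 75 * MPH_TO_MPS
print(f"75 mph is {v75:.1f} m/s, so each second of processing costs about {v75:.0f} m of road.")
print(f"Estimated stopping distance from 75 mph: about {stopping_distance(75):.0f} m")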

Industry players are starting to plan for the first wave of autonomous vehicles, with an anticipated rollout of a few hundred thousand by 2022, Gopalan said. He added that in terms of performance, scalability, and cost, sensors are right where they need to be for this initial wave. As volumes increase, costs will come down, he predicted.

One way to cut costs may be by making the sensors smarter and therefore able to provide a more highly abstracted view of objects. It takes power to push data through a car’s network back to a central processing unit, where analysis requires even more energy and computational resources. A smarter lidar could reduce those demands and thus cut costs. There’s no consensus in the industry on whether smarter sensors are the way to go, but Gopalan said some sort of greater data abstraction will probably be necessary in the future.
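The bandwidth argument behind smarter sensors can be sketched with back-of-the-envelope numbers. The point rate, bytes per point, and object-list sizes below are assumptions chosen only to show the scale of the difference between shipping raw points and shipping abstracted objects:

def raw_pointcloud_mbps(points_per_second, bytes_per_point=16):
    # Network load for streaming raw lidar points; 16 bytes assumes x, y, z
    # and intensity stored as 32-bit values. The point rate used below is an
    # assumed round number, not a specific product's specification.
    return points_per_second * bytes_per_point * 8 / 1e6

def object_list_mbps(objects_per_frame, frames_per_second, bytes_per_object=64):
    # Load when the sensor ships abstracted object tracks instead of raw points.
    return objects_per_frame * frames_per_second * bytes_per_object * 8 / 1e6

print(f"Raw point cloud:        ~{raw_pointcloud_mbps(1_000_000):.0f} Mbps")
print(f"Abstracted object list: ~{object_list_mbps(100, 25):.2f} Mbps")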



CMOS image sensors such as this one provide a range of resolutions suited to new-generation driver assist and autonomous driving systems. The sensors help classify objects and thereby enable safe navigation. Courtesy of Quanergy Systems.

Another way to meet autonomous vehicle demands is by rolling out new technology. Quanergy Systems of Sunnyvale, Calif., provides one example. According to Louay Eldada, CEO and co-founder, the company’s core expertise lies in optical phased array technology.

Via interference among thousands of optical antenna elements, the company’s chips form a beam that can be steered without using any moving parts. The result is smaller and more rugged than a mechanical equivalent, Eldada said. The systems need little power, he added, in part because the fast actuation enables a small interrogating beam spot to hop around rapidly. Consequently, the surroundings can be covered by a low intensity laser at a high frame rate.

“It’s an IR laser beam. We operate at 905 nm because we want to use silicon for all of our chips — emitter and receiver,” Eldada said.
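The steering principle behind an optical phased array can be illustrated with the standard linear-array phase relationship. The element count and pitch below are hypothetical, not Quanergy's actual chip parameters; only the 905-nm wavelength comes from the article:

import math

WAVELENGTH_NM = 905.0  # operating wavelength cited in the article

def element_phases(num_elements, element_pitch_nm, steer_angle_deg,
                   wavelength_nm=WAVELENGTH_NM):
    # Phase ramp for a uniform linear optical phased array: steering the main
    # lobe to angle theta requires an adjacent-element phase step of
    #   delta_phi = 2 * pi * d * sin(theta) / lambda.
    # The array geometry passed in below is hypothetical.
    theta = math.radians(steer_angle_deg)
    delta_phi = 2 * math.pi * element_pitch_nm * math.sin(theta) / wavelength_nm
    # Wrap each element's phase into [0, 2*pi), as a phase modulator would.
    return [(n * delta_phi) % (2 * math.pi) for n in range(num_elements)]

# Steer a hypothetical 64-element, 2-micron-pitch array to 10 degrees off axis.
phases = element_phases(num_elements=64, element_pitch_nm=2000.0, steer_angle_deg=10.0)
print([round(p, 2) for p in phases[:8]], "...")

Changing the phase ramp electronically repoints the beam, which is why there are no moving parts to wear out.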

Multiple lidars

Several lidars are likely to come on each vehicle, Eldada noted. The autonomous car must be able to see everything around it, and automakers want a refresh rate of up to 50 fps. This is difficult to accomplish with a single lidar. If the sensor is mounted where it doesn't affect the body styling, the vehicle itself will partially block its view. As a result, each self-driving car will need anywhere from three to five lidars, Eldada said, with four being the optimum number. They will need to be in the price range of a few hundred dollars per lidar.

Startup LeddarTech of Quebec City brings another approach to hitting cost and performance targets. Frantz Saintellemy, president and CEO, said the company develops core sensing technologies and provides its customers with complete instructions on how to construct lidar systems for automotive applications.

The technology is based on signal acquisition and processing techniques that extract more information from each returning pulse. This leads to acceptable performance with less costly components, according to Saintellemy. For instance, the company's reference design is able to look hundreds of meters out while using 905-nm lasers and detectors, which are less capable and much less expensive than 1550-nm sources and detectors.

“Today we chose to focus on 905 nm because that’s the more practical way,” Saintellemy said. “As 1550 technology becomes more standardized, we can significantly increase our performance by choosing 1550. But today we don’t need that to achieve the performance that’s required.”
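One generic way to extract more from inexpensive 905-nm components is to accumulate many returns from the same range bin, trading acquisition time for signal-to-noise ratio. The sketch below illustrates that roughly square-root-of-N improvement with synthetic data; it is not LeddarTech's specific algorithm:

import random
import statistics

def accumulated_snr(num_pulses, signal=1.0, noise_sigma=2.0, trials=2000, seed=1):
    # Empirical SNR after averaging num_pulses noisy returns from the same
    # range bin. With independent noise the improvement is roughly sqrt(N).
    # Signal and noise levels are arbitrary, for illustration only.
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        samples = [signal + rng.gauss(0.0, noise_sigma) for _ in range(num_pulses)]
        estimates.append(statistics.mean(samples))
    return statistics.mean(estimates) / statistics.stdev(estimates)

for n in (1, 16, 64):
    print(f"{n:3d} pulses accumulated -> SNR of about {accumulated_snr(n):.1f}")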

Multiple cameras

Self-driving cars may carry more than 10 cameras per vehicle, according to Yole Développement. This compares to a third that number, at most, in today’s cars. What’s more, cameras will have to be less noisy, better calibrated, and more robust than those in the most advanced cars today.

“Cameras for self-driving cars are similar to those used in machine vision in factories,” Debray said.

The resolution of the cameras will range from 2 to 8 MP initially, with a transition to higher resolutions later, according to Andy Hanvey, head of automotive marketing at OmniVision Technologies of Santa Clara, Calif. This imaging capability brings benefits.

“As resolution gets higher, this enables the camera to see farther away and detect objects or hazards sooner,” Hanvey said. “This is important for an autonomous car, as it can see hazards from a greater distance.”
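The link between resolution and detection distance is geometric: more pixels across a given field of view means more pixels on a distant object. The sensor widths and the 60-degree lens below are assumed values for illustration:

import math

def pixels_on_target(target_width_m, range_m, horizontal_pixels, hfov_deg):
    # Pinhole-camera estimate of how many pixels a target spans horizontally:
    # target angular width divided by the per-pixel angular width. The sensor
    # widths and lens used below are illustrative assumptions.
    per_pixel_rad = math.radians(hfov_deg) / horizontal_pixels
    target_rad = 2 * math.atan(target_width_m / (2 * range_m))
    return target_rad / per_pixel_rad

# A 0.5-m-wide pedestrian seen at 100 m through a 60-degree lens:
for label, h_pixels in (("2 MP (1920 px wide)", 1920), ("8 MP (3840 px wide)", 3840)):
    px = pixels_on_target(0.5, range_m=100.0, horizontal_pixels=h_pixels, hfov_deg=60.0)
    print(f"{label}: about {px:.0f} pixels across the target")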

One reason for the increased robustness requirement is that autonomous vehicles may spend significant time on the road. Today a car normally sits idle 22 or 23 out of every 24 hours. Self-driving cars may spur an expansion of ride-sharing. If so, the ratio between use and nonuse could flip, with fully autonomous cars running 20-plus hours a day.

“This requires higher reliability for the sensors in the same way that industrial or commercial equipment must be more rugged than consumer-grade equipment,” said Geoff Ballew, senior director of marketing in the automotive sensing division of ON Semiconductor, a Phoenix-based chipmaker. According to Ballew, the company is the leading supplier of automotive-grade image sensors.

Reliability also requires cybersecurity. A malicious hack of a self-driving car could lead to a deadly outcome. The industry is addressing this through a standards body. ON Semiconductor is already putting the suggested features into its sensing products, Ballew said.

As many as 20 cameras of varying resolutions could ultimately be placed in self-driving cars, according to Ballew. This number would ensure the vehicle can adequately see what is going on around it, without having views blocked, and with more than one camera covering every viewpoint. Imaging must be clear enough that a processor can combine what a camera sees with the information other sensors report, then decide and act quickly.

“The key is to get 360-degree coverage on the car, with any segment in the field of view being covered by at least one camera and with both near-range and far-range coverage. This allows for every angle of the car being covered with built-in redundancy,” Ballew said.
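Such a layout can be sanity-checked with a simple azimuth-coverage count. The eight-camera ring below is a hypothetical arrangement, not a layout described by ON Semiconductor:

def coverage_counts(cameras):
    # Count how many cameras cover each 1-degree azimuth bin around the car.
    # Each camera is (mount_azimuth_deg, horizontal_fov_deg).
    counts = [0] * 360
    for azimuth, hfov in cameras:
        start = azimuth - hfov / 2
        for step in range(int(hfov)):
            counts[int(start + step) % 360] += 1
    return counts

# Hypothetical eight-camera ring: four wide near-range plus four narrow far-range.
ring = [(0, 120), (90, 120), (180, 120), (270, 120),
        (0, 60), (90, 60), (180, 60), (270, 60)]
counts = coverage_counts(ring)
print(f"Fewest cameras covering any direction: {min(counts)}")
print(f"Blind spots: {sum(1 for c in counts if c == 0)} degrees")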


