SOUTHAMPTON, England, Feb. 12, 2021 — Researchers from the University of Southampton, in collaboration with a team from San Francisco-based nanotechnology company PointCloud Inc., have developed a scalable 3D lidar imaging system that the collaborators said matches or exceeds the performance and accuracy of most mechanical systems currently in use. The cost-effective device may provide a path to large-volume production of compact, inexpensive, and high-performance 3D imaging cameras for use in robotics, autonomous navigation systems, mapping of building sites to increase safety, and health care.
The integrated system, developed by the university’s Optoelectronic Research Center (ORC) and PointCloud, uses silicon photonic components and CMOS electronic circuits in the same microchip.
Swivel chair and screen at 40 m. Picture taken using a 32- × 16-pixel sensor (2- × 2.5-mm sensor size). Courtesy of PointCloud.
According to Remus Nicolaescu, CEO of PointCloud, all optical imaging systems currently in production are based on focal plane array (FPA) imaging configurations, whether for 2D or 3D imaging. All FPA-based 3D imaging systems to date have been paired with amplitude-modulation ranging.
The system combines an FPA imaging architecture with frequency modulation/coherent detection on an array, Nicolaescu told Photonics Media. Therefore, it essentially harnesses the best of both worlds — the scalability and simplicity of FPAs and the performance of coherent ranging, he said.
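To illustrate the frequency-modulation (FMCW) ranging the system pairs with its focal plane array, the sketch below converts a measured beat frequency to a target range. This is a generic FMCW relation, not the paper's implementation, and the chirp bandwidth and duration used in the example are hypothetical, not the parameters of the Southampton/PointCloud device.

```python
# Illustrative FMCW (frequency-modulated continuous-wave) ranging math.
# Mixing the return with a copy of the outgoing chirp produces a beat
# frequency proportional to the round-trip delay: f_beat = 2*B*R / (c*T).

C = 299_792_458.0  # speed of light, m/s


def range_from_beat(f_beat_hz: float, bandwidth_hz: float, chirp_s: float) -> float:
    """Target range R from beat frequency, chirp bandwidth B, and chirp time T."""
    return C * chirp_s * f_beat_hz / (2.0 * bandwidth_hz)


# Example with hypothetical parameters: a 1 GHz chirp swept over 10 µs,
# and a measured 5 MHz beat tone.
r = range_from_beat(5e6, 1e9, 10e-6)
print(f"{r:.2f} m")  # prints 7.49 m
```

Because range maps to a frequency rather than an echo amplitude, each pixel's distance estimate is largely insensitive to how much light returns, which is part of the performance advantage coherent ranging holds over amplitude-modulation schemes.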
The team reported it demonstrated an array of 512 (32 × 16) pixels, which could be scaled up on an integrated silicon photonics platform.
The team borrowed techniques from communications technology to ensure that stray light, such as sunlight, does not interfere with the light that forms the image, said Graham Reed, a professor of silicon photonics at the ORC.
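The interference rejection Reed describes can be sketched numerically: in coherent detection, mixing the return with a local oscillator concentrates the useful signal at a single beat frequency, while incoherent background such as sunlight spreads across the whole spectrum. The simulation below is only an illustration of that principle; all numbers are made up and do not describe the actual device.

```python
import numpy as np

# Toy model: a coherent beat tone buried in strong incoherent background.
rng = np.random.default_rng(0)
fs = 1_024_000           # sample rate, Hz (illustrative)
n = 4096
t = np.arange(n) / fs
f_beat = 50_000          # beat tone from a single target, Hz (illustrative)

beat = np.cos(2 * np.pi * f_beat * t)       # coherent return
background = 5.0 * rng.standard_normal(n)   # broadband stray light, 5x stronger

# The tone piles up in one FFT bin; the background spreads over ~2000 bins,
# so the target still dominates the spectrum.
spectrum = np.abs(np.fft.rfft(beat + background))
freqs = np.fft.rfftfreq(n, d=1 / fs)
peak = freqs[np.argmax(spectrum)]
print(f"detected beat near {peak:.0f} Hz")  # peak sits at ~50 kHz
```

The same narrowband selectivity is what lets many coherent lidar pixels operate side by side without crosstalk from ambient light.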
“Using this approach, we have a conceptually simple layout of detectors that can scale to either much more dense detector arrays or to much bigger arrays to produce higher-resolution images, and also to capture images at long distances,” Reed told Photonics Media.
One of the design's major advantages in terms of scalability is its use of silicon.
“Our reported device is an all-silicon system, except the laser; all the modulation, beam steering, and detection are integrated monolithically on a silicon chip, which can be manufactured in a commercial foundry,” Nicolaescu said. “The cost for such a silicon chip can be much lower than the existing systems, especially in large production volume.”
Existing systems with similar performance, by contrast, typically require grouping many discrete components in a bulky package, Nicolaescu said. That includes the laser, modulator, photodetector, and the mechanical moving parts used to steer the light. All of those components must be aligned precisely to produce the optical signal.
Those types of designs, Reed said, do not benefit from the economies of scale of silicon photonics, an industry that already manufactures silicon chips at high quality and in large production volumes.
The device developed through the collaboration performs at the same level as existing lidar systems, but in a much smaller package. Tests of the prototype showed an accuracy of 3.1 mm at a distance of 75 m.
The precise performance differences between the researchers' system and existing ones depend on which system it is compared with, Nicolaescu said. Monolithic integration gives the team's device a compact form factor compared with many existing mechanical systems.
The researchers now aim to minimize the optical loss of the whole system and to increase the detection accuracy, Nicolaescu said.
The research was published in Nature (www.nature.com/articles/s41586-021-03259-y).