3D Sensing Bolsters Robotic Guidance

HANK HOGAN, CONTRIBUTING EDITOR, [email protected]

With prices falling and performance rising, 3D vision, or depth sensing, is showing up in new applications, including allowing robots to map spaces and plan tasks, such as figuring out how best to avoid people. Other uses include pick-and-place, combined assembly and inspection, and moving shelves of items from one location to another.

These and other applications depend upon inexpensive and powerful 3D vision, and there are several competing technologies. All have strengths and weaknesses, with different working distances, resolutions, throughput, and price points. Each has a significant market presence, in part because there is no single best solution for every situation.

Artist’s conception of 3D imaging in an automotive application. 3D imaging is playing a key role in pick-and-place, assembly, and inspection. Courtesy of Newsight Imaging.

“Depending on the required performances and market constraints, all these technologies find applications,” said Alexis Debray, a technology and market analyst in optoelectronics at Yole Développement SA.

For 3D vision, the competing technologies include many flavors of lidar and time-of-flight cameras, which send out bursts of light and measure distance by how long it takes to get a return signal. There is also structured light, which determines distance by measuring distortions in a light pattern projected onto an object. Another technique is laser triangulation, which extracts depth from where the laser spot appears in a camera’s field of view. Finally, stereoscopic depth sensing uses the differences between two camera images to calculate distance.
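
To make the distance math concrete, here is a minimal Python sketch of the two simplest cases: time-of-flight, where distance is half the round trip of a light pulse, and stereoscopic sensing, where depth falls off inversely with the disparity between the two images. All parameter values are illustrative rather than drawn from any particular product.

```python
# Minimal sketch of two depth-sensing equations; values are illustrative.

C = 299_792_458.0  # speed of light in a vacuum, m/s

def tof_distance(round_trip_s: float) -> float:
    """Time-of-flight: the pulse travels out and back, so the
    one-way distance is half the round trip."""
    return C * round_trip_s / 2.0

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Stereoscopic sensing: depth is inversely proportional to the
    pixel disparity between the left and right camera images."""
    return focal_px * baseline_m / disparity_px

print(tof_distance(10e-9))              # a 10 ns round trip is about 1.5 m
print(stereo_depth(700.0, 0.10, 20.0))  # 700 px focal, 10 cm baseline: 3.5 m
```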

Except for stereoscopic imaging, these approaches require some form of illumination. This can be a benefit in an industrial setting where lighting is uncertain, but it does increase power consumption, a possible issue when running on a battery.

Power and illumination

Some of these technologies have seen significant improvements in both power and size. For instance, depth-sensing capability that in 2009 demanded a brick-size structured light system consuming 10 W can today be delivered by a thumbnail-size time-of-flight device drawing only 250 mW, said Mitch Reifel, vice president of sales and business development for time-of-flight sensor maker pmdtechnologies AG of Siegen, Germany, and San Jose, Calif.

Evolution of 3D vision, or depth imaging, solutions. Here are three solutions of roughly equal capabilities, using two different methods (structured light and time-of-flight). There has been about a tenfold reduction in all dimensions over the last decade. Courtesy of pmdtechnologies.

Thanks to these advancements, a new application is now possible — robots that move shelving around in large warehouses. Depth sensing can help a robot identify what’s a shelf and what’s not, as well as help keep track of objects on the shelves themselves.

“The shelves are portable,” Reifel said. “The robots pick them up and move them around. The items on the shelves can move, and these 3D sensors, since they’re depth sensors, they can say something slid and is now too close to the edge or not close enough.”
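
As a rough sketch of that shelf-monitoring logic, the fragment below flags items whose depth-derived positions fall within a margin of the shelf edge. The shelf dimensions, threshold, and item positions are all hypothetical.

```python
# Hypothetical shelf-edge check on positions recovered from a depth sensor.

SHELF_DEPTH_M = 0.60   # front-to-back extent of the shelf
EDGE_MARGIN_M = 0.05   # how close to the front edge counts as "too close"

def edge_alerts(item_positions_m: dict[str, float]) -> list[str]:
    """item_positions_m maps item id to distance from the shelf's back
    wall, measured along the depth axis by the 3D sensor."""
    alerts = []
    for item, y in item_positions_m.items():
        if y > SHELF_DEPTH_M - EDGE_MARGIN_M:
            alerts.append(f"{item} is {SHELF_DEPTH_M - y:.3f} m from the edge")
    return alerts

print(edge_alerts({"box_a": 0.30, "box_b": 0.58}))  # box_b gets flagged
```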

Depth sensing is finding increasing applications in robotics, said Garrett Place, who handles business development for robotics perception at ifm efector Inc. of Malvern, Pa. The U.S. company is a subsidiary of Germany’s ifm electronic GmbH, an automation solutions supplier that owns pmdtechnologies.

A new application made possible by better 3D vision is depalletizing. As the name implies, this involves robots taking boxes off a pallet and putting them on a conveyor, a process that takes place when goods enter a facility. A 3D camera makes locating each box quicker, improving the efficiency of the operation.

“We have zero latency between one pick and the next pick other than the robot movement,” Place said. “This is huge. We’re saving 3 to 5 seconds per box.”

Another new application is automated forklifts. Here, the autonomous vehicle must locate the pockets into which it slides its forks. Gathering this information with a 3D camera lets the vehicle find the pockets more quickly and reliably, improving both the speed and safety of the operation.

An emerging use made possible by falling prices can be found in service applications. Suppose a robot arm offers a person a drink or another object. Depth sensing is important in getting the item safely to its target. But as the arm nears its destination, the arm, object, or both may block the view. Placing an inexpensive 3D sensor in the arm could provide a close-up view and therefore improve safety and performance. Doing so requires synchronization and data overlay between the cameras, as well as information fusion with any other sensors.
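
The overlay step can be illustrated with a standard rigid-body transform: points from the arm-mounted camera must be re-expressed in the robot’s base frame before they can be fused with other views. The 4-by-4 extrinsic matrix below stands in for a hypothetical hand-eye calibration, not any vendor’s actual values.

```python
import numpy as np

# Hypothetical hand-eye calibration: pose of the wrist camera
# relative to the robot base (rotation plus translation).
T_base_from_wrist = np.array([
    [0.0, -1.0, 0.0, 0.40],
    [1.0,  0.0, 0.0, 0.10],
    [0.0,  0.0, 1.0, 0.85],
    [0.0,  0.0, 0.0, 1.00],
])

def to_base_frame(points_wrist: np.ndarray) -> np.ndarray:
    """Re-express (N, 3) wrist-camera points in the robot base frame."""
    homogeneous = np.hstack([points_wrist, np.ones((len(points_wrist), 1))])
    return (T_base_from_wrist @ homogeneous.T).T[:, :3]

print(to_base_frame(np.array([[0.0, 0.0, 0.2]])))  # -> [[0.4, 0.1, 1.05]]
```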

Such information fusion could be important when various 3D technologies are employed together, as could be the case when using products from Newsight Imaging of Ness Ziona, Israel. The sensor maker offers laser triangulation-based and enhanced time-of-flight-based solutions, with the goal of creating a point cloud of distance data, said Eli Assoolin, CEO and company co-founder.
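
Producing such a point cloud from per-pixel distance data is a standard back-projection through the camera’s pinhole model. The sketch below assumes hypothetical intrinsics and is not specific to Newsight’s chips.

```python
import numpy as np

FX, FY, CX, CY = 500.0, 500.0, 320.0, 240.0  # illustrative pinhole intrinsics

def depth_to_point_cloud(depth_m: np.ndarray) -> np.ndarray:
    """Back-project an (H, W) metric depth image into (H*W, 3) XYZ points."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)

cloud = depth_to_point_cloud(np.full((480, 640), 2.0))  # flat wall at 2 m
print(cloud.shape)  # (307200, 3)
```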

Hand-held scanner uses structured light technology to capture accurate 3D data out to 4.5 m. Courtesy of Mantis Vision.


The price of gathering such data has dropped significantly, from tens of thousands of dollars for a lidar-based solution years ago to a few hundred dollars today. Solutions implemented with other 3D sensing technologies are even less expensive and could cost as little as tens of dollars, according to Assoolin.

This trend toward lower costs should continue and could even accelerate, as 3D imaging will soon show up on every smartphone. Such technology cannot be directly applied to industrial settings, but the sheer volume of chips could also drive down prices for those bound for nonconsumer applications.

As for what these new depth-sensing solutions could be used for, one possibility is inspection, Assoolin said. Today’s advanced sensors perform a line capture of a few spots at a relatively close distance. The chips that Newsight now provides enable improved inspection and can be deployed in a wider array of settings.

“Our chip is running 40,000 frames per second and is able to detect defects on the scale of a micron,” Assoolin said. “We make everything much faster, much more accurate, and much more affordable.”
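
A quick back-of-the-envelope calculation shows what such a frame rate implies for line-scan inspection; the conveyor speed here is an assumed value, not a Newsight figure.

```python
# At 40,000 line captures per second (per the article), a part moving
# at an assumed 1 m/s is sampled every 25 µm along its length.
frames_per_second = 40_000
conveyor_speed_m_s = 1.0  # hypothetical line speed

spacing_m = conveyor_speed_m_s / frames_per_second
print(f"{spacing_m * 1e6:.0f} µm between successive line captures")  # 25 µm
```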

Falling prices and improved performance enable new 3D imaging applications, such as access control. Courtesy of Mantis Vision.

When considering 3D vision in an industrial setting, the environment can have a significant impact on applications and performance, said Shabtay Negry, chief business officer for Mantis Vision Ltd. in Petah Tikva, Israel. The company provides 3D scanning, imaging, and video capture solutions, based on a patented structured light algorithm that, Negry said, delivers the most accurate models in terms of data quality.

Temperature affects accuracy

3D accuracy can be degraded by temperature changes as well as vibration, both of which can be present on a factory floor or in other industrial settings. Temperature changes tend to be continuous, whereas vibration tends to cause sharp disturbances as engines cycle on and off or objects strike each other. Either can shift the structured light pattern, and this can lead to inaccuracy in the 3D results. So, steps should be taken to counter these effects, Negry said. In the case of Mantis Vision, the company’s products use proprietary methods to do an on-the-fly autocalibration and thereby correct for environmental effects.
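
Mantis Vision’s autocalibration method is proprietary, but the underlying idea can be sketched generically: estimate how far the observed pattern has drifted from a reference capture, then compensate before computing depth. The example below does this for a 1D slice of the pattern using cross-correlation.

```python
import numpy as np

def estimate_shift(reference: np.ndarray, observed: np.ndarray) -> int:
    """Return the integer pixel shift that best aligns two 1D signals."""
    ref = reference - reference.mean()
    obs = observed - observed.mean()
    corr = np.correlate(obs, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

# Generic illustration only: a sinusoidal stand-in for one row of a
# structured light pattern, displaced by simulated thermal drift.
pattern = np.sin(np.linspace(0, 20 * np.pi, 1000))
drifted = np.roll(pattern, 7)
print(estimate_shift(pattern, drifted))  # 7, the drift to correct for
```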

Regarding emerging applications, the development of real-time 3D streaming for gaming and entertainment purposes could also be used for industrial training and simulation. An instructor, for instance, could demonstrate how to perform a task, with multiple 3D and other sensors capturing every move. The system could then replicate the demonstration at a distant location.

“Real-time streaming with multiple synchronized sensors opens up new markets and new communication channels,” Negry said.

Improvements in 3D system performance are indeed expanding the range of possible applications, said Raymond Boridy, project and product manager for the industrial division of Teledyne DALSA Inc. The Montreal-based system maker is launching a line of laser triangulation products. Other Teledyne divisions offer time-of-flight and stereoscopic technology.

Advances in laser triangulation have resulted in increased speeds and improved outcomes, with higher accuracy and greater precision, Boridy said. He added that the company’s products will be able to distinguish objects on a 10-µm scale, which is important when doing a high-precision inspection.

“We are able to … distinguish, for example, scratches or faults or holes or defects that can be as small as 20 µm,” Boridy said.
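
A back-of-the-envelope look at the triangulation geometry shows how a one-pixel shift of the imaged laser line can map to a height change on the 10-µm scale. The optical parameters below are illustrative, not Teledyne DALSA’s actual design.

```python
import math

def height_change(pixel_shift: float, pixel_size_m: float,
                  magnification: float, triangulation_angle_rad: float) -> float:
    """Convert a laser-line shift on the sensor (pixels) into an
    object height change (m) for a simple triangulation geometry."""
    sensor_shift_m = pixel_shift * pixel_size_m
    return sensor_shift_m / (magnification * math.sin(triangulation_angle_rad))

# A 1-pixel shift on a 5-µm-pixel sensor at 1:1 magnification and a
# 30-degree triangulation angle corresponds to a 10 µm height change.
print(height_change(1.0, 5e-6, 1.0, math.radians(30)))  # 1e-05 m
```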

This capability is useful in some settings but not others. For instance, a large part does not need such a fine inspection. It does need repeatability in results, the ability to spot larger defects, and speed. Thus, a 3D vision system must match the application requirements, something impossible to do optimally with a single depth-sensing method. That is why all current methods have substantial market share, with no clear technology leader for all applications.

No matter the 3D sensing technology, though, a complete solution demands more than imaging alone, as simply knowing distance is not enough. That is illustrated by considering the use of depth-sensing lidar technology for autonomous vehicles.

“The data coming from the lidar must be computed, interpreted, and fused with that of other sensors, possibly using artificial intelligence. This is an important challenge, which, if not properly addressed, makes lidar useless,” Yole’s Debray said of the situation, highlighting the need for a complete solution.

Published: September 2018
