
Overcoming Contrast With 3D Imaging in Robotic Vision Applications

Jul 20, 2022
Sponsored by
Teledyne DALSA, Machine Vision OEM Components
About This Webinar
Josh Person discusses a machine vision trend: In many robotic machine vision applications, 3D imaging technology can be superior to traditional 2D solutions. In manufacturing, robotic applications are nothing new, and many of them rely on a camera system to guide the robot and help it make intelligent decisions. Robotic machine vision applications use vision to locate, inspect, or error-proof the workpiece.

The camera imaging technology comes in many types. One very common vision technology is the 2D digital image sensor, which collects light through a lens and produces a two-dimensional array of pixels. Each pixel has a position (row and column) and a value. In a grayscale camera, the value is the brightness, which ranges from 0 to 255. Vision software can analyze a grayscale image based on pixel location and contrast. Color cameras can also provide color information for each pixel, but in robotic machine vision, grayscale cameras are much more common.
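As an illustration only (not code from the presentation), the following sketch shows how a 2D grayscale image can be treated as a row-and-column array of brightness values and how a part can be located by contrast. It assumes NumPy and OpenCV are available, and the image file name is hypothetical:

```python
# Minimal sketch (assumed setup): locate a bright part against a darker
# background in a 2D grayscale image using a simple contrast threshold.
import cv2
import numpy as np

# "part_on_conveyor.png" is a hypothetical image from a 2D grayscale camera.
image = cv2.imread("part_on_conveyor.png", cv2.IMREAD_GRAYSCALE)

# Each pixel is addressed by (row, column) and holds a brightness value 0-255.
print("image shape (rows, cols):", image.shape)
print("brightness at row 100, col 200:", image[100, 200])

# Threshold on brightness: this only works if lighting creates contrast
# between the part and the background.
_, mask = cv2.threshold(image, 128, 255, cv2.THRESH_BINARY)

# Find the part's outline and its centroid from the binary mask
# (assumes the largest bright region is the part).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)
m = cv2.moments(largest)
cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
print(f"part centroid at column {cx:.1f}, row {cy:.1f}")
```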

Another increasingly common vision technology is 3D imaging. Most 3D imaging technologies output some type of point cloud, which is a cluster of xyz points. Vision software can analyze the point cloud based on the xyz location of each point.
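For illustration only, here is a minimal sketch of analyzing a point cloud by xyz location, assuming the cloud is already available as an N x 3 NumPy array of points in the camera frame; the file name and distances are assumptions, not values from the webinar:

```python
# Minimal sketch (assumed setup): a 3D sensor has produced a point cloud
# as an N x 3 NumPy array of xyz points, in millimeters, in the camera frame.
import numpy as np

points = np.load("point_cloud.npy")  # hypothetical file; shape (N, 3)

# Analyze points by their xyz locations: keep only points that sit at least
# 5 mm above the conveyor surface, assumed here to lie 800 mm from the camera.
conveyor_z = 800.0
part_points = points[points[:, 2] < conveyor_z - 5.0]

# The centroid of the remaining points gives a rough 3D part location.
centroid = part_points.mean(axis=0)
print("approximate part position (x, y, z):", centroid)
```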

Many vision-guided robot applications do not require 3D because, thanks to gravity, most workpieces handled by the robot sit flat on a conveyor, fixture, or work surface. Applications where the workpiece does not sit flat typically require a 3D camera system so that the 3D position of the workpiece can be found.

Because a typical 2D image relies on each pixel’s brightness, lighting is a major consideration. Lighting techniques must create contrast between the target and its background, which can be a challenge in applications where the robot must locate and handle, for example, a gray part on a gray conveyor belt. Using a 3D camera with its xyz point cloud is a great way to overcome the contrast issue. Machine vision software can take the 3D point cloud and create a pseudo 2D image in which the z value of each point is transformed into a grayscale value. In the pseudo 2D image, darker pixels represent points farther from the camera, and brighter pixels represent points closer to the camera. The vision software can then use this pseudo image to locate the target or workpiece. With a 3D image sensor, special lighting to create contrast between the part and the background is not necessary.
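As a hedged illustration of the idea described above (not code from the webinar), the sketch below converts a depth map, i.e., a 2D array of z values, into a pseudo 2D grayscale image in which closer points are brighter and farther points are darker. The file name and working distance range are assumptions:

```python
# Minimal sketch (assumed setup): build a pseudo 2D grayscale image from a
# depth map, a 2D array of z values (distance from the camera, in mm).
import cv2
import numpy as np

depth = np.load("depth_map.npy")  # hypothetical file; shape (rows, cols)

# Map the working z range to 0-255 so closer points become brighter and
# farther points become darker, as described above.
z_near, z_far = 600.0, 900.0                 # assumed working distance range
scaled = (z_far - depth) / (z_far - z_near)  # 1.0 at z_near, 0.0 at z_far
pseudo = np.clip(scaled * 255.0, 0, 255).astype(np.uint8)

# The pseudo image has contrast wherever there is a height difference,
# regardless of the part's and conveyor's actual gray levels, so standard
# 2D tools (thresholding, blob analysis, template matching) can locate the part.
_, mask = cv2.threshold(pseudo, 128, 255, cv2.THRESH_BINARY)
cv2.imwrite("pseudo_2d.png", pseudo)
```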

Person shows how 3D imaging sensors can make many applications easier to set up and maintain, as well as more robust, compared with 2D cameras.

This presentation premiered during the 2022 Vision Spectra Conference. For more information on Photonics Media conferences, visit events.photonics.com.

About the presenter:
Josh Person is a staff engineer in the machine vision group of FANUC America’s General Industries and Automotive Segment. He is responsible for supporting FANUC’s integrated robot vision, iRVision, for 2D, 3D, and inspection applications. He joined FANUC in 1996 and has focused on the company’s vision products since 2003. Person has a Master of Science in engineering management and a Bachelor of Science in automated manufacturing.
3D vision, machine vision, robotics, Vision Spectra, vision-guided robotics, Sensors & Detectors