
For Carmakers, Inspection Flexibility Makes the Difference

Innovations in vision systems and machine learning may be the key to simplifying auto inspections in the future as new technologies emerge.

HANK HOGAN, CONTRIBUTING EDITOR

Building a car is a complex process, and it’s getting more complex every year. There are thousands of parts to inspect for quality and conformance to ever tighter specifications. What’s more, these parts must be tracked, and variations in color and finishes must be verified.

Using multiple smart cameras is an effective way to add more inspection stations on an assembly line. Courtesy of Teledyne DALSA.

This array of requirements presents challenges to vision systems. Fortunately, advancements in lighting, 3D imaging, resolution, and processing power offer ways to meet these demands. Also, innovations such as flexible positioning and AI-based image processing promise greater performance in future vision solutions.

Today, AI-based tools enable vision systems to make the types of distinctions between good and bad parts that people usually do. This widens the range of inspections that can be automatically carried out.

“We can use artificial intelligence and machine learning to be able to pick up nuances that are much more difficult to apply through the traditional classical tools that are available in machine vision,” said Harry Kekedjian, controls engineering manager at the Advanced Manufacturing Center of Ford Motor Co. of Dearborn, Mich.

Walter LaPlante, a systems engineer at Ford Motor Co., works with a collaborative robot that operates safely alongside people without needing a protective cage. Such robots avoid people and objects through the benefit of 3D vision. Courtesy of Ford Motor Co.

AI can make up for deficiencies that stymie traditional machine vision. For example, artificial intelligence can compensate for less than ideal lighting and a lack of resolution by highlighting critical yet subtle distinctions. This capability makes it possible to reliably carry out an inspection that otherwise would be difficult or impossible to perform.

Still, Kekedjian noted, AI techniques should be considered only part of the total toolbox, as they present challenges. For instance, machine learning solutions must have a large and representative training set of example images of both good and bad parts. These can sometimes be troublesome to acquire, particularly if a manufacturing line is producing few bad parts.

Sometimes the distinction between good and bad can be made through simpler means, such as precisely measuring the distance between two features. Or, an AI-based specialized filter could be combined with a classical tool to improve application reliability.
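As a rough illustration of that simpler route, the sketch below (with hypothetical feature positions and tolerances, not Ford's actual tooling) flags a part when the measured spacing between two located features drifts outside its tolerance band.

```python
import numpy as np

def check_feature_spacing(feature_a, feature_b, nominal_mm, tol_mm):
    """Classical good/bad test: compare the measured distance between two
    located features against a nominal value and a tolerance band.

    feature_a, feature_b: (x, y) positions in millimeters, e.g. hole centers
    returned by a blob-finding or pattern-matching tool.
    """
    measured = float(np.hypot(feature_b[0] - feature_a[0],
                              feature_b[1] - feature_a[1]))
    is_good = abs(measured - nominal_mm) <= tol_mm
    return is_good, measured

# Hypothetical example: two bolt holes that should sit 85.0 mm apart, +/- 0.2 mm.
ok, dist = check_feature_spacing((12.4, 30.1), (97.2, 31.0),
                                 nominal_mm=85.0, tol_mm=0.2)
print(f"distance = {dist:.3f} mm, pass = {ok}")
```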

Vision-guided robotics

Another important trend involves vision and robot guidance, Kekedjian said. Here, a vision system mounted on the end of a robot arm, for example, provides information that alters the trajectory of the arm. This information can be used to avoid a collision or to move a camera around an object, allowing for flexibility during inspections. With this approach, a single camera can do the work of many, and it becomes easy to switch from manufacturing one car model or part to another.

The popularity of vision-guided robotics is growing, enough so that ABB Robotics, a global manufacturer of industrial robots and robot systems, has established a vision center of excellence in Barcelona, Spain, to advance development and applications of the technology. The company’s 3D sensor technology uses structured light, which sends out a pattern of light and determines X, Y, and Z information from the way a reflection from an object distorts the pattern.
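ABB has not published the internals of its sensor, but the principle can be sketched with textbook triangulation: each element of the projected pattern lands on the camera sensor shifted by an amount that depends on depth, and that shift, together with the projector-camera baseline and the focal length, yields X, Y, and Z. All numbers below are illustrative.

```python
import numpy as np

def triangulate_point(u, v, disparity_px, f_px, baseline_mm, cx, cy):
    """Textbook structured-light/stereo triangulation (illustrative only).

    u, v          : pixel coordinates where a projected pattern element is seen
    disparity_px  : how far the element shifted from its reference position
    f_px          : focal length expressed in pixels
    baseline_mm   : projector-to-camera baseline
    cx, cy        : principal point of the camera
    """
    z = f_px * baseline_mm / disparity_px   # depth from the pattern shift
    x = (u - cx) * z / f_px                 # back-project to metric X
    y = (v - cy) * z / f_px                 # back-project to metric Y
    return np.array([x, y, z])

# Hypothetical numbers: 2400 px focal length, 150 mm baseline, 600 px shift.
print(triangulate_point(u=1010, v=620, disparity_px=600.0,
                        f_px=2400.0, baseline_mm=150.0, cx=960.0, cy=600.0))
```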

“What is really important is the accuracy and repeatability,” said Sergio Martin, business manager for ABB Robotics Spain. “We are talking about microns accuracy.”

ABB’s structured light approach, he added, pumps out 3000 lumens, equivalent to the output of a bright bulb or a strong set of headlights. This eliminates the problem of varying lighting caused by the environment, which in a factory may be due to the absence or presence of nearby welding or other bright sources. The company’s technology ensures that the quality of the lighting is constant, a critical factor in achieving repeatable results, Martin said.

A growing trend in auto manufacturing inspection, 3D vision improves quality checks as well as the ability of robots to pick up random parts and avoid objects. Courtesy of ABB Robotics.

ABB’s technology captures data from 5 million points in three or so seconds, according to Jorge Rodriguez, manager of ABB’s 3D Vision & Metrology Application Center. Calculating the 3D coordinates of the points takes several more seconds. Such speed is essential in an automotive manufacturing environment because there may be only 30 seconds, at most, to perform an inspection and decide whether a part is good or bad. A similarly short time may be all that’s allowed for locating a part, determining its orientation and type, and then picking it up for placement and assembly.
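Those figures translate into a tight time budget. A back-of-the-envelope tally, using the acquisition and reconstruction times quoted above plus assumed values for analysis and robot motion, shows how quickly a 30-second window fills up.

```python
# Rough cycle-time budget for a 3D inspection station (illustrative figures).
# Acquisition and reconstruction times are the ones quoted in the article;
# the analysis and robot-motion entries are assumptions for this sketch.
budget_s = 30.0
steps_s = {
    "acquire ~5 million points":          3.0,   # quoted: "three or so seconds"
    "compute 3D coordinates":             5.0,   # quoted: "several more seconds"
    "defect analysis / compare to model": 8.0,   # assumed
    "robot repositioning between views": 10.0,   # assumed
}
used = sum(steps_s.values())
print(f"used {used:.0f} s of {budget_s:.0f} s budget, margin {budget_s - used:.0f} s")
print(f"point throughput ~ {5e6 / steps_s['acquire ~5 million points']:.1e} points/s")
```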

In addition, a rapid 3D-sensing capability may be critical as robots move from behind protective barriers to settings where objects and people freely move about nearby.

A robot equipped with two cameras performs a flexible inspection on an engine and chassis. Courtesy of Ford Motor Co.

“In the future, we really believe robots will need to work in collaboration with humans,” Rodriguez said.

Human-robot interactions

A synergistic working relationship between humans and robots may be the future of auto manufacturing and may push vision technology forward. But another important driver for vision is increasingly tight tolerances, said James Reed, vision product manager at LEONI Engineering Products and Services (LEONI EPS) in Lake Orion, Mich. As a machine vision system integrator, LEONI EPS is part of LEONI AG, which has its global headquarters in Nuremberg, Germany.


“The differentiators between good and bad are smaller and smaller now. So, your setup and software and everything else is much more susceptible to false failures and false passes,” Reed said of the effects of achieving tighter specifications.

An engine block (top) must be inspected for pores (bottom) and other defects. Flexible camera positioning solves this challenging vision task. Courtesy of LEONI Engineering Products and Services.

The solution is to set the good/bad criteria so that some good parts will fail, but no bad parts will get through. An increase in image sensor resolution and better lighting can improve vision performance so fewer and fewer parts get miscategorized.
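One way to formalize “some good parts fail, but no bad parts get through” is to place the decision threshold just below the weakest-scoring known-bad part in a validation set and accept whatever false-reject rate results. The sketch below uses made-up defect scores.

```python
import numpy as np

def pick_threshold(scores_good, scores_bad, margin=0.0):
    """Choose a reject threshold on a defect score so that every known-bad
    sample is rejected; any good sample scoring at or above it becomes a
    false reject. Higher score = more defect-like. Illustrative only."""
    threshold = min(scores_bad) - margin   # reject at or above the weakest bad sample
    false_rejects = int(np.sum(np.asarray(scores_good) >= threshold))
    return threshold, false_rejects

# Hypothetical defect scores from a validation run.
good = [0.05, 0.12, 0.31, 0.44, 0.58]
bad  = [0.52, 0.71, 0.93]
thr, fr = pick_threshold(good, bad, margin=0.02)
print(f"threshold = {thr:.2f}, good parts sacrificed = {fr} of {len(good)}")
```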

There are other ways to wring out further classification improvements. For instance, taking multiple images with lighting at different angles can improve contrast. 3D scanning, whether by structured lighting, time-of-flight ranging, or another technique, offers a further way to extract information and distinguish good parts from bad.
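As a crude software-side illustration of the multi-angle lighting idea, the per-pixel spread across frames lit from different directions highlights relief features that any single frame may wash out; production tools use more rigorous photometric methods.

```python
import numpy as np

def relief_map(images):
    """Given frames of the same part lit from different angles, the per-pixel
    spread across the stack emphasizes surface relief (scratches, pores) that
    a single frame may miss. Illustrative only."""
    stack = np.stack([img.astype(np.float32) for img in images], axis=0)
    return stack.max(axis=0) - stack.min(axis=0)  # bright under one light, dark under another

# Usage (hypothetical frames): relief = relief_map([north, east, south, west])
```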

BOA smart cameras from Teledyne DALSA can perform part identification and ensure traceability throughout the product life cycle. Courtesy of Teledyne DALSA.

An example of a higher performance requirement can be found in a vision inspection system LEONI EPS designed for a new engine assembly line. The inspection system had to find pores, scratches, and other visual defects as small as a millimeter on the engine block, and as small as 0.8 mm on the cylinder head, depending on the location of the part. This was on machined and cast aluminum surfaces, which can be highly reflective and therefore difficult to visually inspect.
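The 0.8-mm figure effectively sets the optics. Using a common rule of thumb of roughly three pixels across the smallest defect (an assumption here, not a LEONI EPS specification), the largest field of view a given sensor can cover per image follows directly.

```python
def max_field_of_view_mm(sensor_px, min_defect_mm, px_per_defect=3):
    """Largest field of view (along one axis) that still places
    `px_per_defect` pixels across the smallest defect of interest."""
    return sensor_px * min_defect_mm / px_per_defect

# Example: a 12-megapixel sensor of roughly 4096 x 3000 px, looking for 0.8 mm pores.
print(f"{max_field_of_view_mm(4096, 0.8):.0f} mm across")  # ~1092 mm
print(f"{max_field_of_view_mm(3000, 0.8):.0f} mm high")    # ~800 mm
```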

LEONI EPS solved this problem by mounting cameras on the end of one robot arm to provide the best camera orientation for each inspection point, with the arm moving from point to point. This also allowed for the flexible addition of further inspection points, if needed.

An inspection of valve assemblies designed to regulate the flow of gasoline into the carburetor. The main dial rotates so each completed assembly is moved within view of two Teledyne DALSA Genie Nano cameras and analyzed for height, alignment, concentricity, and runout. Courtesy of Matrix Design.

Cognex Corp. of Natick, Mass., supplied the cameras. Their fast processing speed and patented algorithms enable the company’s vision solutions to better deal with real-world variations in parts and the environment, said Paul Klinker, a sales manager who handles global automotive accounts for Cognex.

Another boost to performance comes from higher resolution, such as from the 12-MP cameras Cognex now offers. Adding an innovative high-dynamic-range (HDR) camera increases performance as well.

“HDR basically allows the camera to see a greater range of dark and light in the image. This allows the camera to pull out details in the darkest and lightest areas,” Klinker said.
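Single-shot HDR sensors do this on-chip, but the effect can be approximated in software by fusing a bracketed exposure series, for example with OpenCV’s exposure-fusion routine. This is a generic sketch with placeholder file names, not Cognex’s implementation.

```python
import cv2
import numpy as np

# Bracketed exposures of the same scene (hypothetical file names).
exposures = [cv2.imread(name) for name in ("dark.png", "mid.png", "bright.png")]

# Mertens exposure fusion needs no exposure-time metadata and returns a
# float image in roughly [0, 1] that keeps detail in shadows and highlights.
fusion = cv2.createMergeMertens().process(exposures)
hdr_like = np.clip(fusion * 255, 0, 255).astype(np.uint8)
cv2.imwrite("fused.png", hdr_like)
```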

The automotive industry was one of the first to embrace machine vision. Courtesy of Teledyne DALSA.

AI-based tools, 3D vision, and techniques such as mounting a camera on the end of a robot will help address complexity in auto manufacturing. These methods will assist in situations where car parts of a variety of shapes and configurations — with holes, knobs, studs, and more — are present. There also are myriad materials in the mix, including aluminum, steel, fabric, and plastics, noted Steve Zhu, director of sales for Asia at Teledyne DALSA Inc., whose global headquarters is located near Montreal.

What’s more, new and emerging automotive technologies — such as lithium battery-powered or hydrogen-powered cars — may demand new vision inspections. Consider that the battery in an electric car today accounts for a significant part of the total cost of the car. So, ensuring that batteries have no defects that will shorten their life is important, and vision systems play a part in making this happen, Zhu said.

Given the wide variety of parts, shapes, and surfaces to be inspected, as well as the desire by carmakers to let no defects through, it may be impossible to carry out successful inspections with a single camera. Sometimes multiple cameras are necessary, a situation that adds its own complexity because the cameras must be synchronized.
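Hardware triggering usually handles the synchronization itself, but the software still has to associate frames from different cameras with the same part. A simple software-side pairing by timestamp, offered purely as an illustration rather than any vendor’s API, might look like this:

```python
def pair_frames(frames_a, frames_b, max_skew_s=0.005):
    """Match frames from two cameras by nearest timestamp.

    frames_a, frames_b: lists of (timestamp_s, frame) tuples, sorted by time.
    Frames more than max_skew_s apart are treated as unmatched.
    """
    pairs, j = [], 0
    for t_a, f_a in frames_a:
        # Advance j to the frame in frames_b whose timestamp is closest to t_a.
        while (j + 1 < len(frames_b) and
               abs(frames_b[j + 1][0] - t_a) < abs(frames_b[j][0] - t_a)):
            j += 1
        t_b, f_b = frames_b[j]
        if abs(t_b - t_a) <= max_skew_s:
            pairs.append((f_a, f_b))
    return pairs
```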

When it comes to these relatively new technologies, such as lithium batteries, there can be differences in vision capabilities to consider. Today, a solution that mixes 2D and 3D sensing must account for the higher spatial resolution that 2D vision often provides as compared to 3D. Expertise must therefore be applied to the inspection problem to account for these factors. The ultimate solution can put machine learning and other technologies to work in tandem, thereby producing results that are accurate and repeatable. Still, any such solution must build up from the basics.

“You have to see the features of interest first before you’re able to do some analysis to recognize if [parts are] good or bad,” Zhu said.

Published: July 2019
