Alan Eddy is a machine vision specialist at Tensor ID, a system integrator that develops solutions to difficult manufacturing problems. For large automakers, the company has devised vision systems that gauge spark plug gaps, inspect piston rings, and determine the quality of finishes on cars. Recently, though, Tensor ID has been called upon to solve a whole new category of problems as automakers confront the demands of mass-producing a new technology.

Four high-resolution cameras inspect an electric vehicle battery pack. AI improves the inspection performance. Courtesy of Tensor ID.

“Over the last five years, we’ve done a lot of work with electric car companies,” Eddy said. “For that we use cameras, a lot of Teledyne DALSA high-resolution cameras, for inspecting the big battery packs that have 300+ batteries in a pack. They call it a clamshell.”

He added that the batteries power the cars and are checked for defects such as dents and for the location of the positive and negative terminals (polarity). Automakers seal the clamshell only after the batteries pass inspection because defects can degrade battery performance and operating lifetime, and flaws can lead to expensive warranty repairs, among other costs.

The increasing activity Eddy has seen in electric vehicles should continue to accelerate. In 2023, analysts at the research company BloombergNEF forecast that there would be more than 100 million electric vehicles on the roads by 2026 and more than 700 million by 2040. For comparison, there were only 27 million electric vehicles on highways at the beginning of 2023. Passenger electric vehicle sales will double to 22 million by 2025, representing 26% of sales, according to BloombergNEF, and will almost double again by 2030, reaching 42 million vehicles and 44% of sales.

Even with new AI-based techniques, using the basics of machine vision lighting, optics, and sensors to attain the best images is still helpful. Courtesy of Smart Vision Lights.

Dennis Sutton is a training manager with the BizLink Group’s automation systems training, which teaches automobile manufacturing customers how to use robots, vision, and other automation technologies most effectively. He pointed to another technology trend: lightweighting in response to mandated increases in fuel economy. “There is a move away from traditional steel to aluminum and composite materials. With this move, there has been a high usage of glues and adhesives along with nontraditional joining applications that require a high degree of quality inspections,” he said.

Predictive AI for inspections

Machine vision users have a new tool at their disposal for meeting the needs of these new inspections: machine learning-based predictive AI. It can be useful when the task involves detecting a battery dent or a rust spot, Eddy said. Sutton said that AI allows more flexibility in assembly and adds efficiency compared with previous vision systems. However, the fundamentals of the right lighting, the appropriate sensor, proper optics, and sufficient processing power still apply.

Teledyne’s Astrocyte AI tool displays location heat maps of scratched and defective areas on a metal surface, a common inspection requirement in automobile manufacturing. Courtesy of Teledyne DALSA.

“The camera is a raster scanning device gathering light, and the light determines the image,” said Steve Kinney, director of training, compliance, and technical solutions at Smart Vision Lights. The optimum lighting depends on the task, Kinney said.
For instance, measuring the size of an object, such as a coin, may be performed most effectively with a backlight. To the eye, a coin illuminated from the back appears black, with no visible features. But this approach yields the most exacting measurement of the coin’s size because the light does not bend around the edges, as it does when the object is lit from the front while its diameter is being determined, Kinney said. A minimal sketch of such a silhouette measurement appears below.

Gauging the size of a bolt, measuring the width of a hole, and counting parts are examples of tasks that are well suited to traditional, rules-based machine vision. The output of the vision system might be a number, with the part passing or failing depending on a measurement.

Other automotive manufacturing tasks, such as inspecting a painted surface for a bubble, are more challenging, said Szymon Chawarski, product manager at Teledyne DALSA. People can spot a bubble easily, but programming a rules-based vision system to do the same is difficult because of varying contours and reflections. Predictive AI models, though, can be trained to match human performance in determining whether an area contains a bubble, he said.

A defect in an airbag. The detection of flaws on certain backgrounds, such as fabric, can be improved by training AI with a representative image database of good and bad parts. Courtesy of Cognex.

A scratch in an electrode coating. The growing number of electric vehicles means automobile manufacturers must complete new inspections, and they may turn to AI to carry out these inspections. Courtesy of Cognex.

Another area in which AI excels is optical character recognition (OCR), Chawarski said. For example, characters are stamped onto cars for vehicle identification numbers, and lettering is printed on small plates. Correctly identifying a letter on an uneven, shiny metal part can be hard for a rules-based vision system. “OCR is an area AI is really good at. We are seeing a lot of OCR algorithms that are better than anything we ever had before,” Chawarski said.

For OCR, predictive AI is trained with images of characters, such as an A, along with images without characters or with different characters, such as a B. For spotting bubbles on painted surfaces, predictive AI is trained with images of surfaces with and without bubbles. The model learns by adjusting the weights, or importance, of its neurons: each forward pass produces a prediction, and feedback on the error is used to update the weights. These adjustments continue until the results reach the desired threshold: for example, the model recognizes an A, other letters, or a bubble with sufficient accuracy and speed.

While generative AI, such as ChatGPT and other large language models, is allowed to be wrong, that is usually not the case for predictive AI, according to Chawarski. Often, the accuracy needs to be close to 100%, particularly in automotive applications, in which an overlooked flaw can result in expensive scrap or, worse yet, injury or death if a vehicle makes it onto the road with a critical defect.

Further, Chawarski said that AI must have images to work with, and those images must be reliable examples of good and bad parts for the application. The number of images required in the training data set depends on the quality of the images and the visibility of the defects, flaws, or items of interest. Consequently, with the correct lighting, sensor, and optics, fewer images are required.
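To make that training loop concrete, the following is a minimal sketch in Python, assuming a PyTorch environment, a small pretrained backbone, and folders of labeled “good” and “bad” surface images. The directory layout, model choice, and 98% stopping threshold are illustrative assumptions, not the workflow of any particular vendor tool.

```python
# Minimal sketch: train a small classifier to label surface images as
# "good" or "bad" (e.g., bubble / no bubble). Folder names, model size,
# and the stopping threshold are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumed directory layout: dataset/train/good, dataset/train/bad, etc.
train_set = datasets.ImageFolder("dataset/train", transform=tfm)
val_set = datasets.ImageFolder("dataset/val", transform=tfm)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)

# Small pretrained backbone with a two-class head (good vs. bad).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(20):
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)  # forward pass: prediction and error
        loss.backward()                        # feedback: propagate the error
        optimizer.step()                       # adjust the neuron weights

    # Check validation accuracy and stop once the desired threshold is reached.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in val_loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    accuracy = correct / total
    print(f"epoch {epoch}: validation accuracy {accuracy:.3f}")
    if accuracy >= 0.98:  # illustrative acceptance threshold
        break
```

In practice, the stopping threshold and the mix of good and bad examples would be tuned to the near-100% accuracy targets described above.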
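Kinney’s backlight example, by contrast, maps directly onto traditional, rules-based measurement. The sketch below is a minimal illustration, assuming OpenCV, an image file named backlit_coin.png, and a millimeters-per-pixel factor obtained from a prior calibration; the nominal diameter and tolerance are placeholders.

```python
# Minimal rules-based sketch: gauge a backlit coin's diameter from its
# silhouette and apply a pass/fail tolerance. The file name, calibration
# factor, and tolerance values are illustrative assumptions.
import cv2

MM_PER_PIXEL = 0.05   # assumed calibration from a reference target
NOMINAL_MM = 24.26    # e.g., a U.S. quarter
TOLERANCE_MM = 0.10

image = cv2.imread("backlit_coin.png", cv2.IMREAD_GRAYSCALE)

# With a backlight the coin is a dark silhouette on a bright field,
# so a simple threshold separates it cleanly.
_, mask = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Take the largest contour as the coin and fit a circle to it.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
coin = max(contours, key=cv2.contourArea)
(_, _), radius_px = cv2.minEnclosingCircle(coin)

diameter_mm = 2 * radius_px * MM_PER_PIXEL
passed = abs(diameter_mm - NOMINAL_MM) <= TOLERANCE_MM
print(f"diameter: {diameter_mm:.2f} mm -> {'PASS' if passed else 'FAIL'}")
```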
AI complements traditional vision tools, which are still the best method for making exact measurements, according to Michael Chee, senior manager of product marketing for the vision software product line at Cognex. AI, however, makes it possible to handle tasks that previously were effectively impossible to manage, he said. The rough and variable texture of a fabric surface, for example, makes it difficult for engineers to come up with a rule that detects a scratch. This task, though, can be handled with AI when it is trained with the right image data set. “If you teach it with enough data, it will know enough to be able to differentiate the textured background from a foreign foreground object, like a defect,” Chee said. He added that using AI for surface defect inspections can be more effective when it is paired with computational lighting technology that renders defects more visible.

These applications involve the inspection of products that are moving through a manufacturing process, so systems must complete the task fast enough to keep pace with product flow. For some high-speed processes, such as the inspection of the coating and surfaces of battery electrode sheets, achieving a short enough inspection-and-decision time requires industrial line-scan cameras connected to industrial PCs. Chee said that meeting the specifications for these components is critical to the performance, lifetime, and safe operation of batteries and electric vehicles.

AI on smart cameras

Smart cameras, which Teledyne’s Chawarski described as embedded edge vision systems, have more processing power than they did in the past. Accompanying this increase in capability has been the development of leaner inference models, which are used to classify parts as good or bad or to detect defects. Because these leaner models do not demand as much processing power, it is now possible to run AI on smart cameras in some applications, Chawarski said.

For maximum accuracy, though, the best practice is to train the AI on an image data set and to develop the inference model on a processor that is more powerful than the one found in a smart camera, according to Chawarski. Once developed, the inference model can in some cases be refined to run on a less powerful processor.

The combination of AI, smart cameras, and robotics makes it easier to overcome manufacturing problems. According to Sutton, one manufacturer used a vision system to inspect electrical connections to ensure that they were all in place, but an engineering change put the connections outside the vision system’s field of view, causing inspection failures and a loss of production. “With smart vision paired with robotics and AI, this could potentially be avoided,” Sutton said.

AI does place additional demands on newly deployed vision systems. Part of any deployment involves validation, which confirms that defects are detected and bad parts are rejected; at the same time, the system cannot classify too many good parts as bad or too many bad parts as good. The validation of a rules-based system is a well-established process, Chawarski pointed out. AI, though, may be used to handle inspections that were previously performed by humans, and these tasks tend to involve some subjectivity. An AI vision system may identify something as a bubble that a human inspector would not, and vice versa.
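What that validation step might look like can be sketched in a few lines, assuming the AI system’s pass/fail decisions are compared against human reference labels on a held-out set of parts. The label format and the example data below are invented for illustration.

```python
# Minimal sketch: score an AI inspection model against human reference labels
# on a held-out validation set. Label values, example data, and any acceptance
# targets are illustrative assumptions.

def validate(human_labels, ai_labels):
    """Each label is 'good' or 'bad'; the two lists are aligned per part."""
    pairs = list(zip(human_labels, ai_labels))
    false_rejects = sum(1 for h, a in pairs if h == "good" and a == "bad")  # good part scrapped
    false_accepts = sum(1 for h, a in pairs if h == "bad" and a == "good")  # escaped defect
    agreement = sum(1 for h, a in pairs if h == a)
    n = len(pairs)
    return {
        "accuracy": agreement / n,
        "false_reject_rate": false_rejects / n,
        "false_accept_rate": false_accepts / n,
    }

# Example: 10 parts inspected by both a human and the AI system.
human = ["good", "good", "bad", "good", "bad", "good", "good", "bad", "good", "good"]
ai    = ["good", "good", "bad", "bad",  "bad", "good", "good", "good", "good", "good"]

print(validate(human, ai))
# The parts on which the human and the AI disagree are the "gray area"
# that the process design must account for before production.
```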
There will therefore be gray areas pertaining to bubbles, scratches, and other defects on which machines and humans do not agree. A good manufacturing process design takes this gray area into account and validates AI inspections based on their real-world performance, Chawarski said. “You need to be at that 99.9% accuracy, provable, in order for something to be useful in a factory,” he said.

Finally, Tensor ID’s Eddy does not see AI taking over every automotive vision task, but he does see it playing an important role. “That 10% of what automakers couldn’t do before with traditional machine vision — AI will bridge that gap,” he predicted.