NICK TEBEAU, LEONI ENGINEERING PRODUCTS & SERVICES INC.
Integrators and operations managers must weigh numerous choices when implementing turnkey vision systems in industrial environments. Depending on the application, systems generally fall into four broad categories: single or multiple smart cameras; cameras interfaced to PC-based systems; robot guidance systems that use a robot controller to perform the tasks generally relegated to a PC; and custom libraries-based solutions. While these classifications are broad and their functionality often overlaps, a turnkey system can be built with any of these technologies.
Communication software allows vision systems running Matrox Imaging software to directly interface, over an Ethernet link, with controllers from ABB and FANUC. Courtesy of Leoni.
Smart cameras
The growing capability of smart camera systems means they can serve as a dedicated solution for a very specific application. Because smart cameras combine vision sensors, image processing, and more in a compact package, they are ideal for applications ranging from optical character recognition (OCR) on pharmaceutical packaging to scratch detection on electromechanical assemblies. A smart camera can trigger lighting products, store multiple images, and interface to external I/O peripherals such as motor controllers. Smart camera vendors often offer ranges of products that incorporate different types of CCD or CMOS image sensors, processors, memory, external triggering, and on-board storage. This allows developers to choose from a number of software-compatible products so that, should a system need upgrading or reconfiguring to perform additional functions, the change can be accomplished relatively easily.
Because smart cameras are programmable, they can be used for applications such as presence/absence inspection, feature location, part identification, and measurement and gauging. While some smart cameras are offered as turnkey systems that perform specific tasks, such as barcode reading and verification, many others can be adapted to any number of applications. They can detect the correct number of laundry-soap pods in a package; identify the style, diameter, and color of a wheel in automotive inspection; and verify the proper placement of components on a printed circuit board.
For these applications, smart camera manufacturers offer products with their own development systems, allowing systems integrators to configure the camera to meet the needs of the application, often by first building the inspection program through a graphical user interface (GUI) on a PC and then downloading it to the smart camera for initial testing and deployment.
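The vendor's GUI tools hide the underlying image processing, but the logic being configured is often straightforward. As a rough illustration only, the following sketch uses OpenCV (version 4.x assumed) to perform a presence/absence count of the kind used to verify the number of soap pods in a package; the threshold, blob size, and expected count are assumed values, not parameters from any particular smart camera.

```python
import cv2

# Illustrative presence/absence count (assumed thresholds and counts;
# not code from any particular smart camera vendor).
EXPECTED_COUNT = 12        # pods expected per package (assumed)
MIN_BLOB_AREA = 500        # ignore specks smaller than this, in pixels (assumed)

def count_pods(image_path):
    """Count bright blobs in a grayscale image and compare to the expected count."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Otsu's method picks the binarization threshold automatically
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = [c for c in contours if cv2.contourArea(c) >= MIN_BLOB_AREA]
    return len(blobs), len(blobs) == EXPECTED_COUNT

# found, ok = count_pods("package.png")
# print(f"Found {found} pods -> {'PASS' if ok else 'FAIL'}")
```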
In some cases, smart camera vendors only offer the machine vision software they have developed. While this may, at first, seem somewhat limiting, it ensures that the manufacturer can fully support the customer should any development assistance be required.
At the same time, vendors of smart cameras that use general-purpose x86 processor derivatives and their own machine vision software in their products have realized that they can leverage this software into third-party smart camera offerings. Thus, many established machine vision camera companies and independent machine vision software vendors now allow third-party smart camera vendors to incorporate their software. While software support from third-party smart camera vendors may be limited, this lets developers choose from a range of smart cameras and both commercially available and open-source software, so they can develop the most effective solution for their machine vision applications.
Power to the PC
Despite their many advantages, smart cameras are limited in their processing capability, sensor choices, rate of image capture, I/O, and storage. Therefore, for high-speed imaging applications where data must be captured and processed rapidly, PC-based systems are often used in turnkey solutions. For example, CMOS image sensors of 5120 × 5120 10-bit pixels are now used in some machine vision applications at frame rates of 80 fps, producing a data bandwidth of roughly 21 Gb/s (about 2.6 GB/s). Running cameras at these speeds requires very high-speed interfaces such as CoaXPress (CXP) and PCIe-based frame grabbers to transfer the image data to the host PC.
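The arithmetic behind that figure is worth sketching. The short Python snippet below simply multiplies out the example numbers given above; the exact sustained rate in a real system also depends on pixel packing and interface overhead.

```python
# Back-of-the-envelope bandwidth check using the example figures from the text.
width, height = 5120, 5120   # pixels
bit_depth = 10               # bits per pixel
frame_rate = 80              # frames per second

bits_per_frame = width * height * bit_depth
bits_per_second = bits_per_frame * frame_rate

print(f"Per frame:  {bits_per_frame / 8 / 1e6:.1f} MB")
print(f"Sustained:  {bits_per_second / 1e9:.1f} Gb/s "
      f"({bits_per_second / 8 / 1e9:.1f} GB/s)")
# ~21 Gb/s, or ~2.6 GB/s -- well beyond what GigE-class links can carry,
# hence interfaces such as CoaXPress with a PCIe frame grabber.
```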
Such open PC-based systems allow developers to choose from a number of frame grabber, data acquisition, motor controller, lighting, network, and graphics boards with which to create a fully integrated, turnkey machine vision and automated control system for virtually any application. To aid in the choice of these products, consortia of board-level product vendors have developed websites that feature PCI Express, CompactPCI, and PC/104-Plus boards, all of which can be used to build vision systems with different camera interfaces and I/O options.
The ability to add capability to PC-based vision systems, however, presents developers with a more complex systems integration task. To reduce development time, vendors have taken the concept of PC-based system solutions and now offer what are known as embedded vision controllers or embedded vision processors. Many of these x86-compatible products are available with or without a fan (fanless versions suit harsh environments) and integrate the functions of PC-based systems, such as 6th Generation Intel Core processors, Power over Ethernet (PoE) ports, GigE ports, USB ports, and isolated digital I/O, alleviating the need for developers to configure PC-based systems with these features.
To make systems integration easier, some are fully tested with high-speed PCIe frame grabbers for the Camera Link or CXP interfaces, while also offering PCIe expansion slots should developers wish to add capability later.
Just as with PC-based systems, these embedded vision processors can be supplied with non-real-time operating systems such as Microsoft Windows and Linux, as well as with real-time extensions to these operating systems and/or integrated real-time operating systems. This allows developers to add machine vision capability using either commercially available or open-source software and, where deterministic response is required, to add real-time capability. Armed with this extensive ecosystem of components, integrators can develop highly optimized machine vision solutions that meet the demands of the highest-speed, most complex automated inspection applications.
Vision-guided robots
Few machine vision customers would list robot controllers, the heart of many of today’s vision-guided robot (VGR) applications, under the “turnkey” category. But as robot controllers have leveraged the same advances as PC and embedded controller OEMs, robot manufacturers have extended their capability to allow single or multiple cameras to be used in conjunction with the robot. Whether placed beside or on the robot itself, these cameras can be used to extract part position data from a scene and be trained to guide a robot to its destination automatically.
To improve the integration of machine vision and robot systems into a unified solution, a number of robot vendors have collaborated with software vendors to build communication software that allows vision systems to directly interface with robot controllers over standard network protocols such as Ethernet. Pairing vision systems with robots to automate tasks then lets systems developers work in intuitive, flowchart-based integrated development environments (IDEs), so manufacturing engineers and technicians can configure and deploy machine vision applications without conventional programming.
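The details of these interfaces vary by robot vendor, but the general pattern is a simple network exchange: the vision system computes a part position and sends it to the controller, which applies it as an offset to a taught position. The sketch below is a minimal Python illustration of that pattern only; the IP address, port, and message format are hypothetical placeholders, not the actual ABB or FANUC interface.

```python
import socket

# Hypothetical example: send a vision-derived part offset to a robot
# controller over a plain TCP socket. Real vendor interfaces define their
# own message formats and handshaking; this only shows the general pattern.
ROBOT_IP = "192.168.0.50"   # placeholder controller address
ROBOT_PORT = 5000           # placeholder port

def send_part_offset(x_mm, y_mm, rz_deg):
    """Send an X/Y translation and rotation offset as a simple ASCII record."""
    message = f"{x_mm:.2f},{y_mm:.2f},{rz_deg:.2f}\n"
    with socket.create_connection((ROBOT_IP, ROBOT_PORT), timeout=2.0) as conn:
        conn.sendall(message.encode("ascii"))
        reply = conn.recv(64)   # e.g., an acknowledgment from the controller
    return reply.decode("ascii").strip()

# Example: part found 12.4 mm right, 3.1 mm up, rotated 1.8 degrees
# print(send_part_offset(12.4, 3.1, 1.8))
```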
While effective, the limited processing power of these robot controllers means that turnkey systems with these platforms may not be sufficient for complex image processing applications without diminishing robot cycle time.
Custom libraries-based solutions
Smart cameras, PC-based vision systems, embedded vision systems, and robot controller-based systems can all be used to develop turnkey solutions for manufacturing tasks thanks to growing computational power and intuitive programming interfaces. When it comes to developing a truly customized system that can tackle difficult applications with minimal technician training, a certified systems integrator (CSI) is usually the best bet.
LEONI’s Wheel and Tire Validation System prevents mismatched automotive wheel-tire combinations from being made, simplifies changeover and training for new models of tires and wheels, and provides 3D and 2D surface analysis of key features. This system identifies defects such as a mark on a tire (a), and validates the color, style, and orientation of the center cap (b). Courtesy of Leoni.
Where large companies cannot afford the engineering time and cost of in-house systems development, purchasing a custom libraries-based solution is a viable alternative. For the systems integrator, the relatively high cost of building such a system can be amortized over a number of installations, with little additional engineering required for each. This can make the deployment of such systems cost-comparable with building a PC-based system.
One example of such a turnkey custom solution is a wheel and tire validation system designed to prevent mismatched automotive wheel-tire combinations. It also simplifies changeover and training for new models of tires and wheels, and provides 3D and 2D surface analysis of key features.
External inspection of integrated circuits, for example, can be completed with a single Omron FQ2 smart camera (c). Graphical Adaptive Vision Studio 4.10 software from Adaptive Vision can be used with smart cameras from ADLINK and New Electronic Technology (d). Courtesy of Leoni.
Already installed at multiple sites across North America, the system can perform multiple inspection tasks on both wheels and tires, including wheel style, diameter, color, and center cap placement, as well as tire tread pattern (2D or 3D), sidewall, diameter, width/height, Department of Transportation (DOT) code, and text checks. The system can also check balance ID mark alignment position; the orientation of features of the wheel, tire, and logo; the color, style, and orientation of the center cap; and the gloss level of the tire.
By storing over 100 wheel models and combinations, the wheel and tire validation system simplifies any production changeover the manufacturer may require. Because the system itself handles the challenges of wheel and tire validation, its user interface can accommodate all technician skill levels, making it possible to train technicians in approximately 15 minutes.
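Recipe-style storage is one common way to support that kind of changeover: each wheel-tire combination is saved as a named set of inspection parameters that an operator selects at the line. The sketch below is purely illustrative; the field names, model codes, and values are hypothetical and not LEONI's actual data model.

```python
from dataclasses import dataclass

# Hypothetical recipe structure for wheel/tire changeover; field names and
# values are illustrative only, not the actual system's data model.
@dataclass
class WheelTireRecipe:
    wheel_style: str
    wheel_diameter_in: float
    wheel_color: str
    center_cap_style: str
    tire_tread_pattern: str
    tire_width_mm: int

RECIPES = {
    "SEDAN_17_SILVER": WheelTireRecipe("5-spoke", 17.0, "silver", "logo-A", "asymmetric", 225),
    "SUV_20_BLACK":    WheelTireRecipe("split-spoke", 20.0, "gloss black", "logo-B", "directional", 275),
}

def load_recipe(model_code: str) -> WheelTireRecipe:
    """Look up the inspection parameters for the model an operator selects."""
    return RECIPES[model_code]
```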
Many industrial environments have come to rely on turnkey vision solutions, including smart cameras, PC systems, vision-guided robots, and custom libraries. These systems, which leverage advances in technology while easing integration, are designed ready to install and operate — a critical factor when completing inspection tasks in a manufacturing setting.
Meet the author
Nick Tebeau has worked in automation for more than 20 years and is currently director of sales for the Business Segment Factory Automation at LEONI Engineering Products & Services Inc. Tebeau started the Vision Solutions Group at LEONI and currently oversees Machine Vision, Robotic Cable Management, Training, and Programming Services sales in North America.