
Making Machines See

When it comes to machine vision and computerized sight, C-3PO is nowhere to be found. A system that can handle variable lighting and a changing 3-D world is still the stuff of science fiction, but ongoing advances are making such robots a bit less of a fantasy.

Hank Hogan, Contributing Editor

Today, machine vision is associated with industrial automation, where vision systems help with the manufacture and inspection of products. Innovations are making machine vision solutions smaller, more powerful, easier to use and less expensive. Vendors are adding such features as the ability to look around corners, to see in color and to capture three dimensions. The latter has been a long-sought goal.

“Vision has to be 3-D,” said Takeo Kanade, a leader in computer vision research and a professor of computer science and robotics at Pittsburgh’s Carnegie Mellon University. He, like others in the computer and machine vision fields, bases his beliefs on the fact that the world is three-dimensional.

Besides the 3-D goal, machine vision vendors are working on a variety of nearer-term targets. A look at the efforts of such Canadian companies as Coreco Imaging Inc. of St. Laurent, Quebec; Dalsa Corp. of Waterloo, Ontario; and Matrox Imaging of Dorval, Quebec; as well as Electro Scientific Industries (ESI) of Portland, Ore.; PPT Vision Inc. of Eden Prairie, Minn.; Redlake MASD Inc. of San Diego; and Tyzx Inc. of Palo Alto, Calif., will illustrate these trends and highlight interesting innovations.


Eclipse (EC-11) line-scan cameras perform high-speed wide-web defect inspection. Courtesy of Dalsa.


Bigger, better CMOS

Although the machine vision systems are getting smaller, their sensors are growing in terms of number of pixels. They’re also increasingly CMOS-based, a change from the days when the sensors were almost exclusively CCDs. However, the new sensor technology doesn’t work for every application, and not all vendors are pursuing this approach with the same intensity.

For machine vision vendors, the increase in the number of pixels solves some problems. Greg Combs, an applications engineer with CCD camera vendor Redlake MASD, said that his company is one of the vendors that does not embrace CMOS technology. It has looked at alternative sensors but for now plans to continue using a CCD-based system and to concentrate on high-end applications. The company uses a Kodak 6.3-megapixel full-frame CCD sensor because it enables a single camera to handle flat panel inspection chores that otherwise would require multiple cameras. This inspection task involves close scrutiny of a large area.

“That is solvable with [a] larger sensor, a higher-resolution sensor,” Combs said.
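To see why a single high-resolution sensor can replace several cameras, a quick back-of-envelope calculation helps: the pixel count needed is set by the size of the area inspected and the smallest feature that must be resolved. The Python sketch below uses illustrative assumptions for the panel dimensions, defect size and sampling factor; none of these figures come from Redlake or Kodak.

```python
# Rough sizing check: can one sensor cover a flat panel at the needed resolution?
# All numbers are illustrative assumptions, not specifications from the article.

panel_width_mm = 300.0     # hypothetical inspection area width
panel_height_mm = 200.0    # hypothetical inspection area height
min_defect_mm = 0.2        # smallest defect that must be detected
pixels_per_defect = 2      # sample each defect with at least two pixels

# Pixel footprint on the object, and total pixels required along each axis.
object_pixel_mm = min_defect_mm / pixels_per_defect
px_x = panel_width_mm / object_pixel_mm
px_y = panel_height_mm / object_pixel_mm
print(f"Required: {px_x:.0f} x {px_y:.0f} pixels (~{px_x * px_y / 1e6:.1f} MP)")
# Under these assumptions, a roughly 3000 x 2000 (6 MP) sensor covers the
# panel in one shot; halving the defect size would quadruple the pixel count.
```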

However, gathering more data creates its own problems. As the size of the captured image grows, the bandwidth demands increase both inside and outside the vision system. Inside PC-based systems, innovations such as PCI-X will ratchet bus throughput from the 133 MB/s of conventional PCI up to roughly 1 GB/s. Externally, higher-speed interfaces with rates of 400 Mb/s and up, such as FireWire, USB 2.0 and Gigabit Ethernet, will be employed to move the mountain of data around.
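The arithmetic behind those interface choices is straightforward: multiply image size by frame rate and compare the result with what each link can nominally carry. The sensor size and frame rate in the sketch below are assumptions, and the interface figures are nominal peaks that ignore protocol overhead.

```python
# Throughput check: does a given camera stream fit a given interface?
# Sensor size and frame rate are assumptions; interface rates are nominal peaks.

image_mp = 6.3           # megapixels
bytes_per_pixel = 1      # 8-bit monochrome
frames_per_second = 10

payload_mbps = image_mp * 1e6 * bytes_per_pixel * frames_per_second / 1e6  # MB/s
print(f"Camera output: ~{payload_mbps:.0f} MB/s")

interfaces_mbps = {          # approximate nominal peak rates, in MB/s
    "PCI (32-bit/33 MHz)": 133,
    "PCI-X": 1000,
    "FireWire (400 Mb/s)": 50,
    "USB 2.0 (480 Mb/s)": 60,
    "Gigabit Ethernet": 125,
}
for name, peak in interfaces_mbps.items():
    verdict = "fits within" if payload_mbps < peak else "exceeds"
    print(f"{name}: stream {verdict} the ~{peak} MB/s nominal peak")
```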

Other areas also will feel the image crush. Mike Kelley, director of marketing for ESI’s vision products division, noted that his company has developed a machine vision sensor that captures a 50-MB image. Such a sensor could be used for close inspection of large areas. Storing 200 images of 50 MB will consume 10 GB. That may be only several hours of production, so storage could be another concern. The amount of data that will be accumulated clearly depends on the application, and system storage will have to be sized appropriately. Fortunately, magnetic storage is doubling in capacity roughly every year, so this issue can be accommodated.
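The same kind of arithmetic sizes the storage. The sketch below reproduces the 200-image figure from the text and then extrapolates under an assumed capture rate; the one-image-per-minute rate is purely illustrative.

```python
# Storage sizing for 50 MB images (capture rate below is an assumption).

image_mb = 50
images_retained = 200
print(f"{images_retained} images x {image_mb} MB = "
      f"{images_retained * image_mb / 1000:.0f} GB")       # 10 GB, as in the text

images_per_day = 24 * 60       # assume one 50 MB image per minute, around the clock
print(f"~{images_per_day * image_mb / 1000:.0f} GB per day at that rate")
```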

Kelley said that ESI is working on several projects that use CMOS technology. He thinks the lower cost and the ability to integrate a processor and an image sensor on the same piece of silicon are two key advantages that will cause CMOS to gain ground in the sensor war. This has implications for the machine vision market, particularly in applications that can be served by smart cameras and even more highly integrated products. “Long term, single-chip ‘smart’ imagers will proliferate because of these advantages,” he said.

The trend toward CMOS imaging is also apparent in Europe. Don Braggins, a 20-year industry observer and founder of Machine Vision Systems Consultancy in Royston, UK, noted that more than a half-dozen European companies offer significant CMOS products. These range from a 14-megapixel sensor from FillFactory NV of Mechelen, Belgium, to a closely coupled sensor and specialized processor from Fastcom Technology SA of Lausanne, Switzerland.

The latter’s coupled device allows the selective readout of pixels and enables 2-D matrix code readings up to 120 times per second. Braggins, aware of recent history regarding the sometimes hidden conflicts of interest involving analysts, quickly noted that he has a financial interest in Fastcom.

The reason for this flurry of CMOS activity, he said, lies in the research possibilities of the technology and the easy availability of small-scale production. These characteristics were important to the origins of this sensor technology. “A lot of the CMOS work originally came out of academia,” he said.

The systems themselves are not only shrinking but are also becoming less complex. Redlake, for example, has just introduced a megapixel camera, the ES1020, that is capable of 48 frames per second, measures less than 57 mm (2.24 in.) on a side and will sell for less than $4000.

Making Toasters

At ipd in Billerica, Mass., the Intelligent Products Div. of Coreco, company officials talk about vision appliances, devices aimed at the end user, such as a manufacturing engineer responsible for a production line. Because these end users are neither machine vision experts nor systems integrators, ipd attempts to create the equivalent of a machine vision toaster, a device that is not difficult to operate and that is intended to meet the needs of a specific task or application.


In these images of an electronic component on a printed circuit board, the Bayer image was converted to RGB using Coreco Imaging’s algorithm (top) and with the standard 3 x 3 averaging (bottom).


Company officials point to the recently released iGauge as an example. The product, which sells for $3000, consists of an intelligent camera and specialized tools for gauging applications such as checking dimensions on machined parts, holes on drilled parts or pins on electrical connectors. “The major focus at ipd is to make imaging easy, to make vision easy,” noted Ben Dawson, the division’s director of strategic development.

Other machine vision system vendors, such as PPT Vision and Matrox, report similar simplification efforts. They may not have gone as far as Coreco did in setting up a separate division, but many companies appear serious in their pursuit of vision appliance applications.

John Vieth is director of product management and marketing for Dalsa’s vision for machines division. He noted that this trend toward simplification and lower cost has its roots, at least partially, in component improvements and innovations. One example is the use of flat-field correction, which improves image quality by removing spatial noise. In the past, it would have been an added function found outside the camera, something that a vendor or systems integrator would have supplied. Today, this technology is increasingly found in the camera itself. “It’s likely camera designers will continue to develop functionality along these lines, allowing systems designers the ability to build better systems at lower costs and with fewer components,” he said.
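Flat-field correction itself follows a standard recipe: subtract a dark frame to remove the fixed offset, then divide by a normalized gain map measured from a uniformly lit target. The NumPy sketch below is a minimal generic version of that recipe, not Dalsa's in-camera implementation.

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Remove fixed-pattern (spatial) noise from a raw frame.

    raw  : image of the scene under inspection
    dark : frame captured with no light (sensor offset / dark current)
    flat : frame of a uniformly illuminated target (pixel-to-pixel gain)
    """
    gain = flat.astype(float) - dark
    gain /= gain.mean()                      # normalize so overall brightness is kept
    return (raw.astype(float) - dark) / np.clip(gain, 1e-6, None)

# Synthetic demonstration: a flat 100-count scene seen through a sensor with
# ~10 percent gain variation and a constant 5-count offset.
rng = np.random.default_rng(0)
gain_map = 1.0 + 0.1 * rng.standard_normal((4, 4))
offset = 5.0
raw = 100.0 * gain_map + offset
dark = np.full((4, 4), offset)
flat = 200.0 * gain_map + offset

print(np.round(flat_field_correct(raw, dark, flat), 1))   # ~100 everywhere
```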


The iGauge was developed to provide a machine vision device that is easy to use and that meets the needs of a specific task or application. Courtesy of ipd.


Seeing around corners

Machine vision systems are acquiring new capabilities. Many of these have been around in some form for years, but recent improvements are making the techniques more practical. Take, for example, the use of prisms and mirrors.

Inspecting semiconductor parts is one of the main applications of machine vision. This task requires a top view of the leads to make sure they’re all present. It also requires a lateral view to make sure that the leads lie in a plane. If leads are missing or askew, mounting the device to a printed circuit board may be impossible. With the proper use of mirrors and other optical techniques, a single camera with enough speed can sequence through an entire parts inspection. “The mirrors are placed so that you can snap an image and can see not only the top view of the leads, but also the tips of the leads from the side,” explained PPT Vision spokesman Chuck Bourn.


A circuit board substrate undergoes inspection with a digital camera and LED ringlight. Courtesy of PPT Vision Inc.


This saves the cost of a second camera and the trouble of synchronizing two cameras to ensure that each is looking at the same part. The same technique can be used in other applications, such as inspecting machined parts, where checking attributes demands different viewpoints.


Another added capability is color. Machine vision systems that can see the world in more than shades of gray have been around for some time. To see in color, some pixels capture red, some blue and some green. By combining these, a truer, multicolor picture of the world emerges. This can be useful, for example, in defect inspection. Color may provide clues to the type and origin of any problems discovered.

If separate sensors are used to accomplish this, the distinct and physically separate pixels have to be effectively stacked atop one another. An alternative approach is a single sensor covered by a color filter array. The sensor captures only a partial image, and the missing data has to be inferred.

One of the most popular color filter arrays is a Bayer filter, which uses a checkerboard pattern with the number of green pixels twice that of red or blue. Green is a part of the spectrum that human eyes respond to well. The Bayer filter approach provides a distinct advantage for machine vision.
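The sampling pattern is easy to express in code. The short sketch below builds a Bayer mosaic from a full-color image using the common RGGB layout (an assumption, since the article does not name a specific arrangement): each pixel keeps one channel, and green is sampled twice as often as red or blue.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an H x W x 3 RGB image through an RGGB Bayer pattern.

    Even rows alternate R, G; odd rows alternate G, B, so half of all
    pixels carry green and a quarter each carry red and blue.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # red sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # green sites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # green sites
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # blue sites
    return mosaic
```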

“The main gain from the camera point of view is the cost of it,” said Pierre Lafrance, an OEM applications engineer at Coreco. Eliminating separate sensors and accompanying optics for each of the primary colors cuts the overall cost of a two-megapixel color camera from roughly $25,000 to less than $5000.

Cost is the reason that Bayer filter cameras are becoming increasingly popular and that correction schemes to re-create the missing data are in greater demand. Coreco touts its interpolation algorithm as offering better edge definition and greater frequency response than algorithms from its competitors. This helps in parts inspection tasks. To achieve this and to minimize impact on the rest of the system, the company uses a hardware-based algorithm, which Lafrance said improves image quality tenfold.
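Coreco's interpolation algorithm is proprietary, but the baseline it is compared against (the "standard 3 x 3 averaging" of the earlier figure caption) can be sketched directly: each missing color is filled in by averaging whichever neighbors actually carry that color. The version below, written against the RGGB mosaic sketched above, stands in only for that generic baseline.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_average(mosaic):
    """Reconstruct RGB from an RGGB Bayer mosaic by 3 x 3 neighborhood averaging.

    This is the generic baseline the article contrasts with Coreco's
    algorithm; it is simple but tends to soften edges.
    """
    h, w = mosaic.shape
    masks = np.zeros((3, h, w))
    masks[0, 0::2, 0::2] = 1      # red sample locations
    masks[1, 0::2, 1::2] = 1      # green sample locations
    masks[1, 1::2, 0::2] = 1
    masks[2, 1::2, 1::2] = 1      # blue sample locations

    kernel = np.array([[1.0, 2.0, 1.0],
                       [2.0, 4.0, 2.0],
                       [1.0, 2.0, 1.0]])
    rgb = np.empty((h, w, 3))
    for c in range(3):
        # Dividing the weighted sum of samples by the weighted sum of the mask
        # averages over only the neighbors that actually carry this color.
        num = convolve(mosaic * masks[c], kernel, mode="mirror")
        den = convolve(masks[c], kernel, mode="mirror")
        rgb[..., c] = num / den
    return rgb
```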

Although machine vision systems have added features and have improved significantly, they are not without problems. An example would be an application that involves rapid or unpredictable motions. One solution might lie in sensor fusion, the marrying of multiple data streams. A gyroscope might be used to monitor rotational and other movement. The information provided could be used to remove motion-induced blurring and improve image quality.
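As a rough illustration of what such sensor fusion could supply, the sketch below turns a gyroscope-reported rotation rate into an estimate of the blur extent accumulated during an exposure; every number is hypothetical.

```python
import math

# Estimate motion blur from camera rotation during exposure, the kind of
# quantity a fused gyroscope stream could provide. All values are hypothetical.

angular_rate_dps = 30.0     # rotation rate reported by the gyro, degrees/second
exposure_s = 0.005          # 5 ms exposure
focal_length_px = 1200.0    # lens focal length expressed in pixels

blur_px = focal_length_px * math.radians(angular_rate_dps) * exposure_s
print(f"Estimated blur extent: {blur_px:.1f} pixels")
# A deblurring step, or simply a shorter exposure, could then be driven by
# this estimate rather than applied blindly.
```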

Another approach would be to include fiducial marks, or beacons, in the camera’s environment as light sources to help pinpoint location and motion. Many machine vision vendors and researchers, such as Tom Drummond of Cambridge University in the UK, are working on solving this problem.

Likewise, many commercial concerns and academic labs are tackling the issue of 3-D information. Solutions include using lasers that essentially reach out and probe the environment. This can be done with a scanning laser rangefinder that maps an object. Such a technique is slow, said Carnegie Mellon’s Kanade, but the technology is mature.

Another method involves stereoscopic vision. In a manner analogous to the way human eyes operate, this technique takes multiple views and combines them to create a real-world image. This requires at least two imagers as well as a processor to extract distance information from the differences between the captured pictures and to make calculations with mathematical algorithms.
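Once matching points have been found in the two views, the distance calculation itself is a one-line relation for a rectified pair: depth equals focal length times baseline divided by disparity. The numbers in the sketch below are illustrative.

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Z = f * B / d for a rectified stereo pair."""
    return focal_length_px * baseline_m / disparity_px

# Two imagers 10 cm apart, a 700-pixel focal length and a 14-pixel disparity
# place the matched feature 5 m from the cameras.
print(f"{depth_from_disparity(700.0, 0.10, 14.0):.1f} m")
```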

Tyzx is a small start-up dedicated to bringing 3-D vision solutions to real-world problems. The company has developed a chip architecture, called DeepSea, to handle the processing chore and has deployed it in various systems, including a person-tracking solution built on inexpensive imager technology of the kind found in low-cost web cameras. This is different from the approach of machine vision vendors, whose systems typically demand the highest-quality cameras available.


The image (right) is portrayed as gray-scale encoded distance measurements (left), which were generated at more than 80 fps by Tyzx Inc.’s DeepSea system.


“Because we are using 3-D information — i.e., the combination of results from two imagers — we can actually get extremely good results with some very inexpensive imagers,” said Ron Buck, the company’s president and CEO.

As for the future, researchers must tackle a final 3-D imaging puzzle. As explained by Kanade, the challenge is to understand a simple, everyday phenomenon. A person can close one eye and still extract 3-D information, which seems mathematically impossible from a single, fixed viewpoint. One theory is that a human combines different views from different vantages at different times. This, in effect, creates a virtual second eye.

The computer vision research community has sought to understand how this structure from motion technique works and how to put it to use. However, no one is quite sure yet how to translate this human capability into something applicable to reliable and practical machine vision. Figuring that out will move the industry to its ultimate goal.



Envisions Stronger Market

For the machine vision industry, the recent past hasn’t been pretty, but the future looks promising. The difficulties aren’t surprising, given the state of the overall economy and the woes of the industries that consume machine vision products.

The primary markets for automated vision systems are semiconductors and electronics, followed by food processing and pharmaceuticals. With the prolonged slump in the semiconductor and electronics arena, machine vision has seen some dark times.

“In 2002, the North American market for machine vision was $1.2 billion, a decrease of 15.4 percent from 2001,” said Vision Systems International principal Nello Zuech. The company, which is based in Yardley, Pa., prepares an annual machine vision market assessment and forecast for its clients. These include the industry’s trade group, the Automated Imaging Association.

Shirley Savage is president of The Thinking Companies Inc., a consulting group in Falmouth, Maine, that works with Frost & Sullivan to track the industry. She said that 2001 had a similar decrease compared with 2000. Thus, the machine vision industry has seen several years of double-digit declines.

Although that is disheartening, there are signs that various industry segments, such as semiconductors, are finally beginning to rebound. That good news is the basis for Savage’s admittedly conservative projections. “For the next three years, you can look at a 6 to 8 percent per year increase,” she predicted.



Let There Be Light

Hollywood had it right, according to machine vision experts. The phrase “Lights, camera, action” puts lighting first. Those in the machine vision field contend that controlled, dependable lighting is vital to getting consistently good results.

“Vision is much simpler, quicker and more robust if you get the lighting right,” said Don Braggins of Machine Vision Systems Consultancy in Royston, UK.

That’s one reason why companies such as PPT Vision Inc. in Eden Prairie, Minn., have developed their own lighting solutions. Company spokesman Chuck Bourn noted that advances in LEDs over the years have made possible systems that can reliably provide uniform illumination on command. As the output of LEDs has moved from red to green to blue to pure white, the machine vision industry has increasingly switched to solid-state illumination. LEDs are cheaper, more rugged and longer lasting than other lighting sources.


Continuous diffuse illumination eliminates the glare during inspection of a foil-wrapped pharmaceutical blister pack (right) as compared with inspection of the same package using a ringlight (above). Courtesy of RVSI/NER.


While this company provides its own lighting solutions, other machine vision vendors turn to products from such companies as RVSI/NER of Weare, N.H., StockerYale Inc. of Salem, N.H., and Volpi AG of Schlieren, Switzerland. Marcel Laflamme, vice president of sales and marketing at RVSI/NER, noted that his company alone has devised more than 200 models of illumination. These range from relatively simple backlights to more complex setups such as the patented Cloudy Day lights designed for applications targeting highly reflective, mirrorlike surfaces. The company’s latest ring illuminators offer red or white LEDs and a variety of other options.

As for the future, several trends are at work. As noted by Matrox Imaging’s product line manager Pierantonio Boriero, vision processing algorithms are improving. They’re better at handling nonuniform lighting, making lighting less critical. On the other hand, the move toward less expensive vision systems puts pressure on the price of the lighting components that go with them.

“Today, with the advent of lower-cost vision systems, the challenge is to develop lighting that is appropriately priced,” according to Laflamme.

Published: March 2003
