The Myths and Realities of Image Acquisition
Ilias Levis, Sony Electronics Inc.
Not too long ago, machine vision application users relied on frame-grabber-based systems for most of their tasks. In 1998, manufacturers introduced cameras that worked with the IEEE-1394 interface, enabling a direct connection between industrial cameras and standard PCs and eliminating the need for a frame grabber card.
Initially, it was thought that this type of user-friendly interface, including USB 2.0 and Ethernet, would address a limited number of applications. As it turns out, the majority of digital cameras that are sold today in the North American machine vision market have interfaces that do not use frame grabbers.
Recent technological advancements — in particular, faster, smaller and less power-hungry processors — also have contributed to new generations of vision products, including smart cameras and sensors. Furthermore, the cost of digital and smart cameras is now comparable to that of traditional analog camera/frame-grabber systems.
Figure 1. Sony’s SCI series of smart cameras incorporates a complete PC with x86 architecture and Windows XP Embedded (XPe) as an option. Here, a camera running National Instruments Corp.’s Vision Builder for Automated Inspection performs a connector inspection.
As a result, the machine vision market is very different from that of the late 1990s, with vision providers offering a wide variety of products, including analog and digital “conventional” cameras, frame grabbers, simple vision sensors and smart cameras with complete computers built in. Without question, the selection process has become significantly more challenging.
Users should keep several criteria in mind while choosing the appropriate vision system for an application. The first group of criteria is related to feasibility and performance; in other words, minimum specification requirements and performance considerations.
Specifications
The field of view and the detail size that must be resolved within an image will determine the type of sensor and the number of pixels required. In situations where large surfaces are to be inspected — such as paper at a mill — a line-scan camera typically is used.
The vast majority of line-scan cameras have a Camera Link interface and require the use of a PC-based system with a frame grabber. On the other hand, most mainstream machine vision applications use area-scan cameras that have a variety of analog and digital interfaces. An increasing number of these cameras use digital interfaces such as IEEE-1394 and USB 2.0, largely because the cabling is simpler (in size, installation effort and cost) and because no frame grabber is needed.
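As a quick feasibility check, the required pixel count can be estimated from the field of view and the smallest feature that must be resolved. The following minimal Python sketch assumes a common rule of thumb of roughly three pixels across the smallest feature; the figures in the example are hypothetical.

def required_pixels(fov_mm, min_feature_mm, pixels_per_feature=3):
    """Pixels needed along one axis so that the smallest feature
    spans pixels_per_feature pixels (a common rule of thumb)."""
    return round(fov_mm / min_feature_mm * pixels_per_feature)

# Hypothetical example: 100-mm field of view, 0.2-mm smallest defect.
print(required_pixels(100.0, 0.2))  # 1500 pixels across the field of view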
Table 1. Bandwidth is a key selection criterion when choosing a camera system for a machine vision application.
Frame rate, in conjunction with resolution and the number of bytes per pixel, determines the amount of data that is transferred and processed. For example, a camera with a 1280 × 1024 resolution at 30 fps and 8 bits (or 1 byte) per pixel requires a data bandwidth of approximately 40 MB/s (1280 × 1024 × 30 × 1 byte). This is well within the roughly 133 MB/s limit of the PCI bus and within the capabilities of the IEEE-1394b and Gigabit Ethernet interfaces.
If the same application needed 10 bits (2 bytes) per pixel, the bandwidth requirement would double to approximately 80 MB/s, which exceeds what IEEE-1394b can sustain and consumes most of a Gigabit Ethernet link’s practical throughput; the recommended solution would be a frame-grabber-based system with a Camera Link interface.
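The arithmetic behind such estimates is easy to script. Here is a minimal Python sketch; the interface throughput figures are nominal, and sustained payload rates are lower once protocol overhead is considered.

def bandwidth_mb_per_s(width, height, fps, bytes_per_pixel):
    """Raw image data rate in megabytes per second (1 MB = 10**6 bytes)."""
    return width * height * fps * bytes_per_pixel / 1e6

print(bandwidth_mb_per_s(1280, 1024, 30, 1))  # about 39 MB/s at 1 byte/pixel

# Nominal interface limits in MB/s, before protocol overhead.
limits = {"USB 2.0": 60, "IEEE-1394b": 100,
          "Gigabit Ethernet": 125, "Camera Link (base)": 255}
need = bandwidth_mb_per_s(1280, 1024, 30, 2)  # 10-bit (2-byte) pixels
for name, limit in limits.items():
    print(name, "OK" if need < limit else "insufficient")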
In general, there is an increasing trend for vision developers to use a frame grabber only when the bandwidth requirements of the application are too high to be accommodated by other standard digital interfaces.
Certain machine vision applications require that every frame be transferred reliably and in a timely fashion to the host computer for processing. Vision users who consider deterministic data transfer their top priority prefer systems in which analog or digital cameras connect to capture boards with onboard memory buffers that guarantee lossless image transfer. Among the non-frame-grabber interfaces, IEEE-1394, through the Instrumentation and Industrial Digital Camera (IIDC) standard, offers good provisions for reliable isochronous image transfer with minimal use of the central processing unit.
System architecture
The speed and complexity of the image-processing algorithms used to perform specific vision tasks will determine the computer specifications and, essentially, the feasibility of using a smart camera instead of a PC-based vision system. It could be misleading, however, to judge feasibility and processing performance by considering only the central processing unit clock frequency. Many other factors, such as architecture, flash and RAM memory, and display capabilities, also are important. If it seems like a close call, the only way to draw a safe conclusion is by testing the specific code or application directly on the device of interest.
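In practice, such a test can be as simple as timing the actual inspection code on each candidate device. The sketch below uses Python with OpenCV purely as a stand-in for the real algorithm; the frame size and processing steps are placeholders.

import time
import numpy as np
import cv2  # OpenCV, standing in for the actual inspection algorithm

frame = np.random.randint(0, 256, (1024, 1280), dtype=np.uint8)  # synthetic frame

runs = 100
start = time.perf_counter()
for _ in range(runs):
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)  # a typical edge-based inspection step
elapsed = time.perf_counter() - start
print(round(runs / elapsed, 1), "frames/s on this device")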
In many cases, vision systems can share a PC with other systems — for example, motion control and automation — in the same machine or location. As a result, it is easy and cost-effective to add a capture board to the existing PC or to directly connect an IEEE-1394 or USB 2.0 camera rather than a smart camera, especially when more than one imager is needed.
However, in situations where very limited space is available for a PC tower in the vicinity of the vision system, a smart camera or an embedded-vision PC appliance becomes an attractive option. Smart cameras also can make the system more adaptable by offering independent, self-sufficient image-processing stations for each inspection point.
In some cases, digital cameras with interfaces that do not require frame grabbers offer a couple of input/output lines as part of their design. However, vision systems that require multiple rapid-response input/output lines call for the use of frame grabbers or devices such as smart cameras that either offer this feature as part of the package or can accommodate additional input/output through external accessory boxes.
Figure 2. When comparing analog with digital vision systems, note that a frame grabber is not necessary for IEEE-1394, USB 2.0 or Ethernet interfaces.
The software component of a vision system is becoming the most important factor in determining the amount of resources and time that need to be invested. Simplifying this effort is a clear trend in the machine vision industry.
It is commonly accepted that it is easier to deploy a system architecture that is based on a network of smart cameras with distributed processing rather than a single, powerful PC with a central-processing approach. With the latter, vision developers could be burdened with the need to perform extended optimization of code, sometimes involving the complex task of streamlining multiple processes and using parallel processors.
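To give a sense of what that optimization involves, the sketch below spreads per-frame processing across the cores of a single PC using Python’s standard multiprocessing module; the inspection routine is a trivial placeholder. A network of smart cameras avoids this work because each camera processes its own images independently.

from multiprocessing import Pool
import numpy as np

def inspect(frame):
    """Placeholder for a real per-frame inspection routine."""
    return frame.mean() > 127  # trivial pass/fail brightness check

if __name__ == "__main__":
    # Synthetic stand-ins for frames arriving from several cameras.
    frames = [np.random.randint(0, 256, (1024, 1280), dtype=np.uint8)
              for _ in range(8)]
    with Pool() as pool:  # one worker per CPU core by default
        print(pool.map(inspect, frames))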
Analog vs. digital
Digital cameras have the reputation of offering better image quality than analog models. However, how can one define the term “digital”?
A vision system is considered digital when the entire image transfer process — from pixel values in the sensor to buffering memory in the PC for later processing — involves no analog video signal transfer over an external cable. In other words, systems with cameras that perform digitization through an analog-to-digital converter before outputting the image are considered digital. Naturally, the digital category includes smart cameras and sensors because the images are digitized internally before being processed.
Analog camera systems cost less, on average, than digital systems and are built on a single, well-established interface. On the other hand, digital cameras offer additional features and performance, high-resolution options, high frame rates and better image quality. The cost gap between the two technologies is rapidly closing.
Cost issues
Many of the factors mentioned here have a cost dimension: system architecture, development complexity and analog versus digital technology. There are additional cost considerations, such as:
• Maintenance: The less complex the software and hardware implementation and the more decentralized the processing, the easier it is to maintain the system over the long term. Therefore, building a system based on digital interfaces — without a frame grabber requirement or with smart cameras — can minimize maintenance cost and complexity.
• Number of cameras per system: Typically, the more cameras that are required to perform a single vision task, the more cost-effective frame grabbers become. When more than two cameras per system are needed and resolution requirements are low, most users select frame grabbers that support lower-priced analog cameras, because the cameras themselves become the main cost component. As the average price of digital cameras decreases over time, this trend will fade, as the rough comparison following this list illustrates.
• Perception: Subjectivity plays a big role in the decision-making process, although few will admit it. Common perceptions in the US market are that analog systems cost less than digital ones and that smart-camera architecture is the way of the future. Although there is some truth in these statements, it is dangerous to generalize.
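A rough comparison makes the camera-count effect concrete. All prices in the following Python sketch are illustrative assumptions, not actual market figures.

# Illustrative prices only; real costs vary widely by vendor and model.
FRAME_GRABBER = 800    # one board serving several analog cameras
ANALOG_CAMERA = 300
DIGITAL_CAMERA = 700   # connects directly to the PC; no grabber needed

def analog_system(n):
    return FRAME_GRABBER + n * ANALOG_CAMERA

def digital_system(n):
    return n * DIGITAL_CAMERA

for n in (1, 2, 3, 4):
    print(n, analog_system(n), digital_system(n))
# With these numbers the two approaches break even at two cameras,
# and the analog/frame-grabber route pulls ahead as more are added.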
It is hard to map vision system types one-to-one onto specific applications. Any one of the selection criteria mentioned above can dictate one solution over another. In general, the right choice should be based mainly on three factors: feasibility, ease of use and cost.
The system manufacturer’s reputation for product quality and support also should come into play. When the critical decision of selecting the image-acquisition scheme for a vision system has to be made, it is highly recommended that one first lay out detailed system requirements and then let the facts lead to the right choice.
Meet the author
Ilias Levis is product manager of visual imaging at Sony Electronics Inc. in Park Ridge, N.J.; e-mail: ilias.levis@am.sony.com.