New Smart Cameras Provide Quality Control in a Box

Max Larin, Ximea GmbH

PC cameras are compact vision systems that can deliver up to 90 gigaflops of processing power to support full image-processing libraries and sensor resolutions of 5 megapixels or more.

If the manufacturing industry of the 21st century had a bumper sticker, it would read: “More for less.” Manufacturers are always trying to make more products from raw materials, get more production out of fewer workers and experience fewer product returns with more sales.

And the machine vision industry has done its best to answer manufacturing’s clarion call. Machine vision – the technique of connecting an industrial camera to a computer running specialized image processing software to track production, guide robots and motion, and control quality – is also doing more with less. In fact, starting in the 1990s, machine vision suppliers launched a new class of machine-vision-systems-in-a-box called the smart camera.

Unfortunately, those smart cameras weren’t very smart. Slow microprocessors limited sensor size, memory, image processing functions and network options. The underlying problem wasn’t just silicon; it was – and still is – heat. A big selling point for smart cameras is that, unlike embedded or PC-host systems, they have no moving parts, making them more robust for industrial applications that require long lifetimes with little to no maintenance. But a sealed, fanless housing can shed only so much heat, so early smart cameras were confined to low-power – and therefore low-performance – processors.

Thanks to new classes of ultralow-power and heterogeneous microprocessors, the smart camera is finally living up to its name. The latest designs offer most, if not all, of the functionality of a personal computer, including a full (and familiar) operating system (OS), full image-processing library (or libraries), multimegapixel sensors, low latency for industrial applications and much more.

When it comes to manufacturing quality, more for less has finally arrived.

Back to school

All machine vision systems have four basic elements: a camera and a light source to acquire images, image processing software to extract actionable information about the objects in those images, and a computer to run that software.
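To make that division of labor concrete, here is a minimal inspection loop written in Python with OpenCV. It is an illustrative sketch only: the camera is read through OpenCV’s generic video-capture interface rather than any particular vendor’s API, and the pass/fail rule (counting large dark blobs as defects) is a placeholder.

```python
import cv2

# Minimal inspection loop: acquire an image, extract information, decide.
# Camera index 0 and the "dark blob" defect rule are placeholders.
cap = cv2.VideoCapture(0)                      # 1. acquisition (camera + light)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # 2. image processing: isolate dark regions that might be defects
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    defects = [c for c in contours if cv2.contourArea(c) > 100]

    # 3. decision: an actionable result for the production line
    print("REJECT" if defects else "PASS")

cap.release()
```

In a smart camera, all of this runs inside the camera housing; in a PC-host system, the acquisition happens in the camera and the rest on a separate computer.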

Smart cameras were the first products to include three (and sometimes all four, including the light) of these functions in a unified housing. Compared with PC-host-based machine vision solutions, they offered a smaller footprint and lower cost. Some manufacturers also attempted to simplify programming and operation by moving to object-oriented programming interfaces that wrapped a reduced set of image processing algorithms and functions.

By 2006, microprocessor technology and compact flash memory had advanced to the point that smart cameras – or PC cameras – such as Sony’s XCI-SX1 with Geode processors could deliver roughly 1000 megaflops (Mflops) of processing power – enough to run a full Windows operating system and a full image-processing library.

However, megahertz-speed microprocessors meant that smart cameras realistically could process VGA-resolution images only by using the latest, most efficient algorithms. Thus, either the smart camera had to have a very small field of view, or defects had to be relatively large to be visible in the VGA images. Also, because of the low processing power and high overhead of modern operating systems, the manufacturing process had to be relatively slow, running at dozens of parts per minute rather than at hundreds or thousands.
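A rough budget shows why. Assuming – purely for illustration – a Geode-class processor with about 1 Gflop of peak throughput, of which perhaps a quarter remains after OS, driver and I/O overhead, and an inspection routine costing on the order of 1000 operations per pixel on a VGA frame:

```python
# Back-of-envelope throughput estimate; every number here is an assumption.
peak_flops = 1e9          # ~1 Gflop peak, Geode-class processor
usable_fraction = 0.25    # what remains after OS, driver and I/O overhead
pixels = 640 * 480        # VGA frame
ops_per_pixel = 1000      # assumed cost of a multi-stage inspection routine

seconds_per_frame = pixels * ops_per_pixel / (peak_flops * usable_fraction)
parts_per_minute = 60 / seconds_per_frame
print(f"~{parts_per_minute:.0f} parts per minute")   # roughly 50 with these numbers
```

With those assumptions, a single inspection takes more than a second – dozens of parts per minute, not hundreds.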

Despite these limitations, smart cameras often appeared to be easier to set up and install for simple applications. However, because they were frequently marketed to users with little or no machine vision expertise, they often failed – especially those installed by in-house engineers or nonvision experts. Customers wanted to do more with less before the smart camera was really “smart” enough to complete the task. This level of smart camera eventually would be dubbed the “vision sensor” and is still used widely today for presence/absence and for measuring holes and other features, along with other simple machine vision tasks.

Embedded machine vision systems such as 4Sight from Matrox in Dorval, Quebec, or the Embedded Vision System from National Instruments in Austin, Texas, seek to bridge the gap between the vision sensor and PC-host system by putting a PC in a proprietary box and – in most cases – allowing the customer to choose the camera supplier. Embedded vision systems are basically industrial PCs that the vision supplier has optimized and committed to supporting for several years. Compare this with a PC-host system where internal boards (and software drivers) can change every few months – creating compatibility conflicts with image processing libraries, OS, plant floor automation systems and so on – and you can see the benefits of an embedded vision system when it comes to support.


Ximea’s Currera-G uses the AMD Fusion APU, combining both CPU and GPU on a single die. The Fusion’s ability to deliver up to 90 Gflops with only 18 W of thermal design power means that a fully functional industrial PC can be encased in an industrial camera housing without the need for fans or other moving parts that are prone to failure over time. The full complement of available I/O interfaces was designed to accommodate most major image processing libraries on the market. PLC = programmable logic controller; FPGA = field programmable gate array.


As with PCs, embedded vision systems typically have more powerful microprocessors, although they do not include “bleeding edge” chip sets. In most cases, they require active cooling – that is, fans and other moving parts – which opens the door to potential mechanical or electronic failures. And as a two-box solution, an embedded system takes up more production floor space than a smart camera.

Atom, fusion and the big bang

Microprocessors – and smart cameras – took a big step toward closing the smart camera/PC-host gap in 2008, when Intel announced the Atom microprocessor, a design based on 45-nm lithography and aimed at netbooks and Internet devices. By shrinking the circuits on the microprocessor, Atom could deliver about half the performance (2 to 3 Gflops) of a Pentium M class PC, or an order of magnitude more than the Geode predecessors used in the first PC camera models.


Until now, transistor budget constraints typically mandated a two-chip solution for CPU and GPU functions, forcing system architects to use a chip-to-chip crossing between the memory controller and either the CPU or GPU. These transfers affect memory latency and consume system power. The APU’s scalar x86 cores and single instruction, multiple data (SIMD) engines share a common path to system memory to help avoid these constraints.


But just as important as performance are power consumption and the heat that comes with it. The Atom microprocessor consumes 20 percent less power than a Pentium M class processor at full speed and considerably less when idle, allowing the unit to run cooler than previous models.

In 2011, Intel upped the ante by adding a graphics processing unit (GPU) to its x86-based CPU, while AMD joined the fray with the Fusion accelerated processing unit (APU), which, like the new Atom E6xx class microprocessor, places a GPU core on the same die as the CPU. Using Fusion’s 40-nm lithography technology, the latest PC cameras now can deliver up to 90 Gflops of processing power, more than enough to challenge any single-core PC-host vision system.
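The practical payoff of a GPU sitting next to the CPU is that heavy per-pixel work can be pushed off the x86 cores. The sketch below uses OpenCV’s transparent OpenCL path (UMat) – a later, general-purpose convenience rather than anything specific to these cameras – as a stand-in for the vendor- or OpenCL-specific GPU code an APU-based camera would run; the filter and frame size are arbitrary.

```python
import cv2
import numpy as np

# Offload a heavy per-pixel filter to the GPU via OpenCV's transparent
# OpenCL path (UMat). The image content and filter size are arbitrary.
cv2.ocl.setUseOpenCL(True)                      # request the OpenCL (GPU) path
frame = np.random.randint(0, 256, (2048, 2560), dtype=np.uint8)   # ~5-MP frame

cpu_result = cv2.GaussianBlur(frame, (31, 31), 0)        # runs on the CPU

gpu_frame = cv2.UMat(frame)                              # upload once
gpu_result = cv2.GaussianBlur(gpu_frame, (31, 31), 0)    # dispatched to the GPU
result = gpu_result.get()                                # download only when needed

assert result.shape == cpu_result.shape
```

On a shared-memory APU, the “upload” and “download” steps are far cheaper than on a discrete graphics card, which is the point of putting both cores on one die.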

Of course, microprocessor development never stops. PC camera users won’t have to wait long for significantly improved performance. Very soon, aggressive PC camera makers will deliver up to 480 Gflops using AMD’s new A-Series APU, announced in August 2011. Combined with new or enhanced network protocols and interfaces – Intel’s Thunderbolt technology, GigE and 10GigE, Cam Express and zero-copy transfers, among others – PC camera users will be able to “slave” multiple cameras to a single PC camera, potentially taking the share of machine vision applications that PC camera technology can serve from roughly 80 percent to well above 90 percent.


Graphics processing units are specifically designed for computationally intensive tasks. Adding a GPU core to the same die as a CPU greatly reduces the computational load on the CPU, increasing overall processing speed and decreasing latency – important considerations for industrial quality-control systems such as machine vision PC cameras.

Size matters – but not the way you think

Not only does the PC camera run a full image-processing library and OS, but its smaller footprint also means that data travels from the sensor to the processor faster than on a comparable PC-host system, reducing the latency and jitter (variation in timing) between image acquisition and processing. The image transfer speed and data integrity from a remote camera to a PC or embedded vision system are limited by cable bandwidth, cable length and electromagnetic interference. As anyone with an integrated webcam on a laptop can attest, integrated cameras can work much better than remote-camera systems connected over USB or FireWire.

Unlike standard PC-host machine vision systems, which come with consumer-based operating systems, PC cameras can run a full or an embedded OS. The Matrox Iris GT and the BOA PC cameras from Teledyne Dalsa in Billerica, Mass., for example, run embedded Windows, while CheckSight from Leutron in Glattbrugg, Switzerland, and Ximea’s Currera PC cameras can run either full or embedded Windows or Linux, giving developers the choice to develop systems in familiar environments. Embedded operating systems use a component architecture that allows the PC camera maker to choose only the features necessary for system and network support.


The Currera-G PC camera from Ximea houses a single-board computer built around AMD’s new Fusion APU, which combines the power of both CPU and GPU cores on a single die to deliver up to 90 Gflops of processing power.


OS modules – such as legacy support for older applications or various application programming interfaces (APIs) for Internet Explorer, Media Center and other nonessential programs – can be eliminated using an embedded OS. This reduces latency and demands on the CPU while increasing the PC camera’s overall processing throughput. An embedded machine vision system or industrial PC-host system also may come with an optional embedded OS; however, an industrial PC with a multimegapixel camera costs more than a PC camera machine vision solution. It also still uses cables and bus interfaces that slow image transfer speed between camera and processor, complicate system integration, and increase the chance of data loss during transfer.

Unfortunately, even an embedded OS is not a “real time” operating system, which means that determinism – the assurance that data will be at a certain point at a given time – varies with computational load and other factors. Determinism is improved by the PC camera architecture, which puts all components in close proximity and uses onboard interfaces rather than the cables and backplanes of PC-host systems; in addition, the extra processing power of a PC camera allows vendors to include real-time industrial fieldbus interfaces. An onboard programmable logic controller can guarantee nanosecond-level determinism when communicating between the PC camera and downstream ejectors and other industrial equipment.
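The difference between “fast” and “deterministic” is easy to demonstrate. The sketch below – illustrative only – measures the scheduling jitter of a nominally 10-ms software loop; on a general-purpose OS the worst-case deviation is unbounded and can be orders of magnitude larger than the average, which is exactly why hard deadlines are delegated to a fieldbus interface or an onboard PLC rather than to application code.

```python
import time

# Measure the scheduling jitter of a nominally 10-ms software loop.
# On a general-purpose (non-real-time) OS, the worst case is not bounded.
period = 0.010
deviations = []
next_tick = time.perf_counter() + period
for _ in range(1000):
    time.sleep(max(0.0, next_tick - time.perf_counter()))
    deviations.append(abs(time.perf_counter() - next_tick))
    next_tick += period

print(f"mean jitter: {1e6 * sum(deviations) / len(deviations):.0f} µs, "
      f"worst case: {1e6 * max(deviations):.0f} µs")
```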

No one owns you

The benefits of PC cameras don’t stop with better resolution and usability, higher processing speed and lower latency; PC cameras also change the paradigm between customer and vendor.

For example, PC camera customers who buy from Ximea do not have to buy image processing software from the same company from which they buy their hardware. An integrator may, for example, be a proponent of Teledyne Dalsa’s Sherlock image processing software. PC-class processing power and a full or embedded OS have allowed some PC camera makers to offer APIs or API design tools to make their hardware compatible with a customer’s favorite machine vision software.

As an extra benefit, PC cameras built on x86 microprocessors can run users’ existing Windows-based inspection routines and associated code. Ximea, for example, offers free APIs for Cognex’s VisionPro, Matrox’s Imaging Library (MIL), National Instruments’ LabVIEW, MVTec’s Halcon and dozens more. This step toward greater compatibility for vision technology is a big deal for customers because integrators tend to sell the machine vision hardware and software they know, which may or may not be the best solution.
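In practice, that compatibility comes down to handing the library a plain image buffer. The sketch below is hypothetical: grab_frame() stands in for whatever acquisition call the camera vendor exposes, and OpenCV is used merely as a placeholder for VisionPro, MIL, LabVIEW, Halcon or any other library the integrator prefers.

```python
import cv2
import numpy as np

def grab_frame() -> np.ndarray:
    """Hypothetical stand-in for a vendor acquisition call that returns
    the latest frame as a NumPy array (8-bit monochrome here)."""
    return np.zeros((1200, 1600), dtype=np.uint8)

# Hand the raw buffer to whichever image processing library the integrator
# prefers; OpenCV is used here only as a placeholder for that choice.
frame = grab_frame()
edges = cv2.Canny(frame, 50, 150)
print("edge pixels found:", int(cv2.countNonZero(edges)))
```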

PC cameras are quickly closing the support gap between “smart cameras” and PC-host vision systems (sorry, embedded vision system suppliers). As we all know, industrial customers demand that their equipment last longer and have better support than consumer systems, which is why an industrial PC costs more than a desktop PC. This is a double-edged sword when it comes to comparisons between PC-host systems and PC cameras.

PC cameras, and all machine vision systems, are designed for industrial product lifetimes and support in excess of seven years, while consumer PCs’ hardware and software configurations change every week, creating a potential support nightmare for machine vision providers. Although industrial PC cameras will fail less often and perform better than consumer-based platform solutions because the software and hardware are better integrated and supported, troubleshooting PC camera hardware can be more difficult because these highly integrated systems are designed to be disassembled only by factory personnel.

Fortunately, the full OS capabilities of a PC camera provide an answer by including full-network, encrypted Internet and browser support that marks a major improvement over traditional smart camera remote-support solutions.

In today’s global economy, improved remote support is a must for machine vision providers and customers alike, with lean operations unable to withstand unexpected downtime.

In the future, PC camera vendors will conquer the last “benefit” of PC-host systems compared with PC cameras: hardware support. It’s not a stretch to imagine PC cameras with “snap in” modular designs that allow the user to replace a failed motherboard, to increase the sensor size or to add a higher-speed network interface.

Imagine being able to repurpose a PC camera for a high-resolution operation simply by snapping out the sensor box and replacing it with a larger array.

Science fiction? Just wait.

Meet the author

Max Larin is CEO of Ximea GmbH of Münster, Germany; email: [email protected].

Published: September 2012