
Picking a Camera, for Control and Profit

Hank Hogan, Contributing Editor, [email protected]

Seeing is believing, and in industrial applications, seeing is also control. An example is anti-sway technology from SmartCrane. Software from the Poquoson, Va.-based company controls crane motion, allowing safe moves at maximum-rated speed every time with minimal operator training, according to the company.

Sway can arise because a load may be picked up at an angle, due to wind or some other factor, noted Joseph Discenza, SmartCrane’s president and CEO. This causes a weight suspended by what can be a long cable to move in unwanted ways. The key to countering sway is measuring it, and for that Discenza first tried determining the deflection at the top of the cable but found that didn’t work. So now SmartCrane employs a machine vision system and a black-and-white pattern attached to the load.

Multiple code reading of both 1-D and 2-D codes using a specialized machine vision setup. Photo courtesy of Cognex.

“We use a search algorithm to find that pattern, that black-and-white pattern,” Discenza said. “Whenever you have sway, you have a pendulum period. You have a sine wave. What you need to be able to do is determine what that sine wave looks like.”

After as little as a quarter-cycle of this periodic motion, the software has enough data to allow small moves at the proper time to damp out sway. As a result, a load dangling at the end of a cable gets to the right place.
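SmartCrane has not published its algorithm, but the idea Discenza describes can be sketched: the pendulum frequency is fixed by cable length, so only the sine wave's amplitude and phase are unknown, and a short run of position samples suffices for a least-squares fit. The function below is a hypothetical illustration, not the company's code.

```python
import math

def estimate_sway(samples, dt, cable_length, g=9.81):
    """Fit x(t) = A*sin(w*t) + B*cos(w*t) to sway displacement samples.

    The pendulum frequency w is determined by the cable length, so the
    fit has only two unknowns -- a quarter-cycle of data is enough.
    Returns the sway amplitude and phase.
    """
    w = math.sqrt(g / cable_length)  # natural frequency, rad/s
    # Accumulate the normal equations for a least-squares fit
    # against the sin/cos basis.
    s_ss = s_cc = s_sc = s_sx = s_cx = 0.0
    for i, x in enumerate(samples):
        t = i * dt
        s, c = math.sin(w * t), math.cos(w * t)
        s_ss += s * s; s_cc += c * c; s_sc += s * c
        s_sx += s * x; s_cx += c * x
    det = s_ss * s_cc - s_sc * s_sc
    A = (s_sx * s_cc - s_cx * s_sc) / det
    B = (s_cx * s_ss - s_sx * s_sc) / det
    return math.hypot(A, B), math.atan2(B, A)  # amplitude, phase
```

With amplitude and phase in hand, a controller can schedule small trolley moves timed against the sine wave to cancel the oscillation.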

The earliest incarnation of the camera system had its lens jarred loose and shattered by vibration, Discenza said. That problem has been solved, but issues with lighting, which can vary greatly in an outdoor setting, persist.


As this example illustrates, cameras and vision systems for industrial control have to be sturdy and provide clear images, despite vibration, dust and sometimes widely varying lighting. Users implementing such systems have more choices than before, with increasingly powerful image processing and software able to cost-effectively solve tougher problems. Advances in consumer technology are also showing up in industrial applications.

But the fundamentals still apply, with the right lighting and proper definition of the problem topping what’s needed for success. The first in this list, lighting, is more than merely having enough illumination.

Attaching a high-contrast black-and-white target to a load allows a camera to detect movement and thereby cancel sway during movement. Photo courtesy of SmartCrane.

“Moving a light five degrees or changing the wavelength of the light from blue to red or red to blue, or adding a filter can make a dramatic difference in the success or failure of an application,” said Rick Roszkowski, senior director of marketing for the vision products unit at camera and vision system maker Cognex of Natick, Mass.

“With vision, lighting can be critical to success,” he added.

To get the lighting right, he recommends working with an experienced system integrator. Such integrators have the necessary expertise and tools, which can include lighting techniques and equipment that have proved successful in the past.

Achieving success may require iteration to get parameters right. For instance, implementing a vision system for inspection of products on a manufacturing line may result in more parts being rejected, thus requiring the pass/fail criteria to be adjusted.

In selecting a vision solution, space constraints, wiring demands and longevity of critical components can factor into a decision. So, too, can the basic system architecture. Vision solutions can be a PC-based system with GigE cameras, vision appliances that connect to one or more cameras, or smart cameras that provide a distributed processing model.

Inspection of parts moving along a conveyor, carried out using a machine vision system and appropriate illumination. Photo courtesy of Cognex.



No matter the vision architecture, there can be substantial benefits beyond what was originally contemplated. For instance, machine vision may be used to accept or reject parts based upon measurement of some critical dimension. However, embedded within this measurement data are indications of tooling drift and other process trends. Teasing out such information could be useful, but it may require significant information technology resources.

It’s not wise simply to compare the cost of a bill of materials between the different architectures, Roszkowski cautioned. There can be substantial cost differences in the engineering solution, machine integration and maintenance of a system – none of which may be apparent in the initial system cost.

The question of cost can be quite complex, noted Ken Brey, technical director at system integrator DMC of Chicago. For example, a color camera can be expensive in multiple ways. For one thing, the camera itself may be costly. Also, color cameras use a Bayer filter, which overlays red, green and blue filters on pixels in a 1:2:1 ratio. Because it interpolates between pixels, a color camera has half the resolution of a monochrome equivalent.
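The 1:2:1 ratio Brey mentions falls out of the mosaic itself. The snippet below lays out an RGGB tile, one common Bayer arrangement, purely to illustrate the filter counts; real cameras may use other orderings.

```python
def bayer_mosaic(rows, cols):
    """Lay out an RGGB Bayer mosaic: every 2x2 tile carries one red,
    two green and one blue filter, giving the 1:2:1 ratio."""
    tile = (("R", "G"), ("G", "B"))
    return [[tile[r % 2][c % 2] for c in range(cols)] for r in range(rows)]

mosaic = bayer_mosaic(4, 4)
flat = [f for row in mosaic for f in row]
counts = {f: flat.count(f) for f in "RGB"}
# For a 4x4 sensor patch: {'R': 4, 'G': 8, 'B': 4}
```

Each pixel records only one color, so the other two channels must be interpolated from neighbors, which is the source of the resolution loss relative to a monochrome sensor.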


There are other hidden cost differences between color and monochrome cameras, Brey said. “Typically color cameras transmit 32 bits of data per pixel. Monochrome cameras are usually configured for eight bits per pixel. That’s four times more data for the camera connection, PC memory, processing and image storage. If you don’t have a need, don’t incur these costs.”
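Brey's 4x arithmetic is easy to check. The figures below use a hypothetical 1920 x 1200 camera; the resolution is an assumption for illustration, not from the article.

```python
def frame_bytes(width, height, bits_per_pixel):
    """Raw size of one frame in bytes."""
    return width * height * bits_per_pixel // 8

# Hypothetical 1920 x 1200 camera:
mono  = frame_bytes(1920, 1200, 8)   # 8-bit monochrome: ~2.3 MB/frame
color = frame_bytes(1920, 1200, 32)  # 32-bit color: ~9.2 MB/frame
# color is exactly four times mono, so the camera connection, PC memory,
# processing and image storage all scale by the same factor
```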

Another example of hidden and potentially unexpected expenses can be seen in the sensors found within cameras. The sensor resolution may be higher than is actually usable, particularly for smaller sensor sizes. High pixel counts in small sensors mean the pixels themselves are small, which decreases sensitivity and increases noise artifacts. Higher-performance lenses may also be required, resolving 300 line pairs per millimeter rather than the more common and less costly 150 line pairs per millimeter. The result is that the lens may end up costing more than the camera, Brey said.
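A common rule of thumb connects those lens figures to pixel size: one line pair spans two pixels, so the required lens resolution rises as the pixel pitch shrinks. The pixel-pitch values below are assumed for illustration.

```python
def nyquist_lens_lp_per_mm(pixel_pitch_um):
    """Rule-of-thumb lens resolution needed to match a sensor's pixel
    pitch: one line pair spans two pixels, so lp/mm = 1000 / (2 * pitch)."""
    return 1000.0 / (2.0 * pixel_pitch_um)

# A ~3.45 um pixel calls for roughly 145 lp/mm; shrinking the pixel
# to ~1.67 um pushes the requirement to about 300 lp/mm
coarse = nyquist_lens_lp_per_mm(3.45)
fine = nyquist_lens_lp_per_mm(1.67)
```

Halving the pixel pitch doubles the lens-resolution requirement, which is why a high-pixel-count small sensor can force the lens cost above the camera cost.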

DMC partners with National Instruments, an Austin, Texas-based instrumentation and vision processing equipment supplier. National Instruments itself does not make industrial cameras, but it works with many companies that do. Advances in the consumer space benefit industrial applications, said Nate Holmes, product marketing manager for machine vision at National Instruments. Consumer devices are dropping in price and adding capabilities in response to expanding use in phones and cars. What’s more, the arrival of the Internet of Things is driving the need to process and extract information from images faster than before. Factory floors and other automation or control venues are riding this consumer-camera technology wave.

Machine vision inspection reveals whether a cookie meets quality requirements. Photo courtesy of National Instruments.



Core specifications in an industrial setting are frame rate, resolution, sensitivity of the sensor and its noise, Holmes said. However, what’s outside the camera may be more important than what’s inside. Uniform lighting and good optics, for instance, may allow a less costly camera to outperform a more expensive one.

“The work you can do even before you take the image really ends up going a long way,” Holmes said. “That ends up reducing the burden on the processor and the algorithms and all the postprocessing.”

Many options exist with regard to lighting location, color and type, as well as variations that involve how diffuse the light is. Other lighting choices can include the use of a strobe instead of a constant illumination. Because there are so many different possible variations, prototyping, which can be extensive, may be unavoidable, Holmes said.

He noted that another contributor to success is the camera bus, which is how the device interfaces to the rest of the system. That interface should be a standard that offers enough bandwidth to handle vision requirements.

Lastly, Holmes advised looking at the processor, which should have enough horsepower to run vision algorithms at the rate required to keep up with the manufacturing or control process. FPGAs (field programmable gate arrays) are inherently parallel and so match up well with many vision problems. The experience of National Instruments is that using FPGAs can result in a ten- to twentyfold improvement in processing speed.

As part of an automated machine vision inspection system, cameras can help solve quality, yield or production ramp-up issues. Photo courtesy of IDS.



Beyond any specifications for the camera and vision system as a whole, it is important to document and analyze the objectives of any project, said Tom Hospod, North America sales director for camera and optical component supplier IDS Imaging Development Systems of Obersulm, Germany. These goals can then be compared to what is possible with current technology. If there is enough of a match, then machine vision could help solve quality or yield problems, or provide a way to ramp up production by eliminating costly and time-consuming manual inspection. This assessment can also provide a benchmark for evaluating comparative costs, which can include such factors as a loss of reputation in the event of a quality problem.

“The cost of implementing automated machine vision inspection on a given production line must always be weighed against the cost of not integrating such practices,” Hospod said.

A final bit of advice comes from SmartCrane’s Discenza. Today, the key to a solution typically doesn’t lie in hardware, thanks to the availability of what he characterized as roughly equivalent and adequate-to-the-task image capturing setups. Instead, the key difference comes in what happens after the image is taken.

“It comes down to the processing and the software,” Discenza said. “To me, the big thing is, ‘Is there a reasonable software development system?’ ”

Published: January 2015
