
Robots Work Better with Vision Enhancements

Hank Hogan, Contributing Editor, [email protected]

Machine vision trends in use on (or coming soon to) the factory floor include 3-D imaging and higher resolution.

Industrial robots with ordinary 2-D vision can’t keep up anymore; now they have to image in greater detail and in 3-D.

Where these trends are headed can be seen in the latest research into autonomous vision-based docking. Robots equipped with this technology could have full-color 3-D vision accurate on a scale of thousandths of an inch, making it possible to recognize and verify parts and components on the fly.


A big trend in robotics for manufacturing is 3-D vision. Here, a 3-D vision capture system developed by Kawasaki Robotics measures and inspects parts.


For now, recent developments are driving vision capabilities and, therefore, usage. Of prime importance was the introduction of GigE cameras and power-over-Ethernet (PoE) technology, said Maximiliano Falcone, supervisor of product development for Kawasaki Robotics (USA) Inc. in Wixom, Mich.

One benefit has been longer signal runs, Falcone said. “Before, when it was an analog signal, you could only go about 30 feet. Now you’re looking at 100-meter (328-ft) lengths with this PoE camera technology.”

A second plus is one fewer cable, because power for the camera rides over the Ethernet connection. Fewer cables matter when robots with many degrees of freedom are twisting this way and that.

Another innovation has been the arrival of higher and higher pixel-count cameras. Today, megapixel cameras are appearing in industrial robots. More capable software and the ability to do more up-front calibration of cameras have improved the usefulness of these higher-resolution vision systems.
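As a rough illustration of that up-front calibration step, here is a minimal sketch using OpenCV’s standard checkerboard routine; the image file names and board dimensions are placeholders, not details from any vendor’s workflow.

```python
# A minimal sketch of up-front camera calibration with OpenCV, assuming a
# set of checkerboard images captured by the robot's camera. File names
# and board size are illustrative placeholders.
import glob
import cv2
import numpy as np

BOARD = (9, 6)  # inner corners per checkerboard row and column
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_*.png"):  # hypothetical image set
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

assert obj_points, "no checkerboard detections found"

# Solve for the camera matrix (intrinsics) and lens distortion coefficients.
rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS error:", rms)
```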

One thing such advances won’t change is the nature of the factory floor. Welding may be going on, producing intermittent bursts of light that a camera or vision system can capture. Other, often overlooked changes in ambient lighting can also bombard a sensor.


Advances in vision technology allow robots to do more than ever before.


“There’s traffic going around these plants, whether it be a fork truck or a golf cart. Almost all of them have a strobe on them. Some of those strobes are white. Some of those strobes are orange. Depending on what color strobe, it can affect things,” Falcone said.
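One simple defense against such lighting transients is to screen frames before using them. The sketch below, a generic approach assuming NumPy and a stream of grayscale frames, flags frames whose mean brightness jumps far outside recent history; the window size and threshold are illustrative.

```python
# A minimal sketch of one way to reject frames corrupted by a passing
# strobe: compare each frame's mean brightness to recent history and
# skip sudden outliers. Thresholds here are illustrative, not tuned.
from collections import deque
import numpy as np

history = deque(maxlen=30)  # rolling window of recent mean brightness

def frame_is_usable(gray_frame: np.ndarray, k: float = 3.0) -> bool:
    """Return False if the frame is a brightness outlier (likely a strobe)."""
    mean = float(gray_frame.mean())
    if len(history) >= 10:
        mu, sigma = np.mean(history), np.std(history) + 1e-6
        if abs(mean - mu) > k * sigma:
            return False  # likely strobe flash; do not pollute the history
    history.append(mean)
    return True
```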

Alongside higher resolution has come the deployment of 3-D vision systems. Some involve the use of two cameras for a stereo view of an object, where analyzing the slightly different images captured by the cameras yields depth information. Other systems use structured light. Here, the third dimension is extracted from distortions that an object creates when light of a known pattern is projected onto it.
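For the stereo case, depth follows from the pinhole relation Z = fB/d, where f is the focal length in pixels, B the baseline between the cameras and d the disparity. Below is a minimal sketch using OpenCV’s block-matching stereo module on a rectified image pair; the focal length, baseline and file names are hypothetical.

```python
# A minimal sketch of stereo depth recovery, assuming rectified left/right
# grayscale images; focal length and baseline values are placeholders.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point

FOCAL_PX = 1200.0   # focal length in pixels (from calibration, hypothetical)
BASELINE_M = 0.10   # camera separation in meters (hypothetical)

# Depth from the pinhole relation Z = f * B / d, valid where d > 0.
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
```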

A payoff of these techniques is that manufacturers can really take the measure of what they are making. In the past, they might only get four or five data points on a surface. With 3-D vision, they can get thousands of readings. The much-higher-density point cloud should make it possible to hold parts to tighter tolerances and, thus, lead to more precise manufacturing.
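As a rough sketch of such a tolerance check, the snippet below fits a reference plane to a dense point cloud and flags parts whose worst deviation exceeds a limit. A fitted plane stands in for a real CAD reference, and the file name and tolerance are placeholders.

```python
# A minimal sketch of a tolerance check on a dense point cloud. A fitted
# plane stands in for the real CAD reference; the file name and tolerance
# are illustrative.
import numpy as np

points = np.load("scan_cloud.npy")  # measured (x, y, z) rows, hypothetical file

# Fit a least-squares plane z = ax + by + c to the cloud.
A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)

# Deviation of each point from the fitted plane, then a pass/fail check.
residuals = points[:, 2] - A @ coeffs
TOLERANCE = 0.05  # illustrative limit, in the cloud's units
print("max deviation:", np.abs(residuals).max())
print("within tolerance:", bool(np.abs(residuals).max() <= TOLERANCE))
```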

As for the future, technologies developed for cell phones look very promising, Falcone said. He is particularly interested in liquid lens techniques because they allow the smooth adjustment of a lens through a range of focal lengths. All that is needed is a small voltage change. Applying this to machine vision will enable a lens to home in on the best possible focal length for a given task, which will lead to the best possible image and the best possible output.
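A common way to pick that best focal setting is a sharpness sweep: step the lens through its range, score each frame and keep the sharpest. The sketch below uses the variance-of-Laplacian focus measure; set_lens_voltage() and grab_frame() are hypothetical stand-ins for vendor-specific driver calls, and the voltage range is illustrative.

```python
# A minimal sketch of a liquid-lens focus sweep: step through drive
# voltages, score each frame's sharpness, keep the best setting.
import cv2
import numpy as np

def sharpness(gray: np.ndarray) -> float:
    """Variance of the Laplacian: higher means better focus."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

best_v, best_score = None, -1.0
for v in np.linspace(24.0, 70.0, 24):  # illustrative voltage range
    set_lens_voltage(v)   # hypothetical lens-driver call
    gray = grab_frame()   # hypothetical camera call
    score = sharpness(gray)
    if score > best_score:
        best_v, best_score = v, score
set_lens_voltage(best_v)  # settle on the sharpest setting
```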

Applied Robotics Inc. of Glenville, N.Y., specializes in end-of-arm tooling for robotic systems. Some of its products involve vision and other photonic systems typically used for inspection. A robot will pick up the vision system, check incoming parts, establish reference points for subsequent processing, and then drop off the camera to pick up the actual tool that will be used for processing or manufacturing.

Three-dimensional vision is a trend largely because it solves a problem, said Applied Robotics applications engineer Henry Loos. With imaging in only two dimensions, incoming components arriving at an industrial robot work cell must be precisely arranged and oriented so that the robot can find the parts on which to work. Three-dimensional imaging enables more relaxed packaging.


Using a vision system, a robot identifies dice, picks them up and brings them back to the edge of a table. The same vision solution is used in an industrial setting to pick up components or to weld a bead to join two pieces of metal.

“To have a 3-D vision system on a robot that can look into a bin, determine what orientation the part is that it’s about to pick up, adapt to it and pick it up makes the supply chain much more efficient and much less expensive,” Loos said.

Doug Erlemann is business development manager for vision products at the Southington, Conn.-based US office of the industrial camera maker Baumer Ltd. Most six-axis industrial robotics applications do not require speeds of more than a few frames per second, he said.

However, what is absolutely vital is that there is no downtime resulting from a vision system failure. The penalty for a stopped production line can run in the tens of thousands of dollars a minute, and robot makers understandably want to avoid that cost.


The need for uptime explains why robot makers hate wires – another potential failure point – and prefer power-over-Ethernet technology. The same requirement also explains why Baumer cameras are sealed in enclosures to offer protection against dust and water ingress.


A robot uses a vision attachment to locate a part for subsequent processing, which is done by a work tool.


Another need explains a push toward higher resolution that Erlemann predicts is coming. Many vision systems today are used to fix a location in space. From then on, the robot ignores the sensor because it knows where the work product is and its relation to the robot. But in the future, cameras may be called upon to do more than simply locate a point. They could play another role as manufacturers put 2-D bar codes or data matrix codes on components, assemblies and subpanels.

“They need the camera to not only send the position to the software, they need to actually read that code,” Erlemann said.
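Locating and decoding can happen in one pass with off-the-shelf tools. The sketch below assumes the open-source pylibdmtx library (a wrapper for libdmtx) and a grayscale image of the part; the file name is a placeholder.

```python
# A minimal sketch of locating and decoding a Data Matrix code in one pass,
# assuming the open-source pylibdmtx library is available.
import cv2
from pylibdmtx.pylibdmtx import decode

gray = cv2.imread("part_image.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image
for result in decode(gray):
    print("payload:", result.data.decode("utf-8"))
    print("location:", result.rect)  # left, top, width, height in pixels
```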

Photonics also allows robots to see in other ways. A case in point comes from PaR Systems Inc. of Shoreview, Minn. The company makes robotic and materials handling equipment and systems. Its laser center of excellence in Pretoria, South Africa, produces a line of atmospheric and high-pressure transversely excited CO2 lasers.

Those lasers power robotic nondestructive inspection of composite materials used in aviation and elsewhere. The system consists of two lasers, one of which projects a pulse onto the object being inspected. The short pulse causes the material in the object to vibrate ultrasonically, much as it would if tapped by a hammer. The resulting resonance is picked up by a second laser, allowing nondestructive examination of the material.

“It will show you if there is a crack or whether there’s an inclusion or a deformity of some sort,” said Carel Swart, laser center general manager.
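A simple way to turn that resonance signal into such a crack-or-inclusion flag is to compare its frequency spectrum against a known-good baseline, since a flaw shifts or damps the resonances. The sketch below assumes NumPy and recorded signals saved to disk; the sample rate, file names and threshold are illustrative, not PaR Systems’ actual processing.

```python
# A minimal sketch of flaw screening on the second laser's return signal:
# compare its frequency spectrum against a known-good baseline. The sample
# rate, file names and threshold are illustrative placeholders.
import numpy as np

FS = 5_000_000  # sample rate in Hz (hypothetical)

def spectrum(signal: np.ndarray) -> np.ndarray:
    """Windowed magnitude spectrum of the ultrasonic response."""
    return np.abs(np.fft.rfft(signal * np.hanning(len(signal))))

baseline = spectrum(np.load("good_part.npy"))  # reference response
test = spectrum(np.load("test_part.npy"))      # part under inspection

# A crack or inclusion shifts or damps resonances, so a large spectral
# difference flags the part for closer examination.
deviation = np.linalg.norm(test - baseline) / np.linalg.norm(baseline)
print("flagged for review:", deviation > 0.25)  # illustrative threshold
```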

The system is fast enough to quickly inspect components measuring meters across, and it can scan curved parts. It uses optics mounted on robot arms to deliver the pulse and collect the resulting signal. The genesis of the technique came from a request by defense contractor Lockheed Martin, which had to devise a way to inspect the composite materials being used in the F-35 Joint Strike Fighter, now nearing the end of development.


A two-laser system performs nondestructive inspection of composite material, the type found in advanced aircraft and other applications.


However, the inspection technique is not confined to aviation. It also is being considered for inspection of wind turbine blades, Swart reported.

In another application, robots equipped with the same lasers can remove paint and resins. This is useful because, although composite materials do not tolerate the traditional harsh chemicals used for paint removal, the Federal Aviation Administration requires repainting an aircraft’s surface every five years or so.

Fortunately, paint removal can be done using a CO2 laser mounted on an articulated robot arm. It removes paint, Swart said, “causing no harm to the substrate and no temperature transfer to the substrate. The maximum temperature is around 50 to 60 °C.”

As for where vision systems and other photonic sensors may ultimately be headed, one clue could soon be flying overhead. John L. Junkins, an aerospace engineering professor at Texas A&M University in College Station, has been developing sensors and software that will enable an autonomous spacecraft to spot other objects at a distance of miles and to dock with them. At that point, the craft would work on its target; debris would be removed via a safe re-entry path. A salvageable satellite, on the other hand, might be repaired.


A prototype laser radar sensor under development for spacecraft operations could find its way into use by earthbound industrial robots, allowing them to see in 3-D with precision and clarity.


Junkins is working on a prototype laser radar sensor and expects to have it operational this year. It will combine high-speed, accurate and dense laser ranging with full-color, high-definition video imaging.
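One way such ranging and video streams come together is by projecting each lidar point into the color image and sampling its pixel, yielding a full-color point cloud. The sketch below assumes points already expressed in the camera’s coordinate frame with z > 0; the intrinsic matrix is a placeholder, not the actual sensor’s.

```python
# A minimal sketch of fusing lidar range data with color video: project
# each 3-D point into the image with a pinhole model and sample its color.
# The intrinsic matrix is an illustrative placeholder.
import numpy as np

K = np.array([[1500.0,    0.0, 960.0],   # hypothetical camera intrinsics
              [   0.0, 1500.0, 540.0],
              [   0.0,    0.0,   1.0]])

def colorize(points_cam: np.ndarray, image: np.ndarray) -> np.ndarray:
    """Attach RGB to lidar points given in camera coordinates (z > 0)."""
    uvw = (K @ points_cam.T).T                   # project with the pinhole model
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)  # perspective divide
    h, w = image.shape[:2]
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colors = np.zeros((len(points_cam), 3), np.uint8)
    colors[inside] = image[uv[inside, 1], uv[inside, 0]]
    return np.hstack([points_cam, colors]).astype(np.float32)  # x, y, z, r, g, b
```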

The laser radar hardware is being developed by Systems & Process Engineering Corp. of Austin, Texas. The cost of components in laser radar has fallen from tens of thousands of dollars to only a few hundred, bringing into sight the day the technology might show up on a soldier’s helmet, an industrial robot or elsewhere, said Bradley Sallee, vice president of sensor systems. Advances in photonics have been accompanied by improvements in computing, and the combination could soon pay off in down-to-earth applications.

“The computational power is increasing rapidly, and the sensors are getting smaller,” Sallee said. “We’re right on the cusp of it being practical.”

Published: October 2012
