Machine Vision
John Mortimer
Often a problem for optical inspection systems, specular reflection is, in this case,
the key to system efficiency.
Thanks to LEDs and off-the-shelf CCD camera technology, robots can
now insert automotive windshields on the fly as a car moves along the production
line. The first such application in the world is on the Ford Fiesta and Mazda 121 assembly
lines at a Ford Motor Co. plant in Dagenham, UK. What makes this application unique
is not just that the glass is inserted as the line moves, but also that the specular
reflection from the vehicles’ shiny surfaces provides the positioning information
the robots need for accurate part placement (Figure 1).
Figure 1. LEDs combined with CCD cameras allow assembly robots at
a Ford plant in the UK to insert windshields while cars move along the line. Specular
reflection, even from surfaces painted glossy black, provides the feedback to guide
positioning robots.
Applications involving transparent materials,
such as glass, and polished metals often are a challenge to optical inspection technology,
partly because rounded features on any specular surface act as cylindrical mirrors,
producing reflections that can confuse data readings. Sensors on the Dagenham assembly
lines, however, take advantage of this specular reflection — even from glossy
black painted surfaces — to guide positioning robots.
The insertion application, developed
by Oxford Sensor Technology Ltd. in Abingdon, UK, and systems integrator Prophet
Control Systems Ltd. of Stanford-le-Hope, UK, relies on three robot arms, each of
which has four specular reflection sensor units mounted on its gripper frame. Each
unit comprises an LED and two coplanar CCD cameras (Figure 2). The centrally mounted
stack of LEDs provides illumination, while the linear CCD cameras, placed on either
side of the diodes, detect a series of narrow peaks that represent images of the
light source formed by each curved feature. Matching peaks from each CCD, combined
with triangulation, allows the calculation of the location of the surface relative
to the sensor.
Figure 2. During on-the-fly windshield insertion, linear CCD cameras,
one on each side of centrally mounted LEDs, detect a series of narrow peaks representing
images of the light source formed by each curved feature. Matching peaks from each
CCD, combined with triangulation, allows the calculation of the location of the
surface relative to the sensor.
Using this concept, the Dagenham line
can process 100 windshields an hour, yet requires only two operators instead of
the six needed previously. Capital costs also are reportedly lower than
on the old line.
One reason it has taken automakers
so long to automate windshield insertion in this manner is the high skill level
needed in the process. Get it wrong, and the result may be wind noise and
high warranty claims for leakage.
On higher-volume vehicles, including
vans like the Ford Transit, most automakers use robots to insert the glass with
its polyurethane adhesive sealant into the windshield opening. In North America
alone, robots account for 70 to 80 percent of windshield insertions. Manual glazing
with assists is typical in lines with production rates of 30 jobs or fewer per hour.
Until the Dagenham line, every direct glazing installation relying on robots
brought the vehicle body to rest at the glazing station before glass insertion.
On some lines, a conveyor must still
move the body to a station where it can be held steady during glazing. Once the
body is static at a known reference point, laser-based sensors mounted above the
vehicle provide the data that the robot requires to position the glass centrally
in the aperture. It also may be necessary for the robot to hold the glass in place
under pressure for some time. Ultimately, even the best of these direct glazing
systems can require rework of the product to ensure a snug fit.
Using specular reflection
This is not necessarily the case at Dagenham.
The on-the-fly system uses a light beam to trigger windshield insertion. When an
oncoming car body breaks the beam, the program begins. To gather data, the system
relies on the fact that the sheet steel forming the basis of the windshield aperture
includes an external radius or corner, an internal radius and an edge. Each feature
produces a specular reflection image. As one camera studies this scene, it can capture
the specular reflection from the changes in profile.
Meanwhile, the second camera produces
a similar image. Using these two images and the known offset between the cameras, it is possible to triangulate
the position of the surface relative to the sensors, and hence the distance between the windshield and
the opening. Thus, the sensors allow automatic location of vehicle body features
in a manner that can guide robots to insert windshields and rear and side windows
while the car is moving on the conveyor.
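As a rough sketch of the triangulation step, the standard two-camera (stereo) formula below converts a matched pair of peak positions into a range and lateral offset. The 8 mm focal length and 60 mm baseline are assumed values chosen only to give numbers near the sensor's quoted 250 mm standoff; the real sensor's calibrated geometry is not published.

```python
# Minimal sketch of the triangulation idea using the textbook stereo
# formulation. Focal length, baseline and peak positions are illustrative
# assumptions, not the sensor's actual calibration.

def triangulate_peak(x_left_mm, x_right_mm, focal_mm=8.0, baseline_mm=60.0):
    """Return (range, lateral offset) in mm of a reflected-LED image from the
    matched peak positions measured on two coplanar linear CCDs.

    x_left_mm, x_right_mm: peak positions on the left and right CCDs, measured
    in the same direction from each camera's optical axis.
    """
    disparity = x_left_mm - x_right_mm
    if disparity <= 0:
        raise ValueError("matched peaks must yield positive disparity")
    z = focal_mm * baseline_mm / disparity          # distance from sensor plane
    x = z * x_left_mm / focal_mm - baseline_mm / 2  # offset from sensor centre
    return z, x

# Example: a peak imaged 2.1 mm off-axis on one CCD and 0.1 mm on the other,
# with an assumed 8 mm lens and 60 mm baseline, lies about 240 mm away --
# consistent with the roughly 250 mm standoff quoted for the Dagenham sensors.
print(triangulate_peak(2.1, 0.1))
```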
The sensors on each robot gripper frame
have a high-speed serial link to a computer. Using data supplied by the track-mounted
encoder, the distances of the sensors from predefined features of the car body around
the windshield aperture are calculated, feeding real-time correction data back to
the robot. Each sensor has a standoff range of about 250 mm (the distance it must
be from the component being viewed). The sensor field of view, though, is 100 mm.
These specifications imply that the object under view must be accurately located.
In the case of the Fiesta bodies traveling down the conveyor track, this is achieved
with metal guides that bring the skillet conveyor carrying the body into position
for the start of the glass insertion cycle.
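The article does not describe the interface between the sensors, the track encoder and the robot controller, but conceptually the correction amounts to combining the encoder-reported travel of the body with the sensor-measured offsets of a taught feature. The sketch below is purely illustrative; the data structure, coordinate frames and numbers are assumptions.

```python
# Hedged sketch of how encoder travel and sensor-measured offsets might be
# combined into a correction for the robot. The real Dagenham interface,
# frames and filtering are not published.

from dataclasses import dataclass

@dataclass
class Correction:
    along_track_mm: float   # how far the body has moved since the cycle began
    lateral_mm: float       # side-to-side error of the aperture vs. nominal
    vertical_mm: float      # up-down error of the aperture vs. nominal

def robot_correction(encoder_counts, counts_per_mm,
                     measured_feature_mm, nominal_feature_mm):
    """encoder_counts: track encoder reading since the light beam was broken.
    measured_feature_mm / nominal_feature_mm: (lateral, vertical) position of a
    reference feature on the aperture as seen by the sensors vs. as taught."""
    travel = encoder_counts / counts_per_mm
    d_lat = measured_feature_mm[0] - nominal_feature_mm[0]
    d_ver = measured_feature_mm[1] - nominal_feature_mm[1]
    return Correction(travel, d_lat, d_ver)

# e.g. the body has advanced 812.5 mm and the aperture sits 1.4 mm left and
# 0.6 mm high of the taught position:
print(robot_correction(6500, 8.0, (1.4, 0.6), (0.0, 0.0)))
```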
While the track is moving, an operator
places a windshield in a rack. The first robot picks up the windshield, passes it
to an automatic primer and sets it down on a jig. A second robot picks up the glass
and moves it in front of the dispenser, where a polyurethane bead is placed around
the perimeter. It is then set down and picked up by another robot that places it
in the opening.
Done in 36 seconds
The encoder on the conveyor produces data that,
with the information from the specular reflection sensors on the third robot’s
grippers, translate into correction data to drive the robot arm. The robot inserts
the glass as the car travels backward down the line; however, it could just as easily
insert glass into a car body moving toward it. Typical cycle time is one vehicle
every 36 seconds.
The basic reason that this system works
in such an application is the way the image forms. In essence, the external and
internal corners form small-radius convex and concave cylindrical mirrors, respectively.
From standard optics theory, such mirrors
have a focal length equal to half the radius of curvature. In the case of
the car body, the radii of curvature of the corners typically might be a few millimeters.
Where the standoff distance of the sensor (typically 200 to 250 mm) is large compared
with the radii of the corners, an image of the light source forms close to the focus
of the mirrors, located halfway between the center of curvature and the surface
(Figure 3). These are the images of the sensor light source that the cameras detect.
When the system reads these data, small corrections are applied to reference position
measurements relative to the center of curvature.
Figure 3. In the car body, the radii of curvature of the corners of the steel frame may
typically be a few millimeters. Where the standoff distance of the sensor is large
compared with the radii of the corners, an image of the light source forms near
the focus of the mirrors, halfway between the center of curvature and the surface.
The cameras detect these images of the sensor light source.
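The claim that the image sits essentially at the focus can be checked numerically with the standard spherical-mirror relation 1/v + 1/u = 2/R. The 3 mm corner radius and 250 mm standoff below are representative of the figures quoted in the article, not measured values.

```python
# Quick numerical check of the optics argument above, using the standard
# spherical-mirror relation 1/v + 1/u = 2/R. The 3 mm corner radius and
# 250 mm standoff are representative figures, not measured values.

def image_distance(object_dist_mm, radius_mm):
    """Distance of the image from the mirror surface (negative = virtual,
    behind the surface) for a mirror of radius of curvature radius_mm."""
    focal = radius_mm / 2.0
    return 1.0 / (1.0 / focal - 1.0 / object_dist_mm)

# Convex (external) corner, R = -3 mm, LED effectively 250 mm away:
print(image_distance(250.0, -3.0))   # about -1.49 mm, just behind the surface
# Concave (internal) corner, R = +3 mm:
print(image_distance(250.0, +3.0))   # about +1.51 mm, just in front of the surface

# Both are within about 1 percent of the focal length R/2, so the cameras see
# a sharp, stable image of the LED essentially pinned to each corner.
```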
One benefit of this system over more
traditional optical methods is its insensitivity to the color of the surface. This
is important from the viewpoint of time savings and, therefore, economics. Other
sensing systems can take several seconds to read the color of the
car under inspection and feed data to the computing system.
The system is insensitive to color
because the positions of the peaks formed in the images by the specular reflections
depend only on the geometric form of the surface. The strength of these signals
will depend on the quality of the paintwork — its luster, for example —
but is largely independent of the color of the surface. This is in direct contrast
to conventional techniques that rely on some degree of Lambertian scattering to
detect a signal.
Adaptability
Although the specular reflection hardware (LEDs and off-the-shelf CCD cameras)
may not be that complex, extensive work
was necessary to develop the software to control and calibrate the system. As a
result, during calibration, the specular reflection sensor system can accommodate
unusual features — such as spot welds or sealant — that might occur
from time to time in the viewing area. An easy-teach facility also allows operators
to quickly set up the system for new features.
In addition, although the UK-based
installation uses robots from ABB UK Ltd. of Milton Keynes, UK, it is possible to
interface sensors with other makes of robot, including those from Comau, Kawasaki,
Kuka and Motoman. System bus speed limits each sensor’s viewing time, but
under normal conditions, the unit can accept 10 readings per second, which is more
than adequate to deal with installations of the type found at Dagenham.
For more complex environments, readings
can be ramped up to 100 measurements per second. Up to 15 individual sensors can
be linked to a single interface card, enabling an installation to cope with multiple
models passing down the line. Such might be the case if sedans, hatchbacks and station
wagons are traveling along the same conveyor line.
Meet the author
John Mortimer is a retired journalist and part-time
consultant to Oxford Sensor Technology Ltd. of Abingdon, UK.