
3D Vision Helps Operators Handle Hazardous Waste

HANK HOGAN, CONTRIBUTING EDITOR [email protected]

The hazardous nature of radioactive materials requires that they be handled from a distance using a manipulator, a skill that can take an operator years to master. Work involving radioactive materials is growing now that many nuclear plants are being decommissioned or taken apart for disposal.

In the U.S., the Pilgrim Nuclear Power Station in Massachusetts is in the process of being decommissioned. Similar efforts are planned or underway at facilities around the world.

The robotic setup (top) and the haptic device (bottom). Courtesy of Naresh Marturi, Extreme Robotics Laboratory/National Centre for Nuclear Robotics, University of Birmingham.

But a particular issue bedevils the handling of the most hazardous waste.

“There is a very big problem with depth perception,” said Naresh Marturi, a senior research scientist in the Extreme Robotics Lab (ERL) at the University of Birmingham in England.

Marturi and ERL research engineer Maxime Adjigble are combining 3D vision with tactile feedback to overcome the depth perception issue. The technology could benefit the nuclear industry, environmental cleanup, and other operations involving materials that must be handled carefully.

Human guidance

For safety reasons, the nuclear industry keeps people in a material handling loop — a closed-loop system that minimizes exposure to potentially hazardous materials. As part of this system, the industry uses industrial robots that operate under human guidance. These human operators might use a joystick to move a grasping manipulator at the end of a robot arm while viewing the scene through thick glass or through a 2D camera and screen. The lack of 3D information leads to problems with depth perception when people try to steer a grasper into position.

“They often collide the manipulator with the object. And once the manipulator collides, you can’t really go in there to fix it,” Marturi said. “Sometimes it’s not even possible to service the robot. You have to replace it.”

In the experimental setup, 3D point cloud images of the objects are shown on a screen. A tactile haptic feedback device helps the operator guide the robot arm in correct movement. Courtesy of Naresh Marturi, Extreme Robotics Laboratory/National Centre for Nuclear Robotics, University of Birmingham.


After the 3D point clouds are processed, users can view objects on a screen and rotate them in 3D. Courtesy of Naresh Marturi, Extreme Robotics Laboratory/National Centre for Nuclear Robotics, University of Birmingham.

These are direct costs — fixing a manipulator or replacing a robot — and there are also indirect ones because operations shut down when a robot collides with an object. Training costs are also significant because of the time it takes operators to gain the expertise necessary to successfully use such a remote manipulation setup.

The researchers devised a solution to these challenges that helps operators guide a robotic arm on the right path. The team documented the project and its results in an IEEE conference paper titled “An assisted telemanipulation approach: combining autonomous grasp planning with haptic clues” (www.doi.org/10.1109/iros40897.2019.8968454).

In their work, the researchers added depth perception by using an Ensenso 3D camera from IDS. These cameras generate a 3D point cloud using two 1-MP CMOS sensors and structured visible or infrared light. The system projects a light pattern onto a surface to determine the 3D location of points on the surface by looking at distortions in the pattern.
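The article does not describe the camera's internals, but the underlying geometry is standard stereo triangulation: a projected pattern feature seen by the two sensors shifts by a disparity d, and its depth follows Z = f * b / d, where f is the focal length and b the baseline between the sensors. The short Python sketch below illustrates the relation with assumed focal-length and baseline values, not the Ensenso's actual calibration.

    import numpy as np

    def disparity_to_depth(disparity_px, focal_length_px=1100.0, baseline_m=0.10):
        """Depth in meters from stereo disparity in pixels, via Z = f * b / d."""
        d = np.atleast_1d(np.asarray(disparity_px, dtype=float))
        z = np.full_like(d, np.inf)          # zero disparity means a point at infinity
        z[d > 0] = focal_length_px * baseline_m / d[d > 0]
        return z

    # With these assumed values, a feature shifted 220 px between the two sensors
    # sits about 0.5 m away; a 55-px shift puts it at roughly 2 m.
    print(disparity_to_depth([220.0, 55.0]))   # [0.5 2.0]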

The researchers chose the camera because of its performance and speed, according to Marturi. In a proof-of-principle demonstration, they mounted the camera on the end of a robot arm.

During operation, the arm moves around the object and generates 3D point clouds from various vantage points. Software then uses the data to produce grasping solutions, or locations where the manipulator can best grip an object. The operator then chooses which grasp point to use. After this point is chosen, the software calculates a trajectory for the arm to follow to move the manipulator into position and grab the item of interest.
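The paper, rather than the article, details how grasping solutions are scored, so the following is only a toy illustration of the general idea: given a point cloud, rank candidate grasp points by how flat the local surface patch is, on the assumption that flat patches are easier to grip between parallel jaws. The function name and parameters are hypothetical, not the ERL planner's.

    import numpy as np

    def rank_grasp_candidates(points, k=20, n_candidates=5):
        """Return indices of the n_candidates points with the flattest neighborhoods."""
        flatness = np.empty(len(points))
        for i, p in enumerate(points):
            # k nearest neighbors of p (brute force for clarity)
            nbrs = points[np.argsort(np.linalg.norm(points - p, axis=1))[:k]]
            # smallest eigenvalue of the neighborhood covariance ~ deviation from a plane
            cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
            flatness[i] = np.linalg.eigvalsh(cov)[0]
        return np.argsort(flatness)[:n_candidates]

    cloud = np.random.rand(500, 3)           # stand-in for a captured point cloud
    print(rank_grasp_candidates(cloud))      # candidate indices the operator could pick from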

Haptic feedback

Finally, the operator moves a joystick or other controller, and the arm responds. A key innovation is that the operator experiences haptic feedback. A deviation from the calculated path will result in the operator feeling a tug on the joystick, a force that Marturi compared to feeling as if the controller were connected to springs. A feeling of increasing resistance occurs as the robot moves farther from the computed path. This feedback helps to ensure that the grasping manipulator finds its target, while the robot arm is prevented from colliding with other objects.
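The spring analogy maps onto a simple control idea, often called a virtual fixture: find the closest point on the planned trajectory and feed back a force proportional to the deviation from it. The sketch below shows the basic calculation; the stiffness and force cap are illustrative assumptions, not values from the ERL system.

    import numpy as np

    def guidance_force(commanded_pos, path_points, stiffness=80.0, max_force=5.0):
        """Virtual-spring force (N) pulling the operator back toward the planned path."""
        path_points = np.asarray(path_points, dtype=float)
        # closest point on the planned trajectory to the commanded position
        closest = path_points[np.argmin(np.linalg.norm(path_points - commanded_pos, axis=1))]
        force = stiffness * (closest - commanded_pos)   # Hooke's law toward the path
        magnitude = np.linalg.norm(force)
        if magnitude > max_force:                       # cap what the haptic device renders
            force *= max_force / magnitude
        return force

    # A straight-line path along x; a command 3 cm off the path feels a pull back toward it.
    path = np.stack([np.linspace(0, 0.5, 50), np.zeros(50), np.zeros(50)], axis=1)
    print(guidance_force(np.array([0.25, 0.03, 0.0]), path))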

After the successful demonstration, the researchers made hardware and software improvements and refinements. They installed multiple cameras to enable the generation of a 3D point cloud without having to move a camera-mounted arm around the object. And they switched from using CPUs for making point cloud computations to using GPUs. These innovations allowed them to reduce the time it took to gather the data and calculate the proper motion.

“Now we are combining multiple point clouds together,” Adjigble said. “So, basically the point cloud is instantaneous.”
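At its core, combining clouds from several fixed, calibrated cameras means transforming each camera's points into a common robot-base frame using that camera's extrinsic pose and concatenating the results, which is what makes the capture effectively instantaneous. A minimal sketch follows, assuming each pose is a rotation R and translation t; the poses shown are made up.

    import numpy as np

    def fuse_point_clouds(clouds, poses):
        """clouds: list of (N_i, 3) arrays; poses: list of (R, t) per camera, into the base frame."""
        fused = [pts @ R.T + t for pts, (R, t) in zip(clouds, poses)]
        return np.vstack(fused)

    cam1 = np.random.rand(1000, 3)
    cam2 = np.random.rand(1000, 3)
    identity = (np.eye(3), np.zeros(3))
    shifted = (np.eye(3), np.array([0.5, 0.0, 0.0]))   # second camera 0.5 m to the side
    print(fuse_point_clouds([cam1, cam2], [identity, shifted]).shape)   # (2000, 3)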

The technology still needs further refinement. Marturi estimated that it currently sits at technology readiness level 4 or 5. The next step is reaching level 6 via a prototype demonstration in a decommissioning project at a nuclear plant, an accomplishment that may take time because the nuclear industry treats safety as its highest priority.

While the aim of the team’s project was to demonstrate nuclear materials handling, Marturi said the technology could also be used in many other applications that involve hazardous waste.

Published: May 2022
Vision in Action, 3D vision, manipulator, 3D point clouds, robot, Ensenso, IDS, University of Birmingham
