Pittsburgh-based Ace Wire Spring & Form Co. Inc. is a leading maker of custom springs and wire forms, including metal wire that is formed into hooks. Hook production is a complex, 10-step process. It begins with bending the wire into 5-cm hooks, passing the hooks as bulk material into bins, and inserting them into a press that forms the ends. After the hook ends are pressed, they are brought to another station where an operator places a bead on the flattened end of the hook, and the bead and the hook are then pressed together with a spring. The bead holds the spring in place as the spring is tapered around it. The final product is a swivel hook extension spring that can be used as a tensioning spring for fan belt pulleys.

Untangling one hook from a cluttered pile relies on proprietary 3D vision algorithms. Courtesy of CapSen Robotics Inc.

A key part of the production process involves moving a single hook out of a cluttered pile, a classic bin picking task that was previously performed manually at Ace. But it quickly became clear that, unlike in simpler bin picking applications, grasping a hook from a pile can be exceedingly complicated; the hooks are dumped into the bin and often become tangled. Picture a plate of sticky spaghetti, with the robot having to free a single noodle from the rest.

Random bin picking is one of the ultimate challenges in robotics. It requires numerous complex steps, such as capturing the objects with a camera, analyzing them with an image processing system, and determining their orientation. The position of the object and its gripping points, together with the optimal movement, must be communicated to the robot, which then sends the gripper arm on its way (a simplified sketch of this perception-to-motion loop appears below). Since several objects lie randomly on top of each other, the system must decide which object is the easiest to grasp, a difficult task considering that many of the parts are only partially visible. If the parts are entangled with each other, it is particularly challenging to grasp a single object, because there are many possible combinations of interlocked parts. Complex rotations of the object are often necessary, as is putting the part down and picking it up again to grasp it in the right place with the right orientation. It is often not possible to force cluttered objects into oriented, graspable positions by shaking them or by using methods such as magnetic interference.

Machine learning + 3D vision

Untangling one hook from a jumble begins with vision, specifically proprietary 3D vision algorithms. These combine classical geometric CAD-matching techniques with modern machine learning methods to achieve high detection accuracy across a wide range of object shapes, sizes, and materials. Using innovative hardware, advanced algorithms, and engineering expertise, a robot now picks hooks out of the bins and places them into the press. A key factor in providing the robot with the necessary spatial intelligence to manage the process is CapSen Robotics Inc.’s complete solution, which includes 3D vision software, full motion planning, and control. Automating this key part of the process has paid dividends.
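The capture-analyze-plan-execute cycle described above can be summarized in a short sketch. Everything below is a simplified, hypothetical outline in Python: the function names and data structures are placeholders invented for illustration, not CapSen’s software or API.

```python
"""Illustrative sketch of a generic bin picking cycle.

All functions and data here are simplified placeholders; they only
outline the steps described in the article: capture, pose estimation,
grasp selection, motion planning, and execution.
"""
from dataclasses import dataclass
import random


@dataclass
class GraspCandidate:
    object_id: int      # which hook in the pile
    pose: tuple         # hypothetical 6-DoF pose (x, y, z, roll, pitch, yaw)
    visibility: float   # fraction of the part visible to the camera (0..1)
    entanglement: float # estimated risk that the part is interlocked (0..1)


def capture_point_cloud():
    """Stand-in for grabbing a 3D image of the bin."""
    return [random.random() for _ in range(1000)]  # dummy point cloud


def estimate_poses(cloud):
    """Stand-in for CAD matching plus learned detection of candidates."""
    return [
        GraspCandidate(i, (0.0,) * 6, random.random(), random.random())
        for i in range(5)
    ]


def select_easiest(candidates):
    """Prefer parts that are well exposed and unlikely to be tangled."""
    return max(candidates, key=lambda c: c.visibility - c.entanglement)


def plan_motion(candidate):
    """Stand-in for collision-free path planning to the chosen grasp."""
    return ["approach", "close_gripper", "lift", "untangle", "place_in_press"]


def execute(path):
    for step in path:
        print("robot:", step)


if __name__ == "__main__":
    cloud = capture_point_cloud()
    candidates = estimate_poses(cloud)
    target = select_easiest(candidates)
    execute(plan_motion(target))
```

The scoring in select_easiest stands in for the harder judgment the article describes: choosing the part that is most visible and least likely to be interlocked with its neighbors.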
“It’s so nice to be able to do what we were trained to do on the floor, instead of putting hooks into the press half of the day,” said Mike Valoski, a line supervisor at Ace.

CapSen Robotics’ combination of 3D vision and motion planning software can turn any conventional industrial robot arm into a bin picking and machine-tending cell. Courtesy of CapSen Robotics Inc.

The new robotic cell at Ace results in fewer mistakes and less downtime on this part of the production line. Working conditions are safer for employees, who can now concentrate on less mundane tasks. This leads to higher output and better performance and quality. The company is set to increase production and cut costs as a result. Plans are underway to automate other key parts of its manufacturing using the technology, which was supplied by CapSen.

“We are already looking into other cells along the production line to install this innovative solution,” said Richard Froehlich, owner of Ace.

Success in this first part of the automation project took considerable effort because the task involves grasping a single hook from a chaotic pile. The underlying vision processing makes intelligent use of a GPU, delivering results roughly 100× faster than would otherwise be possible. The system also processes the next image and plans the path for the next object while the robot is still occupied with picking and placing the previous object, which makes the system even faster; a simplified sketch of this pipelined scheme appears below.

But even the best algorithms are of little help without suitable hardware, especially the right gripper. The Precise PAVS6 was the collaborative robot of choice to manage Ace’s first robotic project. Standard, out-of-the-box grippers did not meet the requirements of the project, so CapSen deployed an SMC parallel gripper motor and customized fingers that could pick up the hooks in two ways.

“We had to try about 20 different finger designs to find the perfect solution and adapt it to the needs, using digital I/O to connect the computer to the inputs and outputs of the gripper and the sensors and magnets there, and ultimately to integrate them into the overall system,” CapSen CEO Jared Glover said.

A stable grasp of the hooks is critical for picking and disentangling the parts. In operation, the gripper first places the hook on a custom peg fixture. The robot then picks up the hook again with the customized fingers, placing it into the press in the proper orientation each time before dropping it into another bin. “That’s why we had to design the fingers to be able to pick up the hooks in two different ways,” Glover said.

The precursor to CapSen’s efficient and advanced algorithms for processing geometric data was developed by Glover at MIT. Among other things, the software enabled a robot playing ping-pong to quickly and accurately detect the spin of the ping-pong balls, as well as learn how to produce various ball spins and trajectories.

But to achieve the detection accuracy required for small objects such as the hooks at Ace, CapSen combined its state-of-the-art geometry algorithms with new techniques in machine learning. These proprietary machine learning models can be trained to recognize a new type of object using small data sets of only a few hundred images, versus the hundreds of thousands or even millions used for typical deep learning systems.

“Modern machine learning, which is loosely inspired by parts of the human brain, is very good at training computers directly from data instead of with carefully handcrafted rules and formulas,” Glover said.
“However, computers have always been much better — more accurate and now trillions of times faster — at mathematical computation than humans are.”
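The speedup described earlier, in which the next image is processed and the next path planned while the robot is still moving, can be illustrated with a brief, hypothetical sketch. The functions below are simplified stand-ins built on Python’s standard concurrency tools; they do not represent CapSen’s implementation.

```python
"""Hypothetical sketch of pipelining perception with robot execution.

While the robot picks and places the current part, the vision stage
already captures and analyzes the next image and plans the next path,
as described in the article. All functions are simplified stand-ins.
"""
import time
from concurrent.futures import ThreadPoolExecutor


def perceive_and_plan(cycle):
    """Stand-in for image capture, pose estimation, and path planning."""
    time.sleep(0.2)  # pretend GPU-accelerated processing
    return f"plan for part {cycle}"


def execute_pick(plan):
    """Stand-in for the robot picking, untangling, and placing a hook."""
    time.sleep(0.5)  # robot motion dominates the cycle time
    print("executed:", plan)


def run(cycles=5):
    with ThreadPoolExecutor(max_workers=1) as vision:
        next_plan = vision.submit(perceive_and_plan, 0)
        for cycle in range(cycles):
            plan = next_plan.result()  # waits only if vision is slower
            if cycle + 1 < cycles:
                # start processing the next part while the robot is busy
                next_plan = vision.submit(perceive_and_plan, cycle + 1)
            execute_pick(plan)


if __name__ == "__main__":
    run()
```

Because the stand-in robot motion takes longer than the stand-in vision step, each new plan is ready by the time the arm finishes the previous pick, so perception adds almost nothing to the overall cycle time.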