In the fast-paced world of industrial automation, precision is paramount. Pick-and-place machines, a staple in various industries, increasingly depend on digital technology to enhance efficiency and accuracy. However, traditional pick-and-place solutions often fall short, particularly in their ability to generalise across different tasks without sacrificing precision. Enter Simulation to Pick, Localise, and placE (SimPLE), an approach that leverages advanced digital technology to transform the capabilities of robotic pick-and-place systems.
SimPLE was developed by researchers from the Manipulation and Mechanisms Lab (MCube) at MIT under the direction of Alberto Rodriguez. It addresses a significant gap in the current landscape of industrial automation. Traditional pick-and-place machines are typically tailored to specific tasks, requiring extensive engineering and offering limited flexibility. This lack of adaptability is a significant drawback in industries where the variety of objects and tasks is vast, necessitating a more versatile and precise solution.
“SimPLE solves this problem and provides a solution to pick-and-place that is flexible and still provides the needed precision,” said Maria Bauza Villalonga, PhD ’22, a senior research scientist specialising in robotics.
Maria highlighted that in many industrial settings, manufacturers often resort to highly customised solutions that, while adequate for specific tasks, do not offer the adaptability required for broader applications. SimPLE's innovation lies in its ability to apply the same hardware and software across different tasks by using simulation to learn models that adapt to each one.
At the core of SimPLE’s success is its integration of advanced digital technologies, particularly in vision and tactile sensing. The system uses a dual-arm robot equipped with visuotactile sensors, which enable the robot to perceive its environment through both sight and touch. This dual-sensory input is crucial for achieving high positional accuracy in complex industrial tasks.
The SimPLE approach consists of three main components: task-aware grasping, visuotactile perception, and grasp planning. By leveraging these components, SimPLE can transform an unstructured arrangement of objects into a neatly organised configuration without prior encounters with the specific objects. The system achieves this by matching real-world and simulated observations through supervised learning. This allows SimPLE to estimate the most likely object poses and accomplish precise placements.
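The pose-estimation step can be pictured with a minimal sketch. The following Python snippet is purely illustrative and not the researchers' actual implementation: it assumes a hypothetical `simulate_observation` function standing in for a simulator that renders a feature vector for a candidate pose, and scores candidates by how closely their simulated observations match the real one.

```python
import numpy as np

def simulate_observation(pose):
    """Hypothetical stand-in for a simulator: maps an object pose
    (x, y, rotation) to a small observation feature vector."""
    x, y, theta = pose
    return np.array([np.cos(theta), np.sin(theta), x, y])

def estimate_pose(real_features, candidate_poses):
    """Score each candidate pose by the similarity between its simulated
    observation and the real observation; return the best-matching pose."""
    scores = []
    for pose in candidate_poses:
        sim_features = simulate_observation(pose)
        # Negative squared distance as a simple similarity score
        scores.append(-np.sum((sim_features - real_features) ** 2))
    best = int(np.argmax(scores))
    return candidate_poses[best], scores[best]

candidates = [(0.1, 0.2, 0.0), (0.1, 0.2, 1.57), (0.3, 0.1, 0.5)]
# Pretend this observation came from the robot's visuotactile sensors
observed = simulate_observation((0.1, 0.2, 1.57))
pose, score = estimate_pose(observed, candidates)
```

In the real system, the matching is learned via supervised learning over rich visual and tactile observations rather than a hand-coded distance, but the principle is the same: the pose whose simulated observation best explains the real one is taken as the estimate.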
In a series of experiments, SimPLE achieved successful placements over 90% of the time for six objects and over 80% of the time for eleven objects of varying shapes. This level of accuracy is a testament to the efficacy of combining tactile sensing with visual data, a synergy long theorised but not previously demonstrated effectively in complex robotic tasks.
“There’s an intuitive understanding in the robotics community that vision and touch are both useful, but until now, there haven’t been many systematic demonstrations of how it can be useful for complex robotics tasks,” says Antonia Delores Bronars, a doctoral student in mechanical engineering at MIT.
SimPLE’s development resulted from extensive collaboration, with contributions from multiple researchers and labs over several years. This collective effort underscores the importance of interdisciplinary teamwork in advancing the field of robotics.
As industries continue to embrace digital transformation, the advancements demonstrated by SimPLE offer a glimpse into the future of industrial automation. This breakthrough is poised to significantly impact a wide range of industries, from manufacturing to logistics, as the demand for flexible and accurate automation solutions continues to grow.