Sep 28, 2023
Research at the Department of Computer Science on learning to rotate objects in the hand using vision and touch
Prof. Roberto Calandra is co-author of the paper "General In-Hand Object Rotation with Vision and Touch", published at the Conference on Robot Learning (CoRL) 2023. Together with Haozhi Qi (UC Berkeley, Meta AI), Brent Yi (UC Berkeley), Sudharshan Suresh (Meta AI, CMU), Mike Lambeta (Meta AI), Yi Ma (UC Berkeley), and Jitendra Malik (UC Berkeley, Meta AI), he presents the "RotateIt" system, which enables robotic hands to rotate objects about multiple axes with their fingertips using multimodal sensory input.
With their research, the authors aim to improve the manipulation capabilities of robots, using Artificial Intelligence (AI) to bring robotic hands closer to human-level dexterity.
Roberto Calandra: "The use of touch is essential for fine interactions. In our work, we show that it is possible to learn visuo-tactile policies for general in-hand rotation in simulation and then transfer them (sim2real) to physical robots, where they can successfully rotate objects that have never been seen before."
The researchers' experiments show that combining proprioceptive, visual, and tactile information yields the best performance. Roberto Calandra and his team at TU Dresden are actively pursuing this line of research: the first robots there have recently been equipped with robotic hands and tactile sensors.
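To illustrate the idea of combining the three sensing modalities, the following is a minimal sketch (not the authors' implementation): each modality is encoded separately and the features are fused before predicting an action. All module names, feature dimensions, and the simple concatenation scheme are illustrative assumptions.

```python
# Schematic sketch of a multimodal (proprioception + vision + touch) policy.
# Dimensions and architecture are placeholders, not taken from the RotateIt paper.
import torch
import torch.nn as nn

class MultimodalPolicy(nn.Module):
    def __init__(self, proprio_dim=32, visual_dim=128, tactile_dim=64, action_dim=16):
        super().__init__()
        # One small encoder per modality (sizes chosen arbitrarily).
        self.proprio_enc = nn.Sequential(nn.Linear(proprio_dim, 64), nn.ReLU())
        self.visual_enc = nn.Sequential(nn.Linear(visual_dim, 64), nn.ReLU())
        self.tactile_enc = nn.Sequential(nn.Linear(tactile_dim, 64), nn.ReLU())
        # Fused features are mapped to an action (e.g., target joint positions).
        self.head = nn.Sequential(
            nn.Linear(3 * 64, 128), nn.ReLU(), nn.Linear(128, action_dim)
        )

    def forward(self, proprio, visual, tactile):
        # Concatenate per-modality features and predict one action vector.
        fused = torch.cat(
            [self.proprio_enc(proprio), self.visual_enc(visual), self.tactile_enc(tactile)],
            dim=-1,
        )
        return self.head(fused)

# Example usage with random tensors standing in for real sensor features.
policy = MultimodalPolicy()
action = policy(torch.randn(1, 32), torch.randn(1, 128), torch.randn(1, 64))
print(action.shape)  # torch.Size([1, 16])
```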