When: 21.4.2021 at 15:30
Abstract: Robots today are typically confined to interacting with rigid, opaque objects with known object models. However, the objects in our daily lives are often non-rigid, can be transparent or reflective, and are diverse in shape and appearance. One reason for the limitations of current methods is that computer vision and robot planning are often treated as separate fields. I argue that, to enhance the capabilities of robots, we should jointly design perception and planning algorithms based on the robotics task to be performed. I will show how we can develop novel perception algorithms to assist with the tasks of manipulating cloth, manipulating novel objects, and grasping transparent and reflective objects. By thinking about the downstream task and jointly developing vision and planning algorithms, we can significantly improve our progress on difficult robot tasks.
You can see the seminar here.