When: 21.7.2021 at 15:30
Where: Zoom
Abstract: My approach to Generalizable Autonomy posits that interactive learning across families of tasks is essential for discovering efficient representations and inference mechanisms. Arguably, a cognitive concept or a dexterous skill should be reusable across task instances to avoid constant relearning: it is insufficient to learn to “open a door” and then have to relearn the skill for every new door, let alone for windows and cupboards. Thus, I focus on three key questions: (1) representational biases for embodied reasoning, (2) causal inference in abstract sequential domains, and (3) interactive policy learning under uncertainty. First, I will demonstrate the need for structured biases in modern RL algorithms in the context of robotics, spanning states, actions, learning mechanisms, and network architectures. Second, I will discuss the discovery of latent causal structure in dynamics for planning. Finally, I will demonstrate how large-scale data generation combined with insights from structure learning can enable sample-efficient algorithms for practical systems. While the talk focuses mainly on manipulation, my work has also been applied to surgical robotics and legged locomotion.