SEMINAR

Building Blocks of Generalizable Autonomy

Speaker

Animesh Garg

Affiliation
University of Toronto
Timeline
Fri, Mar 19 2021 - 10:00 am (GMT + 7)
About Speaker

Animesh Garg is a CIFAR AI Chair Assistant Professor of Computer Science at the University of Toronto and a Faculty Member at the Vector Institute, where he leads the Toronto People, AI, and Robotics (PAIR) research group. Animesh is affiliated with Mechanical and Industrial Engineering (courtesy) and the UofT Robotics Institute. Animesh also spends time as a Senior Researcher at Nvidia Research in ML for Robotics. Prior to this, Animesh earned a Ph.D. from UC Berkeley and was a postdoc at the Stanford AI Lab. His research focuses on machine learning algorithms for perception and control in robotics. His work aims to build Generalizable Autonomy in robotics, which involves a confluence of representations and algorithms for reinforcement learning, control, and perception. His work has received multiple Best Paper Awards (ICRA, IROS, Hamlyn Symposium, NeurIPS Workshop, ICML Workshop) and has been covered in the press (New York Times, Nature, BBC).

Abstract

My approach to Generalizable Autonomy posits that interactive learning across families of tasks is essential for discovering efficient representation and inference mechanisms. Arguably, a cognitive concept or a dexterous skill should be reusable across task instances to avoid constant relearning. It is insufficient to learn to “open a door” and then have to re-learn it for a new door, let alone for windows and cupboards. Thus, I focus on three key questions: (1) representational biases for embodied reasoning, (2) causal inference in abstract sequential domains, and (3) interactive policy learning under uncertainty.

In this talk, I will first, through examples, lay bare the need for structured biases in modern RL algorithms in the context of robotics, spanning states, actions, learning mechanisms, and network architectures. Second, we will discuss the discovery of latent causal structure in dynamics for planning. Finally, I will demonstrate how large-scale data generation combined with insights from structure learning can enable sample-efficient algorithms for practical systems. The talk will focus mainly on manipulation, but this work has also been applied to surgical robotics and legged locomotion.
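As a concrete, generic illustration of an action-level structural bias (this sketch is not from the talk; the toy ReachEnv task, the SKILLS vocabulary, and all parameter choices are illustrative assumptions), the snippet below lets an agent learn over a small set of reusable macro-actions ("skills") instead of raw one-step moves, using SMDP-style tabular Q-learning:

```python
# Generic illustration of an action-space bias in RL: the agent picks among
# reusable "skills" (macro-actions) rather than raw one-step moves.
# The toy ReachEnv task and SKILLS vocabulary are illustrative assumptions.
import numpy as np

class ReachEnv:
    """Toy 1-D reach task: move a gripper position toward a fixed goal cell."""
    def __init__(self, goal=7, size=10):
        self.goal, self.size = goal, size
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, delta):
        self.pos = int(np.clip(self.pos + delta, 0, self.size - 1))
        done = self.pos == self.goal
        return self.pos, (1.0 if done else -0.01), done

# Reusable skills: each is a short, fixed sequence of raw moves.
SKILLS = {0: [+1], 1: [+1, +1, +1], 2: [-1]}

def train(episodes=500, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    env = ReachEnv()
    q = np.zeros((env.size, len(SKILLS)))          # Q-values over (state, skill)
    for _ in range(episodes):
        s, done, steps = env.reset(), False, 0
        while not done and steps < 200:
            a = int(rng.integers(len(SKILLS))) if rng.random() < eps else int(q[s].argmax())
            ret, discount, s2 = 0.0, 1.0, s
            for delta in SKILLS[a]:                # execute the whole skill
                s2, r, done = env.step(delta)
                ret += discount * r
                discount *= gamma
                steps += 1
                if done:
                    break
            target = ret + (0.0 if done else discount * q[s2].max())
            q[s, a] += alpha * (target - q[s, a])  # SMDP-style Q-learning update
            s = s2
    return q

if __name__ == "__main__":
    q = train()
    print("Greedy skill per state:", q.argmax(axis=1))
```

Swapping the raw per-step action set for a small skill vocabulary is one simple instance of the kind of action-level bias the abstract alludes to; analogous biases can be placed on state representations, learning mechanisms, or network architecture.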

Related seminars

Representation Learning with Graph Autoencoders and Applications to Music Recommendation
Fri, Jul 26 2024 - 10:00 am (GMT + 7)

Trieu Trinh
Google DeepMind
AlphaGeometry: Solving IMO Geometry without Human Demonstrations
Fri, Jul 5 2024 - 10:00 am (GMT + 7)

Tat-Jun (TJ) Chin
Adelaide University
Quantum Computing in Computer Vision: A Case Study in Robust Geometric Optimisation
Fri, Jun 7 2024 - 11:00 am (GMT + 7)

Fernando De la Torre
Carnegie Mellon University
Human Sensing for AR/VR
Wed, Apr 24 2024 - 07:00 am (GMT + 7)