Learning Control Theory and Foundations

Learning algorithms hold great promise for improving a robot's performance whenever a priori models are not sufficiently accurate. We have developed learning controllers of varying complexity, ranging from controllers that improve the execution of a specific task by iteratively updating the reference input, to task-independent schemes that update the underlying robot model whenever new data becomes available. Despite their differences, all of our learning controllers share the following characteristics:

  • they combine a priori model information with experimental data,
  • they make no major a priori assumptions about the unknown effects to be learned, and
  • they have been tested extensively on state-of-the-art robotic platforms.
Our algorithms combine fundamental concepts from control theory (e.g., optimal filtering and model predictive control) and machine learning (e.g., Gaussian processes) with computational tools from optimization (e.g., convex problem solvers). The results are fast-converging, computationally efficient, and practical learning algorithms, along with first-of-their-kind robot demonstrations. We demonstrated both (i) full 3D motion learning on quadrotor vehicles and (ii) outdoor learning experiments that increased the tracking accuracy and speed of a ground robot navigating using vision only. See also our robot racing page.
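As a sketch of the model-learning ingredient, the snippet below fits a Gaussian-process posterior mean to the residual between a robot's measured behavior and its a priori model (here, a synthetic sine-shaped unmodeled effect). The kernel, lengthscale, data, and noise level are all illustrative assumptions made for this example.

```python
import numpy as np

def rbf(X1, X2, ell=0.5, sf=1.0):
    """Squared-exponential (RBF) kernel matrix for 1-D inputs."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

# Training data: residual = unknown effect (sin) plus measurement noise.
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, 40)
y = np.sin(X) + 0.05 * rng.standard_normal(40)

# GP posterior mean at test inputs: mu = K(Xs, X) (K(X, X) + s^2 I)^-1 y
K = rbf(X, X) + 0.05**2 * np.eye(len(X))
Xs = np.linspace(-1.5, 1.5, 5)
mu = rbf(Xs, X) @ np.linalg.solve(K, y)

print(np.max(np.abs(mu - np.sin(Xs))) < 0.15)
```

In a learning controller, such a posterior mean (and its uncertainty) would augment the a priori dynamics model; the point of the sketch is only that the GP recovers the unmodeled effect from data without a parametric assumption about its form.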


Related Publications

Model Learning with Gaussian Processes

Safe and Robust Learning Control

Learning of Feed-Forward Corrections

Gradient-Based Learning

University of Toronto Institute for Aerospace Studies