Vision-Based Flying and Driving

We use vision to achieve robot localization and navigation without relying on external infrastructure. Our ground-robot experiments localize with 3D vision sensors (stereo cameras or lidars) and build on the visual teach-and-repeat algorithm originally developed by Prof. Tim Barfoot's group. We also work on vision-based flight of fixed-wing and quadrotor vehicles equipped with a camera. For example, one project explores visual navigation for the emergency return of UAVs after a GPS or communications failure. We leverage visual teach-and-repeat so the vehicle can autonomously teach itself a route with a monocular or stereo camera under GPS guidance, then re-follow that route using vision alone. To add flexibility on the return path and avoid binding the vehicle to the self-taught route, we also investigate other map sources, such as satellite data.
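To illustrate the teach-and-repeat idea at a high level, here is a minimal sketch, not the lab's actual implementation: in the teach phase the robot records keyframes (an image descriptor paired with a pose) along a driven route; in the repeat phase it matches its current observation to the most similar stored keyframe and uses that keyframe's pose as the local reference for path following. The function names, the flat descriptor vectors, and the cosine-similarity matching are all illustrative assumptions; real systems match local visual features and estimate relative pose.

```python
import numpy as np

def teach(poses, observations):
    # Teach phase (sketch): record keyframes of (observation, pose)
    # along the route as it is first driven or flown.
    return list(zip(observations, poses))

def repeat_step(route, current_obs):
    # Repeat phase (sketch): match the current observation to the most
    # similar stored keyframe by cosine similarity, then return that
    # keyframe's pose as the local reference for path following.
    sims = []
    for obs, _pose in route:
        denom = np.linalg.norm(obs) * np.linalg.norm(current_obs) + 1e-9
        sims.append(float(obs @ current_obs) / denom)
    best = int(np.argmax(sims))
    return route[best][1]

# Synthetic example: five keyframes with random "descriptors" and
# one-dimensional poses (positions along the route).
rng = np.random.default_rng(0)
descs = [rng.standard_normal(16) for _ in range(5)]
poses = [float(x) for x in range(5)]
route = teach(poses, descs)

# A slightly noisy re-observation of keyframe 3 should localize there.
query = descs[3] + 0.05 * rng.standard_normal(16)
print(repeat_step(route, query))
```

Matching against whole-image descriptors is the simplest stand-in; the practical appeal of teach-and-repeat is that this relative localization needs no global map or GPS during the repeat pass.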


Related Publications

University of Toronto Institute for Aerospace Studies