High-Speed Vision-Based Pose Estimation for Micro Aerial Vehicles using DVS Sensor
by Admin
For a Micro Aerial Vehicle (MAV) to operate in indoor environments, it must be able to rely on onboard vision. This is made possible by so-called “structure from motion” techniques, which track features and build a local 3D map of the environment, as shown in the figure below. The drawback is that such algorithms require considerable computational power for processing the images, extracting salient information, and computing the 3D pose of the MAV. This cannot be done on a very lightweight MAV (see picture below) with very limited computational resources.
This project aims, for the first time, to use a non-standard camera called the Dynamic Vision Sensor (DVS: http://siliconretina.ini.uzh.ch/wiki/index.php). The DVS works like your eye: instead of wastefully sending entire images at a fixed frame rate, it transmits only the local pixel-level changes caused by movement in the scene, at the time they occur. The result is a stream of “address-events” at microsecond time resolution, equivalent to or better than conventional high-speed vision sensors running at thousands of frames per second.
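To make the address-event idea concrete, the sketch below models a DVS output as a sparse stream of per-pixel events with microsecond timestamps. The field names and the helper function are illustrative assumptions, not the actual DVS driver API.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """A single address-event: pixel location, microsecond timestamp,
    and polarity (+1 brightness increase, -1 decrease)."""
    x: int
    y: int
    t_us: int      # microsecond-resolution timestamp
    polarity: int  # +1 or -1

def events_in_window(events, t_start_us, t_end_us):
    """Return the events that occurred inside a time window.
    With microsecond timestamps, windows far shorter than a
    conventional camera's frame period can be examined."""
    return [e for e in events if t_start_us <= e.t_us < t_end_us]

# A moving edge produces a sparse stream of events rather than full frames.
stream = [Event(10, 20, 5, +1), Event(11, 20, 130, +1), Event(12, 20, 260, -1)]
print(len(events_in_window(stream, 0, 200)))  # events in the first 200 µs
```

Because only changing pixels emit events, the data rate scales with scene motion rather than with resolution times frame rate.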
Your goal will be to develop feature extraction and pose estimation with the DVS sensor using active blinking LED markers on the ground. The DVS sensor and software for event extraction and marker tracking will be provided.
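An LED blinking at a known frequency produces events at a predictable rate at its pixel location, so markers can be separated from background motion by checking per-pixel event rates. The sketch below is a minimal version of this frequency test under assumed names and data layout; the provided marker-tracking software will of course differ.

```python
from collections import defaultdict

def detect_blinking_pixels(events, window_us, blink_hz, tol_hz=50.0):
    """Flag pixels whose ON-event rate matches a known LED blink
    frequency inside a time window. `events` is an iterable of
    (x, y, t_us, polarity) tuples; all names are illustrative."""
    counts = defaultdict(int)
    for x, y, t_us, pol in events:
        if pol > 0:  # count only ON events: one per blink cycle
            counts[(x, y)] += 1
    window_s = window_us / 1e6
    markers = []
    for pixel, n in counts.items():
        rate_hz = n / window_s
        if abs(rate_hz - blink_hz) <= tol_hz:
            markers.append(pixel)
    return markers

# A 1 kHz LED at pixel (5, 5) over a 10 ms window yields ~10 ON events;
# a slowly moving edge at (9, 9) yields far fewer in the same window.
led = [(5, 5, i * 1000, +1) for i in range(10)]
edge = [(9, 9, 3000, +1), (9, 9, 7000, +1)]
print(detect_blinking_pixels(led + edge, window_us=10_000, blink_hz=1000.0))
```

Once marker pixels are identified and matched to their known ground positions, the camera pose could be recovered with a standard perspective-n-point solver; that step is outside this sketch.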
This project is a close collaboration between the Institute of Neuroinformatics (Univ. of Zurich and ETH Zurich, http://sensors.ini.uzh.ch/home.html) and the Robotics and Perception Lab (Univ. of Zurich, http://sites.google.com/site/scarabotix/).