Visual and Event-based Cameras for Robot Navigation


Starting Date: earliest start 2017-02-01, latest end 2018-02-01

Organization: Vision for Robotics Lab

Involved Host(s): Ignacio Alzugaray

Abstract: This project explores the benefits of integrating the output of a conventional frame-based camera (capturing visible light) with that of an event-based camera, such as the Dynamic Vision Sensor (DVS), in the context of robot navigation and Simultaneous Localization And Mapping (SLAM).

Description: Modern computer vision algorithms focus on analysing sequences of images captured by classic CMOS cameras. While such sensors provide high resolution, the captured image sequences typically include information redundant to robot navigation, wasting precious resources (e.g. processing time, memory) and thereby limiting real-time processing. Inspired by the visual receptors found in nature, scientists have developed silicon-retina chips such as the DVS, which asynchronously transmit events only at pixel locations where variations in luminance are detected. Such sensors offer high temporal resolution (1 µs), but they have low spatial resolution and suffer from noise.

Over the past years, robot navigation and especially SLAM with frame-based cameras have been successfully demonstrated in the research community. Nonetheless, the state of the art still suffers from issues such as motion blur during fast motions and high computational load. So-called "event-based cameras" are among the most appealing visual sensors for overcoming these shortcomings, as they capture sparse yet valuable visual information even under fast motion. On the other hand, this new sensing modality poses a far more challenging scenario for the development of computer vision algorithms, since discrete and asynchronous events must be processed instead of intensity images.

The goal of this project is to develop an integrated framework for visual SLAM using a single frame-based camera and a DVS. Mounting both types of sensors in a stereo rig, we aim to first develop a calibration procedure between the two cameras and then study how one sensing modality can assist the other. For instance, the DVS subsystem can highlight high-saliency regions of interest for the frame-based camera, or provide an estimated camera pose from incoming events when the frame-based camera suffers from motion blur.
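To make the event-based sensing modality above concrete, here is a minimal Python sketch (not part of the project code) of the standard idea of accumulating an asynchronous DVS event stream into a 2D "event frame" that can be paired with a conventional intensity image over the same interval. The event tuple layout, the `events_to_frame` helper, and the sample event stream are all illustrative assumptions, not an actual DVS driver API.

```python
import numpy as np

def events_to_frame(events, width, height, t_start, t_end):
    """Accumulate DVS events from a time window into a signed event image.

    Each event is an assumed (timestamp_us, x, y, polarity) tuple, where
    polarity is +1 (brightness increase) or -1 (brightness decrease).
    """
    frame = np.zeros((height, width), dtype=np.int32)
    for t, x, y, p in events:
        if t_start <= t < t_end:  # keep only events inside the window
            frame[y, x] += p
    return frame

# Hypothetical event stream: three events at pixel (x=2, y=1), one late
# event at (0, 0) that falls outside the accumulation window.
events = [(10, 2, 1, +1), (20, 2, 1, +1), (30, 2, 1, -1), (90, 0, 0, +1)]
frame = events_to_frame(events, width=4, height=3, t_start=0, t_end=50)
print(frame[1, 2])  # net polarity at (2, 1): +1 +1 -1 → 1
```

Such accumulated event frames are one common way to bridge asynchronous event data and frame-based pipelines, e.g. for the cross-modal calibration and assistance studied in this project.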

Work Packages:
- WP1: Research into existing works on visual SLAM, DVS for robot navigation, and camera calibration for stereo rigs.
- WP2: Improvement of the current calibration module and integration into the existing pipeline.
- WP3: Starting from WP2's pipeline, integration of the event data stream from the DVS into an existing frame-based visual SLAM system.
- WP4: Experimentation and evaluation of the system's performance under different challenging conditions (e.g. lighting, motion).
- WP5: Final evaluation of the methods and report writing.

Requirements:
- Highly motivated
- C/C++ programming skills
- Background knowledge in mobile robotics, computer vision and/or 3D geometry desired
- Previous experience with Computer Vision, SLAM, Linux or ROS would be beneficial

Contact Details: Interested students please contact Ignacio Alzugaray (

Keywords: Camera Calibration, DVS, Sensor Fusion, Event-based Processing, Real-time SLAM, Computer Vision

Labels: Semester Project, Master Thesis, CLS Student Project (MPG ETH CLS)
Topics: Mathematical Sciences; Information, Computing and Communication Sciences; Engineering and Technology

© 2017, Copyright Max-Planck-Gesellschaft