Modern computer vision algorithms focus on analysing sequences of images captured by classic CMOS cameras. While these sensors provide high spatial resolution, their main drawback is that the captured image sequences typically contain information redundant for robot navigation, wasting precious resources (e.g. processing time, memory) and therefore limiting real-time processing. Inspired by the visual receptors found in nature, scientists have developed silicon-retina chips, such as the Dynamic Vision Sensor (DVS), which asynchronously transmit events only at pixel locations where a change in luminance is detected. Such sensors offer high temporal resolution (~1 µs), but they have low spatial resolution and suffer from noise.
In recent years, robot navigation and especially Simultaneous Localization and Mapping (SLAM) have been successfully demonstrated with frame-based cameras in the research community. Nonetheless, state-of-the-art approaches still suffer from issues such as motion blur during fast motions and high computational load. So-called "event-based cameras" are among the most appealing visual sensors for overcoming these shortcomings, as they capture sparse yet valuable visual information even under fast motion. On the other hand, this new sensing modality poses a far more challenging scenario for the development of computer vision algorithms, which must process discrete and asynchronous events instead of intensity images.
The goal of this project is to develop an integrated framework for visual SLAM using a single frame-based camera and a DVS. Employing both types of sensor in a stereo rig, we aim to first develop a calibration procedure between the two cameras and then study how one sensing modality can assist the other. For instance, the DVS subsystem could highlight high-saliency regions in the frame-based camera's images, or provide a camera-pose estimate from the incoming events when the frame-based camera suffers from motion blur.
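To make the data format concrete: unlike an intensity image, a DVS outputs a stream of timestamped per-pixel events. The sketch below is a minimal, hypothetical C++ representation (the struct name, field layout, and helper are illustrative assumptions, not the project's actual pipeline); the helper shows one simple way events could be gathered over a time window, e.g. to accumulate an event image aligned with a frame camera's exposure interval.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical minimal DVS event: a pixel fires only when its local
// log-luminance change exceeds a contrast threshold.
struct Event {
    uint16_t x, y;     // pixel coordinates (DVS spatial resolution is low)
    int64_t  t_us;     // timestamp in microseconds (~1 us temporal resolution)
    bool     polarity; // true = brightness increase, false = decrease
};

// Illustrative helper: count the events that fall inside [t_start_us, t_end_us),
// e.g. the exposure window of a (possibly blurred) frame-camera image.
std::size_t countEventsInWindow(const std::vector<Event>& events,
                                int64_t t_start_us, int64_t t_end_us) {
    std::size_t n = 0;
    for (const auto& e : events) {
        if (e.t_us >= t_start_us && e.t_us < t_end_us) {
            ++n;
        }
    }
    return n;
}
```

In a real system the events would arrive asynchronously from the sensor driver rather than from a pre-filled vector, but the windowing idea is the same.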
- WP1: Research into existing works on visual SLAM, DVS for robot navigation and camera calibration for stereo rigs.
- WP2: Improvement of the current calibration module and integration into the existing pipeline.
- WP3: Starting from WP2's pipeline, integration of the DVS event data stream into an existing frame-based visual SLAM system.
- WP4: Experimentation and evaluation of the system's performance under different challenging conditions (e.g. lighting, motion).
- WP5: Final evaluation of the methods and report writing.
- Highly motivated
- C/C++ programming skills
- Background knowledge in mobile robotics, computer vision and/or 3D geometry desired
- Previous experience with Computer Vision, SLAM, Linux or ROS would be beneficial
Interested students, please contact Ignacio Alzugaray (firstname.lastname@example.org)
CLS Student Project (MPG ETH CLS)
Information, Computing and Communication Sciences
Engineering and Technology