Deep Learning for Eye Tracking


Organization: Advanced Interactive Technologies

Involved Host(s): Park Seonwook, Admin AIT

Abstract: In this project, we aim to explore appearance-based gaze estimation using convolutional neural networks, with the goal of improving on state-of-the-art performance.

Description: Eye tracking has its roots in psychology, where a subject's point of gaze is tracked to determine interest and attention. Recent work has shifted eye tracking from more traditional, active methods to passive video-based methods. Such remote video methods allow eye tracking to be applied in many form factors and scenarios due to the prevalence of front-facing cameras on everyday computing devices. The expansion beyond the laboratory environment leads to issues such as over- and under-exposure, low image sensor quality, motion blur, and complexity due to head movement. Such factors cannot easily be modeled explicitly, so machine learning methods have been used increasingly for appearance-based gaze estimation. In contrast to more conventional eye tracking methods, appearance-based gaze estimation methods attempt to regress gaze direction directly from image data.

Sugano et al. (2014) synthesized large sets of eye images and estimated gaze using random forests. The work involved acquiring images of human subjects from multiple perspectives, then synthetically rotating the heads to produce a large variation in head pose and gaze angles. Zhang et al. (2015) applied their model to particularly challenging data from real-world situations, acquired from the laptops of 15 subjects under a large variety of conditions. The data exhibit variations in lighting conditions, head angle, and image quality. Zhang et al. employ a convolutional neural network (based on LeNet) to address the challenges arising from the new data. The latest works expand on the approach suggested by Zhang et al. (2015), often acquiring new data and proposing models and architectures which improve gaze estimation accuracy.

References:
1) Sugano, Yusuke, et al. "Learning-by-synthesis for appearance-based 3D gaze estimation." In CVPR, 2014.
2) Zhang, Xucong, et al. "Appearance-based gaze estimation in the wild." In CVPR, 2015.
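In the works above, gaze estimation accuracy is typically reported as the angular error between predicted and ground-truth gaze directions. A minimal sketch of this metric, assuming a common pitch/yaw angle parameterization (the exact axis convention varies between datasets and is an assumption here):

```python
import math

def pitchyaw_to_vector(pitch, yaw):
    """Convert (pitch, yaw) in radians to a 3D unit gaze vector.
    NOTE: this axis convention is an assumption; datasets differ."""
    return (
        -math.cos(pitch) * math.sin(yaw),  # x: horizontal component
        -math.sin(pitch),                  # y: vertical component
        -math.cos(pitch) * math.cos(yaw),  # z: toward the camera
    )

def angular_error_deg(pred, true):
    """Angular error in degrees between two (pitch, yaw) gaze directions."""
    a = pitchyaw_to_vector(*pred)
    b = pitchyaw_to_vector(*true)
    # Both vectors are unit-length, so the dot product is the cosine of
    # the angle between them; clamp to guard against rounding errors.
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return math.degrees(math.acos(dot))
```

A CNN trained to regress (pitch, yaw) directly from eye images would be evaluated by averaging this error over a held-out test set.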

Goal: In this project, we aim to explore appearance-based gaze estimation using convolutional neural networks, with the goal of improving on state-of-the-art performance. We will review the literature on appearance-based eye tracking, in particular methods using convolutional neural networks, and evaluate recently released datasets. We will then analyze the models proposed so far and attempt to make valuable contributions to gaze estimation accuracy by modifying an existing model or proposing new ones. Good knowledge of machine learning is therefore necessary.

Work packages:
- Literature review of state-of-the-art appearance-based gaze estimation methods and datasets.
- Implementation of gaze estimation using a convolutional neural network.
- Broad evaluation of the system on established datasets.

**Required skills**
- Solid programming skills (C++, Python, or similar).
- Experience with training deep convolutional neural networks is a must.
- High motivation, independence, and eagerness to challenge the state of the art.

Contact Details: Internal Supervisors: Seonwook Park, Otmar Hilliges otmar.hilliges@

Keywords: Eye tracking, Appearance-based gaze estimation, Deep convolutional neural networks, Machine learning

Labels: Master Thesis, CLS Student Project (MPG ETH CLS)
Topics: Information, Computing and Communication Sciences

© 2017, Copyright Max-Planck-Gesellschaft