Abstract:
When we want to obtain information about an object in our surroundings, we usually look at it first: gaze is the most natural way for most people to interact with their environment. Tracking a user's eye gaze is therefore a common way to understand where their attention is focused. Notwithstanding advances in eye tracking technology, and in particular in video-based oculography, the most commonly used technique, many factors still limit the technology's performance and hence its use in daily life. One major challenge is the need for a calibration procedure before eye tracking can begin, and for re-calibration whenever the device shifts even slightly. Another is coping with lighting conditions and reflections, which can drastically degrade estimation accuracy.
In this talk, we present a framework for calibration-free mobile eye tracking, intended to identify the user's focus of attention in a corneal imaging system. The framework uses a headset with three cameras: a scene camera and two eye cameras, an IR camera and an RGB camera. The IR camera continuously and reliably tracks the pupil, while the RGB camera acquires corneal images of the same eye. Deep learning algorithms are trained to detect the pupil in IR and RGB images and to compute a per-user 3D model of the eye in real time. Once the 3D model is built, the 3D gaze direction is computed as the ray that starts at the eyeball center and passes through the pupil center out into the world. The model can also be used to map the pupil position detected in the IR image to its corresponding position in the RGB image, and to detect the gaze direction in the corneal image and the front camera. This circumvents the problem of pupil detection in RGB images, which is especially difficult and unreliable when the scene is reflected in the corneal images.
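To make the gaze computation concrete, here is a minimal sketch of the ray construction described above: once a per-user 3D eye model provides the eyeball center and the pupil center in a common coordinate frame, the gaze direction is the normalized vector from the former through the latter. The function name and the numeric values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gaze_direction(eyeball_center, pupil_center):
    """Unit gaze ray from the 3D eyeball center through the 3D pupil center."""
    d = np.asarray(pupil_center, dtype=float) - np.asarray(eyeball_center, dtype=float)
    return d / np.linalg.norm(d)

# Hypothetical values in eye-camera coordinates (millimeters), for illustration only.
c = np.array([0.0, 0.0, 0.0])    # eyeball center from the fitted 3D eye model
p = np.array([0.0, 2.0, 10.5])   # pupil center detected in the IR image, lifted to 3D
g = gaze_direction(c, p)         # unit vector; intersect with the scene to find the fixated object
```

In practice the ray would then be expressed in the scene camera's coordinate frame and intersected with the observed scene to identify the attended object.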
In our approach, the auto-calibration process is transparent and unobtrusive: users do not have to be instructed to look at specific objects to calibrate the eye tracker; they need only act and gaze normally. The framework was evaluated in a user study in realistic settings, and the results are promising: it achieved very low 3D gaze error and very high accuracy in acquiring corneal images.
About the speaker:
Dr. Moayad Mokatren is a researcher and lecturer with expertise in computer vision, data science, and human-computer interaction (HCI). For over a decade, Dr. Mokatren has bridged the gap between groundbreaking academic research and technological development in the high-tech industry. His research focuses on developing advanced algorithms for mobile eye tracking, 3D gaze estimation, and calibration-free learning systems. In his role as a senior data scientist, he has gained unique experience in complex data science applications based on real-world hospital data, developing deep learning models for advanced monitoring and communication solutions in intensive care unit (ICU) environments. Dr. Mokatren has published his work in leading international venues and has been recognized for excellence in teaching and student mentorship.