Head pose estimation based on 3-D facial landmarks localization and regression
Citation
- Derkach D, Ruiz A, Sukno FM. Head pose estimation based on 3-D facial landmarks localization and regression. In: 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017); 2017 May 30-June 3; Washington, DC, USA. Piscataway (NJ): IEEE; 2017. p. 820-7. DOI: 10.1109/FG.2017.104
Description
Abstract
In this paper we present a system that is able to estimate head pose using only depth information from consumer RGB-D cameras such as Kinect 2. In contrast to most approaches addressing this problem, we do not rely on tracking and produce pose estimation in terms of pitch, yaw and roll angles using single depth frames as input. Our system combines three different methods for pose estimation: two of them are based on state-of-the-art landmark detection and the third one is a dictionary-based approach that is able to work in especially challenging scans where landmarks or mesh correspondences are too difficult to obtain. We evaluated our system on the SASE database, which consists of ~30K frames from 50 subjects. We obtained average pose estimation errors between 5 and 8 degrees per angle, achieving the best performance in the FG2017 Head Pose Estimation Challenge. Full code of the developed system is available online.
Description
Paper presented at the 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), held from May 30 to June 3, 2017, in Washington DC, USA.
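The abstract mentions that two of the three methods derive pitch, yaw and roll from detected 3-D facial landmarks. As a minimal sketch of how that step can work in general (not the authors' actual pipeline; the function name, landmark layout, and Euler-angle convention here are illustrative assumptions), one can rigidly align the detected landmarks to a frontal reference set with the Kabsch algorithm and read the three angles off the resulting rotation matrix:

```python
import numpy as np

def pose_from_landmarks(ref, obs):
    """Illustrative sketch: estimate the rotation aligning reference 3-D
    landmarks `ref` to observed landmarks `obs` (both (N, 3) arrays of
    corresponding points) via the Kabsch algorithm, then return
    (pitch, yaw, roll) in degrees. Angle convention is an assumption:
    pitch about x, yaw about y, roll about z."""
    # Center both landmark sets so only rotation remains
    P = ref - ref.mean(axis=0)
    Q = obs - obs.mean(axis=0)
    # Optimal rotation from the SVD of the cross-covariance matrix
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    # Extract Euler angles from the rotation matrix
    pitch = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    yaw = np.degrees(np.arcsin(-R[2, 0]))
    roll = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return pitch, yaw, roll
```

For example, landmarks synthetically rotated by 20 degrees about the vertical axis are recovered as yaw ≈ 20 with pitch and roll ≈ 0. Real systems like the one in the paper must additionally cope with landmark detection noise and missing correspondences, which is precisely why the authors add a dictionary-based fallback.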