Robust facial alignment with internal denoising auto-encoder


  • dc.contributor.author Aspandi, Decky
  • dc.contributor.author Martínez, Oriol
  • dc.contributor.author Sukno, Federico Mateo
  • dc.contributor.author Binefa i Valls, Xavier
  • dc.date.accessioned 2021-04-01T10:45:37Z
  • dc.date.available 2021-04-01T10:45:37Z
  • dc.date.issued 2019
  • dc.description Paper presented at: 16th Conference on Computer and Robot Vision (CRV), held May 29-31, 2019 in Kingston, Canada.
  • dc.description.abstract The development of facial alignment models is growing rapidly thanks to the availability of large facial landmarked datasets and powerful deep learning models. However, important challenges remain for facial alignment models to work on images under extreme conditions, such as severe occlusions or large variations in pose and illumination. Current attempts to overcome this limitation have mainly focused on building robust feature extractors, with the assumption that the model will be able to discard the noise and select only the meaningful features. However, such an assumption ignores the importance of understanding the noise that characterizes unconstrained images, which has been shown to benefit computer vision models if used appropriately in the learning strategy. Thus, in this paper we investigate the introduction of specialized modules for noise detection and removal, in combination with our state-of-the-art facial alignment module, and show that this leads to improved robustness both to synthesized noise and in-the-wild conditions. The proposed model is built by combining two major subnetworks: an internal image denoiser (based on the auto-encoder architecture) and a facial landmark localiser (based on the Inception-ResNet architecture). Our results on the 300-W and Menpo datasets show that our model can effectively handle different types of synthetic noise, which also leads to enhanced robustness in real-world unconstrained settings, reaching top state-of-the-art accuracy.
  • dc.description.sponsorship This work is partly supported by the Spanish Ministry of Economy and Competitiveness under project grant TIN2017-90124-P, the Ramon y Cajal programme, the Maria de Maeztu Units of Excellence Programme (MDM-2015-0502) and the donation bahi2018-19 to the CMTech at the UPF.
  • dc.format.mimetype application/pdf
  • dc.identifier.citation Aspandi D, Martínez O, Sukno F, Binefa X. Robust facial alignment with internal denoising auto-encoder. In: 16th Conference on Computer and Robot Vision (CRV); 2019 May 29-31; Kingston, Canada. New Jersey: IEEE; 2019. p. 143-50. DOI: 10.1109/CRV.2019.00027
  • dc.identifier.doi http://dx.doi.org/10.1109/CRV.2019.00027
  • dc.identifier.uri http://hdl.handle.net/10230/47010
  • dc.language.iso eng
  • dc.publisher Institute of Electrical and Electronics Engineers (IEEE)
  • dc.relation.ispartof 16th Conference on Computer and Robot Vision (CRV); 2019 May 29-31; Kingston, Canada. New Jersey: IEEE; 2019. p. 143-50
  • dc.relation.projectID info:eu-repo/grantAgreement/ES/2PE/TIN2017-90124-P
  • dc.relation.projectID info:eu-repo/grantAgreement/ES/1PE/MDM-2015-0502
  • dc.rights © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. http://dx.doi.org/10.1109/CRV.2019.00027
  • dc.rights.accessRights info:eu-repo/semantics/openAccess
  • dc.subject.keyword Noise reduction
  • dc.subject.keyword Feature extraction
  • dc.subject.keyword Robustness
  • dc.subject.keyword Training
  • dc.subject.keyword Computational modeling
  • dc.subject.keyword Data models
  • dc.subject.keyword Analytical models
  • dc.title Robust facial alignment with internal denoising auto-encoder
  • dc.type info:eu-repo/semantics/conferenceObject
  • dc.type.version info:eu-repo/semantics/acceptedVersion
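The abstract describes a two-stage pipeline: an internal image denoiser (auto-encoder) whose output feeds a facial landmark localiser. As a rough illustration of that composition only, the sketch below wires a toy auto-encoder into a toy landmark regressor with numpy; all class names, dimensions, and the 68-landmark convention are illustrative assumptions, not the paper's actual networks (which use trained convolutional auto-encoder and Inception-ResNet models).

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class TinyDenoiser:
    """Toy auto-encoder: compresses a flattened image vector through a
    bottleneck and reconstructs it. Weights are random here; in the
    paper's setting they would be trained to map noisy faces to clean ones."""
    def __init__(self, dim=64, bottleneck=16):
        self.enc = rng.standard_normal((dim, bottleneck)) * 0.1
        self.dec = rng.standard_normal((bottleneck, dim)) * 0.1

    def __call__(self, x):
        return relu(x @ self.enc) @ self.dec  # denoised reconstruction

class TinyLandmarkLocaliser:
    """Toy stand-in for the landmark localiser: maps a (denoised) image
    vector to 68 (x, y) landmark coordinates."""
    def __init__(self, dim=64, n_landmarks=68):
        self.w = rng.standard_normal((dim, n_landmarks * 2)) * 0.1

    def __call__(self, x):
        return (x @ self.w).reshape(-1, 68, 2)

# Two-stage composition as in the abstract: denoise first, then localise.
denoiser = TinyDenoiser()
localiser = TinyLandmarkLocaliser()

noisy_image = rng.standard_normal((1, 64))  # stand-in for a noisy face crop
landmarks = localiser(denoiser(noisy_image))
print(landmarks.shape)  # (1, 68, 2)
```

The point of the composition is that the localiser never sees the raw noisy input, only the denoiser's reconstruction, which is what the paper argues improves robustness to occlusion and illumination noise.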