Fusion of valence and arousal annotations through dynamic subjective ordinal modelling


  • dc.contributor.author Ruiz, Adrià
  • dc.contributor.author Martinez, Oriol
  • dc.contributor.author Binefa i Valls, Xavier
  • dc.contributor.author Sukno, Federico Mateo
  • dc.date.accessioned 2017-09-01T16:58:14Z
  • dc.date.available 2017-09-01T16:58:14Z
  • dc.date.issued 2017
  • dc.description Paper presented at: FG 2017 12th IEEE International Conference on Automatic Face and Gesture Recognition, held from May 30 to June 3, 2017 in Washington, United States of America.
  • dc.description.abstract An essential issue when training and validating computer vision systems for affect analysis is how to obtain reliable ground-truth labels from a pool of subjective annotations. In this paper, we address this problem when labels are given in an ordinal scale and annotated items are structured as temporal sequences. This problem is of special importance in affective computing, where collected data is typically formed by videos of human interactions annotated according to the Valence and Arousal (V-A) dimensions. Moreover, recent works have shown that inter-observer agreement of V-A annotations can be considerably improved if these are given in a discrete ordinal scale. In this context, we propose a novel framework which explicitly introduces ordinal constraints to model the subjective perception of annotators. We also incorporate dynamic information to take into account temporal correlations between ground-truth labels. In our experiments over synthetic and real data with V-A annotations, we show that the proposed method outperforms alternative approaches which do not take into account either the ordinal structure of labels or their temporal correlation. [A toy sketch of this annotation-fusion setting follows the record below.]
  • dc.description.sponsorship This work is partly supported by the Spanish Ministry of Economy and Competitiveness under the Ramon y Cajal fellowships and the Maria de Maeztu Units of Excellence Programme (MDM-2015-0502), and by the Kristina project, funded by the European Union Horizon 2020 research and innovation programme under grant agreement No 645012. Adria Ruiz would also like to acknowledge the support of the Spanish Government under grant FPU13/01740.
  • dc.format.mimetype application/pdf
  • dc.identifier.citation Ruiz A, Martinez O, Binefa X, Sukno FM. Fusion of valence and arousal annotations through dynamic subjective ordinal modelling. In: FG 2017 12th IEEE International Conference on Automatic Face and Gesture Recognition; 2017 May 30–June 3; Washington, DC, USA. [place unknown]: IEEE, 2017. p. 331-8. DOI: 10.1109/FG.2017.48
  • dc.identifier.doi http://dx.doi.org/10.1109/FG.2017.48
  • dc.identifier.uri http://hdl.handle.net/10230/32727
  • dc.language.iso eng
  • dc.publisher Institute of Electrical and Electronics Engineers (IEEE)
  • dc.relation.ispartof FG 2017 12th IEEE International Conference on Automatic Face and Gesture Recognition; 2017 May 30–June 3; Washington, DC, USA. [place unknown]: IEEE, 2017. p. 331-8.
  • dc.relation.projectID info:eu-repo/grantAgreement/EC/H2020/645012
  • dc.rights © 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The final published article can be found at http://ieeexplore.ieee.org/document/7961760
  • dc.rights.accessRights info:eu-repo/semantics/openAccess
  • dc.subject.keyword Observers
  • dc.subject.keyword Labeling
  • dc.subject.keyword Context
  • dc.subject.keyword Training
  • dc.subject.keyword Affective computing
  • dc.subject.keyword Videos
  • dc.subject.keyword Computer vision
  • dc.title Fusion of valence and arousal annotations through dynamic subjective ordinal modelling
  • dc.type info:eu-repo/semantics/conferenceObject
  • dc.type.version info:eu-repo/semantics/acceptedVersion
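
For orientation only, the following is a minimal Python sketch of the annotation-fusion setting the abstract describes: several annotators label each frame of a sequence on a discrete ordinal scale, and the labels must be fused into a single ground-truth track. This is not the paper's model. The paper proposes a probabilistic framework with explicit ordinal constraints and temporal dynamics; the median fusion and sliding median filter below are crude editor-chosen stand-ins, and all names and parameters are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    T = 200   # frames in the video sequence (assumed)
    R = 5     # number of annotators (assumed)
    K = 5     # ordinal scale, e.g. valence levels 0..4 (assumed)

    # Simulate a slowly varying ground-truth ordinal sequence.
    truth = np.clip(
        np.round(2 + 1.5 * np.sin(np.linspace(0, 4 * np.pi, T))), 0, K - 1
    ).astype(int)

    # Each annotator observes the truth through subjective ordinal noise:
    # an annotator-specific bias plus frame-level jitter, clipped to the scale.
    bias = rng.integers(-1, 2, size=R)           # per-annotator shift
    jitter = rng.integers(-1, 2, size=(R, T))    # frame-level noise
    annotations = np.clip(truth[None, :] + bias[:, None] + jitter, 0, K - 1)

    # Per-frame ordinal fusion: the median is a natural choice on an ordinal
    # scale because, unlike the mean, it never produces labels that lie
    # between the discrete levels of the scale.
    fused = np.median(annotations, axis=0).round().astype(int)

    # Crude temporal smoothing: a sliding median over a small window, standing
    # in for the temporal correlations the paper models explicitly.
    def median_filter(x, w=5):
        pad = w // 2
        xp = np.pad(x, pad, mode="edge")
        return np.array([int(np.median(xp[i:i + w])) for i in range(len(x))])

    fused_dyn = median_filter(fused)

    for name, est in [("per-frame median", fused),
                      ("median + temporal filter", fused_dyn)]:
        print(f"{name}: mean absolute error vs truth = "
              f"{np.abs(est - truth).mean():.3f}")

On this toy data the temporally filtered estimate typically tracks the ground truth more closely than per-frame fusion alone, which mirrors the abstract's point that exploiting both the ordinal structure of the labels and their temporal correlation improves the fused ground truth.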