A multimodal annotation schema for non-verbal affective analysis in the health-care domain
- dc.contributor.author Sukno, Federico Mateo
- dc.contributor.author Domínguez Bajo, Mónica
- dc.contributor.author Ruiz, Adrià
- dc.contributor.author Schiller, Dominik
- dc.contributor.author Lingenfelser, Florian
- dc.contributor.author Pragst, Louisa
- dc.contributor.author Kamateri, Eleni
- dc.contributor.author Vrochidis, Stefanos
- dc.date.accessioned 2016-07-26T17:33:43Z
- dc.date.available 2016-07-26T17:33:43Z
- dc.date.issued 2016
- dc.description.abstract The development of conversational agents with human interaction capabilities requires advanced affective state recognition integrating non-verbal cues from the different modalities constituting what in human communication we perceive as an overall affective state. Each of the modalities is often handled by a different subsystem that conveys only a partial interpretation of the whole and, as such, is evaluated only in terms of its partial view. To tackle this shortcoming, we investigate the generation of a unified multimodal annotation schema of non-verbal cues from the perspective of an interdisciplinary group of experts. We aim at obtaining a common ground-truth with a unique representation using the Valence and Arousal space and a discrete non-linear scale of values. The proposed annotation schema is demonstrated on a corpus in the health-care domain but is scalable to other purposes. Preliminary results on inter-rater variability show a positive correlation of consensus level with high (absolute) values of Valence and Arousal as well as with the number of annotators labeling a given video sequence.
- dc.description.sponsorship This work is partly supported by the Spanish Ministry of Economy and Competitiveness under the Maria de Maeztu Units of Excellence Programme (MDM-2015-0502) and is part of the Kristina project funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No 645012.
- dc.format.mimetype application/pdf
- dc.identifier.citation Sukno FM, Domínguez M, Ruiz A, Schiller D, Lingenfelser F, Pragst L, Kamateri E, Vrochidis S. A multimodal annotation schema for non-verbal affective analysis in the health-care domain. In: Proceedings of the 1st International Workshop on Multimedia Analysis and Retrieval for Multimodal Interaction (MARMI 2016); 2016 Jun 6; New York, USA. New York: ACM, 2016. p. 9-14. DOI: 10.1145/2927006.2927008
- dc.identifier.doi http://dx.doi.org/10.1145/2927006.2927008
- dc.identifier.uri http://hdl.handle.net/10230/27207
- dc.language.iso eng
- dc.publisher ACM Association for Computing Machinery
- dc.relation.ispartof Proceedings of the 1st International Workshop on Multimedia Analysis and Retrieval for Multimodal Interaction (MARMI 2016); 2016 Jun 6; New York, USA. New York: ACM, 2016. p. 9-14.
- dc.relation.projectID info:eu-repo/grantAgreement/EC/H2020/645012
- dc.rights © ACM, 2016. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in Proceedings of the 1st International Workshop on Multimedia Analysis and Retrieval for Multimodal Interaction (MARMI 2016). http://doi.acm.org/10.1145/2927006.2927008
- dc.rights.accessRights info:eu-repo/semantics/openAccess
- dc.subject.keyword Valence-arousal
- dc.subject.keyword Human-machine interaction
- dc.subject.keyword Multimodal analysis
- dc.subject.keyword Embodied conversational agents
- dc.title A multimodal annotation schema for non-verbal affective analysis in the health-care domain
- dc.type info:eu-repo/semantics/conferenceObject
- dc.type.version info:eu-repo/semantics/acceptedVersion