Part-of-speech and prosody-based approaches for robot speech and gesture synchronization


  • dc.contributor.author Pérez Mayos, Laura
  • dc.contributor.author Farrús, Mireia
  • dc.contributor.author Adell, Jordi
  • dc.date.accessioned 2019-11-19T15:37:29Z
  • dc.date.issued 2019
  • dc.description.abstract Humanoid robots are already among us and they are beginning to assume more social and personal roles, like guiding and assisting people. Thus, they should interact in a human-friendly manner, using not only verbal cues but also synchronized non-verbal and para-verbal cues. However, available robots are not able to communicate in this multimodal way, being able only to perform predefined gesture sequences, hand-crafted to accompany specific utterances. In the current paper, we propose a model based on three different approaches to extend humanoid robots' communication behaviour with upper body gestures synchronized with speech for novel utterances, exploiting part-of-speech grammatical information, prosody cues, and a combination of both. User studies confirm that our methods are able to produce natural, appropriate and well-timed gesture sequences synchronized with speech, using both beat and emblematic gestures.
  • dc.description.sponsorship The second author has been funded by the Agencia Estatal de Investigación (AEI), Ministerio de Ciencia, Innovación y Universidades and the Fondo Social Europeo (FSE) under grant RYC-2015-17239 (AEI/FSE, UE). The authors would like to thank the anonymous reviewers that helped to improve this paper through their valuable comments.
  • dc.format.mimetype application/pdf
  • dc.identifier.citation Pérez-Mayos L, Farrus M, Adell J. Part-of-speech and prosody-based approaches for robot speech and gesture synchronization. J Intell Robot Syst. 2019 Nov 16:1-11. DOI: 10.1007/s10846-019-01100-3
  • dc.identifier.doi http://dx.doi.org/10.1007/s10846-019-01100-3
  • dc.identifier.issn 0921-0296
  • dc.identifier.uri http://hdl.handle.net/10230/42897
  • dc.language.iso eng
  • dc.publisher Springer
  • dc.relation.ispartof Journal of intelligent & robotic systems. 2019 Nov 16:1-11
  • dc.rights © Springer. The final publication is available at Springer via https://doi.org/10.1007/s10846-019-01100-3
  • dc.rights.accessRights info:eu-repo/semantics/openAccess
  • dc.subject.keyword Human-computer interaction
  • dc.subject.keyword Multimodal interaction
  • dc.subject.keyword Humanoid robots
  • dc.subject.keyword Prosody
  • dc.subject.keyword Speech
  • dc.subject.keyword Gesture modelling
  • dc.subject.keyword Arm gesture synthesis
  • dc.subject.keyword Speech and gesture synchronization
  • dc.subject.keyword Text-to-gesture
  • dc.title Part-of-speech and prosody-based approaches for robot speech and gesture synchronization
  • dc.type info:eu-repo/semantics/article
  • dc.type.version info:eu-repo/semantics/acceptedVersion