Authors: Pérez Mayos, Laura; Farrús, Mireia; Adell, Jordi
Date available: 2019-11-19
Date issued: 2019
Citation: Pérez-Mayos L, Farrús M, Adell J. Part-of-speech and prosody-based approaches for robot speech and gesture synchronization. J Intell Robot Syst. 2019 Nov 16:1-11. DOI: 10.1007/s10846-019-01100-3
ISSN: 0921-0296
Handle: http://hdl.handle.net/10230/42897

Abstract: Humanoid robots are already among us, and they are beginning to assume more social and personal roles, such as guiding and assisting people. They should therefore interact in a human-friendly manner, using not only verbal cues but also synchronized non-verbal and para-verbal cues. However, available robots cannot communicate in this multimodal way: they can only perform predefined gesture sequences, hand-crafted to accompany specific utterances. In this paper, we propose a model based on three different approaches to extend humanoid robots' communication behaviour with upper-body gestures synchronized with speech for novel utterances, exploiting part-of-speech grammatical information, prosody cues, and a combination of both. User studies confirm that our methods produce natural, appropriate, and well-timed gesture sequences synchronized with speech, using both beat and emblematic gestures.

Format: application/pdf
Language: eng
Rights: © Springer. The final publication is available at Springer via https://doi.org/10.1007/s10846-019-01100-3
Title: Part-of-speech and prosody-based approaches for robot speech and gesture synchronization
Type: info:eu-repo/semantics/article
DOI: http://dx.doi.org/10.1007/s10846-019-01100-3
Keywords: Human-computer interaction; Multimodal interaction; Humanoid robots; Prosody; Speech; Gesture modelling; Arm gesture synthesis; Speech and gesture synchronization; Text-to-gesture
Access rights: info:eu-repo/semantics/openAccess