Authors: Biau, Emmanuel (1985-); Morís Fernández, Luis (1982-); Holle, Henning; Ávila, César; Soto-Faraco, Salvador (1970-)
Date available: 2016-11-14
Date issued: 2016
Citation: Biau E, Morís Fernández L, Holle H, Ávila C, Soto-Faraco S. Hand gestures as visual prosody: BOLD responses to audio–visual alignment are modulated by the communicative nature of the stimuli. NeuroImage. 2016;132:129-137. DOI: 10.1016/j.neuroimage.2016.02.018
ISSN: 1053-8119
Handle: http://hdl.handle.net/10230/27496

Abstract: During public addresses, speakers accompany their discourse with spontaneous hand gestures (beats) that are tightly synchronized with the prosodic contour of the discourse. It has been proposed that speech and beat gestures originate from a common underlying linguistic process whereby both speech prosody and beats serve to emphasize relevant information. We hypothesized that breaking the consistency between beats and prosody by temporal desynchronization would modulate activity in brain areas sensitive to speech–gesture integration. To this end, we measured BOLD responses as participants watched a natural discourse in which the speaker used beat gestures.

To identify brain areas specifically involved in processing hand gestures with communicative intention, beat synchrony was evaluated against arbitrary visual cues bearing rhythmic and spatial properties equivalent to those of the gestures. Our results revealed that the left MTG and IFG were specifically sensitive to speech synchronized with beats, compared to the arbitrary vision–speech pairing. These results suggest that listeners assign beats a function of visual prosody, complementary to the prosodic structure of speech. We conclude that the emphasizing function of beat gestures in speech perception is instantiated through a specialized brain network sensitive to the communicative intent conveyed by a speaker with his or her hands.

Format: application/pdf
Language: eng
Rights: © Elsevier. http://dx.doi.org/10.1016/j.neuroimage.2016.02.018
Title: Hand gestures as visual prosody: BOLD responses to audio–visual alignment are modulated by the communicative nature of the stimuli
Type: info:eu-repo/semantics/article
DOI: http://dx.doi.org/10.1016/j.neuroimage.2016.02.018
Keywords: Speech perception; Gestures; Audiovisual speech; Multisensory integration; MTG; fMRI
Access rights: info:eu-repo/semantics/openAccess