Cross-modal prediction in speech depends on prior linguistic experience

dc.contributor.author Sánchez García, Carolina, 1984-
dc.contributor.author Enns, James T.
dc.contributor.author Soto-Faraco, Salvador, 1970-
dc.date.accessioned 2015-11-16T10:47:06Z
dc.date.available 2015-11-16T10:47:06Z
dc.date.issued 2013
dc.identifier.citation Sánchez-García C, Enns JT, Soto-Faraco S. Cross-modal prediction in speech depends on prior linguistic experience. Exp Brain Res. 2013 Feb 06;225(4):499-511. doi:10.1007/s00221-012-3390-3
dc.identifier.issn 0014-4819
dc.identifier.uri http://hdl.handle.net/10230/25097
dc.description.abstract The sight of a speaker’s facial movements during the perception of a spoken message can benefit speech processing through online predictive mechanisms. Recent evidence suggests that these predictive mechanisms can operate across sensory modalities, that is, vision and audition. However, to date, behavioral and electrophysiological demonstrations of cross-modal prediction in speech have considered only the speaker’s native language. Here, we address a question of current debate, namely whether the level of representation involved in cross-modal prediction is phonological or pre-phonological. We do this by testing participants in an unfamiliar language. If cross-modal prediction is predominantly based on phonological representations tuned to the phonemic categories of the native language of the listener, then it should be more effective in the listener’s native language than in an unfamiliar one. We tested Spanish and English native speakers in an audiovisual matching paradigm that allowed us to evaluate visual-to-auditory prediction, using sentences in the participant’s native language and in an unfamiliar language. The benefits of cross-modal prediction were only seen in the native language, regardless of the particular language or participant’s linguistic background. This pattern of results implies that cross-modal visual-to-auditory prediction during speech processing makes strong use of phonological representations, rather than low-level spatiotemporal correlations across facial movements and sounds.
dc.format.mimetype application/pdf
dc.language.iso eng
dc.publisher Springer
dc.relation.ispartof Experimental Brain Research. 2013 Feb 06;225(4):499-511.
dc.rights © Springer (The original publication is available at www.springerlink.com)
dc.title Cross-modal prediction in speech depends on prior linguistic experience
dc.type info:eu-repo/semantics/article
dc.identifier.doi http://dx.doi.org/10.1007/s00221-012-3390-3
dc.subject.keyword Audiovisual speech
dc.subject.keyword Speech perception
dc.subject.keyword Predictive coding
dc.subject.keyword Multisensory integration
dc.rights.accessRights info:eu-repo/semantics/openAccess
dc.type.version info:eu-repo/semantics/acceptedVersion

