Cross-modal prediction in speech depends on prior linguistic experience

  • dc.contributor.author Sánchez García, Carolina, 1984-
  • dc.contributor.author Enns, James T.
  • dc.contributor.author Soto-Faraco, Salvador, 1970-
  • dc.date.accessioned 2015-11-16T10:47:06Z
  • dc.date.available 2015-11-16T10:47:06Z
  • dc.date.issued 2013
  • dc.description.abstract The sight of a speaker’s facial movements during the perception of a spoken message can benefit speech processing through online predictive mechanisms. Recent evidence suggests that these predictive mechanisms can operate across sensory modalities, that is, vision and audition. However, to date, behavioral and electrophysiological demonstrations of cross-modal prediction in speech have considered only the speaker’s native language. Here, we address a question of current debate, namely whether the level of representation involved in cross-modal prediction is phonological or pre-phonological. We do this by testing participants in an unfamiliar language. If cross-modal prediction is predominantly based on phonological representations tuned to the phonemic categories of the native language of the listener, then it should be more effective in the listener’s native language than in an unfamiliar one. We tested Spanish and English native speakers in an audiovisual matching paradigm that allowed us to evaluate visual-to-auditory prediction, using sentences in the participant’s native language and in an unfamiliar language. The benefits of cross-modal prediction were only seen in the native language, regardless of the particular language or participant’s linguistic background. This pattern of results implies that cross-modal visual-to-auditory prediction during speech processing makes strong use of phonological representations, rather than low-level spatiotemporal correlations across facial movements and sounds.
  • dc.format.mimetype application/pdf
  • dc.identifier.citation Sánchez-García C, Enns JT, Soto-Faraco S. Cross-modal prediction in speech depends on prior linguistic experience. Exp Brain Res. 2013 Feb 06;225(4): 499-511. DOI 10.1007/s00221-012-3390-3
  • dc.identifier.doi http://dx.doi.org/10.1007/s00221-012-3390-3
  • dc.identifier.issn 0014-4819
  • dc.identifier.uri http://hdl.handle.net/10230/25097
  • dc.language.iso eng
  • dc.publisher Springer
  • dc.relation.ispartof Experimental Brain Research. 2013 Feb 06;225(4): 499-511.
  • dc.rights © Springer (The original publication is available at www.springerlink.com)
  • dc.rights.accessRights info:eu-repo/semantics/openAccess
  • dc.subject.keyword Audiovisual speech
  • dc.subject.keyword Speech perception
  • dc.subject.keyword Predictive coding
  • dc.subject.keyword Multisensory integration
  • dc.title Cross-modal prediction in speech depends on prior linguistic experience
  • dc.type info:eu-repo/semantics/article
  • dc.type.version info:eu-repo/semantics/acceptedVersion