Cross-modal prediction in speech perception


  • dc.contributor.author Sánchez García, Carolina, 1984-
  • dc.contributor.author Alsius, Agnès
  • dc.contributor.author Enns, James T.
  • dc.contributor.author Soto-Faraco, Salvador, 1970-
  • dc.date.accessioned 2015-05-28T07:05:41Z
  • dc.date.available 2015-05-28T07:05:41Z
  • dc.date.issued 2011
  • dc.description.abstract Speech perception often benefits from vision of the speaker's lip movements when they are available. One potential mechanism underlying this reported gain in perception arising from audio-visual integration is on-line prediction. In this study we address whether the preceding speech context in a single modality can improve audiovisual processing and whether this improvement is based on on-line information-transfer across sensory modalities. In the experiments presented here, during each trial, a speech fragment (context) presented in a single sensory modality (voice or lips) was immediately continued by an audiovisual target fragment. Participants made speeded judgments about whether voice and lips were in agreement in the target fragment. The leading single sensory context and the subsequent audiovisual target fragment could be continuous in either one modality only, both (context in one modality continues into both modalities in the target fragment) or neither modalities (i.e., discontinuous). The results showed quicker audiovisual matching responses when context was continuous with the target within either the visual or auditory channel (Experiment 1). Critically, prior visual context also provided an advantage when it was cross-modally continuous (with the auditory channel in the target), but auditory to visual cross-modal continuity resulted in no advantage (Experiment 2). This suggests that visual speech information can provide an on-line benefit for processing the upcoming auditory input through the use of predictive mechanisms. We hypothesize that this benefit is expressed at an early level of speech analysis.
  • dc.description.sponsorship This research was supported by the Spanish Ministry of Science and Innovation (PSI2010-15426 and Consolider INGENIO CSD2007-00012), Comissionat per a Universitats i Recerca del DIUE-Generalitat de Catalunya (SRG2009-092 and PIV2009-00122), and the European Research Council (StG-2010 263145).
  • dc.format.mimetype application/pdf
  • dc.identifier.citation Sánchez-García C, Alsius A, Enns JT, Soto-Faraco S. Cross-modal prediction in speech perception. PLoS One. 2011;6(10):e25198. DOI: 10.1371/journal.pone.0025198
  • dc.identifier.doi http://dx.doi.org/10.1371/journal.pone.0025198
  • dc.identifier.issn 1932-6203
  • dc.identifier.uri http://hdl.handle.net/10230/23674
  • dc.language.iso eng
  • dc.publisher Public Library of Science (PLoS)
  • dc.relation.ispartof PLoS One. 2011;6(10): e25198
  • dc.relation.projectID info:eu-repo/grantAgreement/EC/FP7/263145
  • dc.relation.projectID info:eu-repo/grantAgreement/ES/2PN/CSD2007-00012
  • dc.rights © 2011 Sánchez-García et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
  • dc.rights.accessRights info:eu-repo/semantics/openAccess
  • dc.rights.uri http://creativecommons.org/licenses/by/4.0/
  • dc.title Cross-modal prediction in speech perception
  • dc.type info:eu-repo/semantics/article
  • dc.type.version info:eu-repo/semantics/publishedVersion