Searching for audiovisual correspondence in multiple speaker scenarios
- dc.contributor.author Alsius, Agnès
- dc.contributor.author Soto-Faraco, Salvador, 1970-
- dc.date.accessioned 2015-11-16T10:47:09Z
- dc.date.available 2015-11-16T10:47:09Z
- dc.date.issued 2011
- dc.description.abstract A critical question in multisensory processing is how the constant flow of information arriving at our different senses is organized into coherent representations. Some authors claim that pre-attentive detection of inter-sensory correlations supports crossmodal binding, whereas other findings indicate that attention plays a crucial role. We used visual and auditory search tasks for speaking faces to address the role of selective spatial attention in audiovisual binding. Search efficiency amongst faces for the match with a voice declined with the number of faces being monitored concurrently, consistent with an attentive search mechanism. In contrast, search amongst auditory speech streams for the match with a face was independent of the number of streams being monitored concurrently, as long as localization was not required. We suggest that fundamental differences in the way in which auditory and visual information is encoded play a limiting role in crossmodal binding. Based on these unisensory limitations, we provide a unified explanation for several previous, apparently contradictory findings.
- dc.description.sponsorship This work was supported by grants PSI2010-15426 and Consolider INGENIO CSD2007-00012 (MICINN), Generalitat de Catalunya (SRG2009-092), and European Research Council (StG-2010 263145).
- dc.format.mimetype application/pdf
- dc.identifier.citation Alsius A, Soto-Faraco S. Searching for audiovisual correspondence in multiple speaker scenarios. Exp Brain Res. 2011 Sep;213(2-3):175-83. DOI 10.1007/s00221-011-2624-0
- dc.identifier.doi http://dx.doi.org/10.1007/s00221-011-2624-0
- dc.identifier.issn 0014-4819
- dc.identifier.uri http://hdl.handle.net/10230/25098
- dc.language.iso eng
- dc.publisher Springer
- dc.relation.ispartof Experimental Brain Research. 2011 Sep;213(2-3): 175-83.
- dc.relation.projectID info:eu-repo/grantAgreement/EC/FP7/263145
- dc.relation.projectID info:eu-repo/grantAgreement/ES/2PN/CSD2007-00012
- dc.rights © Springer (The original publication is available at www.springerlink.com)
- dc.rights.accessRights info:eu-repo/semantics/openAccess
- dc.subject.keyword Multisensory integration
- dc.subject.keyword Audiovisual speech perception
- dc.subject.keyword Spatial attention
- dc.subject.keyword Visual search
- dc.subject.keyword Auditory search
- dc.title Searching for audiovisual correspondence in multiple speaker scenarios
- dc.type info:eu-repo/semantics/article
- dc.type.version info:eu-repo/semantics/acceptedVersion