Authors: Farrús, Mireia; Luque, Jordi; Morros, R.; Anguita, Jan; Macho, D.; Marqués, Marta; Martínez, C.; Vilaplana, Verónica; Hernando, Javier
Date available: 2017-05-10
Date issued: 2006
Citation: Luque J, Morros R, Anguita J, Farrús M, Macho D, Marqués F, Martínez C, Vilaplana V, Hernando J. Multimodal person identification in a smart room. In: Buera L, Lleida E, Miguel A, Ortega A, editors. IV Jornadas en Tecnología del Habla; 2006 Nov 8-10; Zaragoza (Spain). Zaragoza: Universidad de Zaragoza; 2006. p. 327-31.
Handle: http://hdl.handle.net/10230/32122
Note: Paper presented at the IV Jornadas en Tecnología del Habla, held 8-10 November 2006 in Zaragoza.
Abstract: In this paper we present a person identification system based on a combination of acoustic features and 2D face images. We address the modality integration issue on the example of a smart room environment. In order to improve on the results of the individual modalities, the audio and video classifiers are integrated through a set of normalization and fusion techniques. First we introduce the monomodal acoustic and video identification approaches, and then we present the use of combined input speech and face images for person identification. The two sensory modalities, speech and faces, are processed both individually and jointly. The results obtained in the CLEAR'06 Evaluation Campaign show that the multimodal approach improves the identification of the participants.
Format: application/pdf
Language: eng
Rights: © The authors. This document is subject to a Creative Commons license.
Title: Multimodal person identification in a smart room
Type: info:eu-repo/semantics/conferenceObject
Access: info:eu-repo/semantics/openAccess
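The abstract describes integrating the audio and video classifiers through normalization and fusion. As a minimal illustrative sketch only (the paper does not specify these exact choices; the functions, weights, and scores below are hypothetical), score-level fusion is often done by min-max normalizing each modality's per-identity scores and combining them with a weighted sum:

```python
# Illustrative sketch of score-level multimodal fusion: min-max
# normalization of each classifier's scores, then a weighted sum.
# NOT the paper's exact method; weights and score values are made up.

def min_max_normalize(scores):
    """Map raw classifier scores onto [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.5 for _ in scores]  # degenerate case: all scores equal
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(audio_scores, face_scores, w_audio=0.5):
    """Weighted-sum fusion of normalized per-identity scores."""
    a = min_max_normalize(audio_scores)
    f = min_max_normalize(face_scores)
    return [w_audio * ai + (1.0 - w_audio) * fi for ai, fi in zip(a, f)]

def identify(audio_scores, face_scores, w_audio=0.5):
    """Return the index of the identity with the highest fused score."""
    fused = fuse(audio_scores, face_scores, w_audio)
    return max(range(len(fused)), key=fused.__getitem__)
```

The weight `w_audio` would typically be tuned on development data so that the more reliable modality dominates; setting it to 0 or 1 recovers the monomodal face or speech system, respectively.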