Authors: Luque, Jordi; Morros, R.; Garde, I.; Anguita, Jan; Farrús, Mireia; Macho, D.; Marqués López, Fernando; Martínez, C.; Vilaplana, Verónica; Hernando, Javier
Date accessioned: 2017-09-04
Date available: 2017-09-04
Date issued: 2007
Citation: Luque J, Morros R, Garde I, Anguita J, Farrús M, Macho D, Marqués F, Martínez C, Vilaplana V, Hernando J. Audio, video and multimodal person identification in a smart room. In: Stiefelhagen R, Garofolo J, editors. Multimodal technologies for perception of humans: first International Evaluation Workshop on Classification of Events, Activities and Relationships, CLEAR 2006; 2006 Apr. 6-7; Southampton (UK). Germany: Springer; 2007. p. 258-69. (LNCS; no. 4122). DOI: 10.1007/978-3-540-69568-4_23
ISSN: 0302-9743
URI: http://hdl.handle.net/10230/32740
Description: Paper presented at the First International Evaluation Workshop on Classification of Events, Activities and Relationships, CLEAR 2006, held in Southampton, United Kingdom, on 6-7 April 2006.
Abstract: In this paper, we address the problem of modality integration in a smart-room environment, aiming at person identification by combining speech and 2D face images. We first introduce the monomodal audio and video identification techniques and then present the use of combined speech and face images for person identification. The sensory modalities, speech and faces, are processed both individually and jointly. It is shown that the multimodal approach results in improved identification performance for the participants.
Format: application/pdf
Language: eng
Rights: © Springer. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-540-69568-4_23
Title: Audio, video and multimodal person identification in a smart room
Type: info:eu-repo/semantics/conferenceObject
DOI: http://dx.doi.org/10.1007/978-3-540-69568-4_23
Keywords: Multimodality; Speaker recognition
Access rights: info:eu-repo/semantics/openAccess
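
Note: The abstract describes combining monomodal speaker and face identification scores but does not state the fusion rule used in the paper. As an illustration only, the sketch below shows a common approach, weighted score-level fusion of normalized per-identity scores; the function names, weights, and toy scores are hypothetical and do not reproduce the authors' method.

import numpy as np

def min_max_normalize(scores):
    """Map raw matcher scores to [0, 1] so the two modalities are comparable."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def fuse_and_identify(audio_scores, face_scores, w_audio=0.5):
    """Weighted-sum score fusion: combine per-identity scores from a speaker
    matcher and a face matcher, then return the best-scoring identity index.
    w_audio is a hypothetical tuning parameter, not a value from the paper."""
    a = min_max_normalize(np.asarray(audio_scores, dtype=float))
    f = min_max_normalize(np.asarray(face_scores, dtype=float))
    fused = w_audio * a + (1.0 - w_audio) * f
    return int(np.argmax(fused)), fused

# Toy usage: per-identity scores for three enrolled persons (hypothetical values).
audio = [2.1, 5.4, 3.3]    # e.g. speaker-model log-likelihoods
face = [0.60, 0.55, 0.80]  # e.g. face-matcher similarities
identity, fused = fuse_and_identify(audio, face, w_audio=0.6)
print("identified person index:", identity, "fused scores:", fused)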