Multimodal person identification in a smart room
Citation
- Luque J, Morros R, Anguita J, Farrús M, Macho D, Marqués F, Martínez C, Vilaplana V, Hernando J. Multimodal person identification in a smart room. In: Buera L, Lleida E, Miguel A, Ortega A, editors. IV Jornadas en Tecnología del Habla; 2006 Nov 8-10; Zaragoza (Spain). Zaragoza: Universidad de Zaragoza; 2006. p. 327-31.
Permanent link
Description
Abstract
In this paper we present a person identification system based on a combination of acoustic features and 2D face images. We address the modality integration issue on the example of a smart room environment. In order to improve on the results of the individual modalities, the audio and video classifiers are integrated after a set of normalization and fusion techniques. First we introduce the monomodal acoustic and video identification approaches, and then we present the use of combined input speech and face images for person identification. The sensory modalities, speech and faces, are processed both individually and jointly. The results obtained in the CLEAR'06 Evaluation Campaign show that the multimodal approach improves the identification of the participants.
Paper presented at: IV Jornadas en Tecnología del Habla, held from 8 to 10 November 2006 in Zaragoza.
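The score normalization and fusion step described in the abstract can be sketched in a few lines. This is a minimal illustration using min-max normalization and weighted-sum fusion, which are common choices for this kind of score-level integration; the function names and the 0.5 default weight are assumptions for illustration, not the paper's exact method or parameters.

```python
def min_max_normalize(scores):
    """Scale a list of classifier scores to the [0, 1] range."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]  # degenerate case: all scores equal
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(audio_scores, video_scores, audio_weight=0.5):
    """Weighted-sum fusion of per-identity scores from two modalities.

    Both score lists are indexed by candidate identity; each modality is
    normalized independently before the weighted combination.
    """
    a = min_max_normalize(audio_scores)
    v = min_max_normalize(video_scores)
    w = audio_weight
    return [w * sa + (1 - w) * sv for sa, sv in zip(a, v)]

def identify(audio_scores, video_scores, audio_weight=0.5):
    """Return the index of the identity with the highest fused score."""
    fused = fuse_scores(audio_scores, video_scores, audio_weight)
    return max(range(len(fused)), key=fused.__getitem__)
```

For example, if the audio classifier scores three candidates `[0.2, 0.9, 0.1]` and the face classifier scores them `[0.3, 0.8, 0.2]`, both modalities agree and `identify` returns candidate `1`; when the modalities disagree, `audio_weight` controls which one dominates.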