Musical instrument recognition in user-generated videos using a multimodal convolutional neural network architecture
- dc.contributor.author Slizovskaia, Olga
- dc.contributor.author Gómez Gutiérrez, Emilia, 1975-
- dc.contributor.author Haro Ortega, Gloria
- dc.date.accessioned 2018-12-04T09:28:59Z
- dc.date.available 2018-12-04T09:28:59Z
- dc.date.issued 2017
- dc.description Paper presented at the International Conference on Multimedia Retrieval, held June 6-9, 2017 in Bucharest, Romania.
- dc.description.abstract This paper presents a method for recognizing musical instruments in user-generated videos. Musical instrument recognition from music signals is a well-known task in the music information retrieval (MIR) field, where current approaches rely on the analysis of good-quality audio material. This work addresses a real-world scenario with several research challenges, namely the analysis of user-generated videos that vary in recording conditions and quality, may contain multiple instruments sounding simultaneously, and may include background noise. Our approach does not focus solely on audio information; instead, we exploit the multimodal information embedded in the audio and visual domains. To do so, we develop a Convolutional Neural Network (CNN) architecture which combines learned representations from both modalities at a late fusion stage. Our approach is trained and evaluated on two large-scale video datasets: YouTube-8M and FCVID. The proposed architectures demonstrate state-of-the-art results in audio and video object recognition, provide additional robustness to missing modalities, and remain computationally cheap to train.
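The abstract describes a two-branch CNN whose per-modality representations are concatenated at a late fusion stage before joint classification. The following is a minimal sketch of that general late-fusion pattern in PyTorch; the layer sizes, branch layouts, class count, and names (ModalityBranch, LateFusionNet) are illustrative assumptions, not the paper's exact architecture.

    import torch
    import torch.nn as nn

    class ModalityBranch(nn.Module):
        """Small CNN mapping one modality (e.g. a log-mel spectrogram
        or a video frame) to a fixed-size embedding. Hypothetical layout."""
        def __init__(self, in_channels: int, embed_dim: int = 128):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),  # global pooling -> (B, 64, 1, 1)
            )
            self.proj = nn.Linear(64, embed_dim)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.proj(self.features(x).flatten(1))

    class LateFusionNet(nn.Module):
        """Audio and video branches trained jointly; their embeddings are
        concatenated (late fusion) and fed to a shared classifier."""
        def __init__(self, num_classes: int = 12, embed_dim: int = 128):
            super().__init__()
            self.audio_branch = ModalityBranch(in_channels=1, embed_dim=embed_dim)
            self.video_branch = ModalityBranch(in_channels=3, embed_dim=embed_dim)
            self.classifier = nn.Linear(2 * embed_dim, num_classes)

        def forward(self, audio: torch.Tensor, video: torch.Tensor) -> torch.Tensor:
            fused = torch.cat(
                [self.audio_branch(audio), self.video_branch(video)], dim=1
            )
            return self.classifier(fused)

    # Usage: one spectrogram "image" and one RGB frame per clip (shapes assumed).
    model = LateFusionNet()
    spec = torch.randn(4, 1, 96, 128)    # batch of log-mel spectrograms
    frame = torch.randn(4, 3, 224, 224)  # batch of video frames
    logits = model(spec, frame)          # (4, num_classes)

Because fusion happens only at the embedding level, either branch can be evaluated or ablated on its own, which is one plausible reading of the robustness to missing modalities reported in the abstract.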
- dc.description.sponsorship This work is partly supported by the Spanish Ministry of Economy and Competitiveness under the Maria de Maeztu Units of Excellence Programme (MDM-2015-0502), the CASAS Spanish research project (TIN2015-70816-R), and project TIN2015-70410-C2-1-R (MINECO/FEDER, UE). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X GPU used for this research.
- dc.format.mimetype application/pdf
- dc.identifier.citation Slizovskaia O, Gómez E, Haro G. Musical instrument recognition in user-generated videos using a multimodal convolutional neural network architecture. In: ICMR 2017. ACM International Conference on Multimedia Retrieval; 2017 Jun 6-9; Bucharest, Romania. New York (NY): ACM; 2017. p. 226-32. DOI: 10.1145/3078971.3079002
- dc.identifier.doi http://dx.doi.org/10.1145/3078971.3079002
- dc.identifier.isbn 978-1-4503-4701-3
- dc.identifier.uri http://hdl.handle.net/10230/35952
- dc.language.iso eng
- dc.publisher ACM Association for Computing Machinery
- dc.relation.ispartof ICMR 2017. ACM International Conference on Multimedia Retrieval; 2017 Jun 6-9; Bucharest, Romania. New York (NY): ACM; 2017.
- dc.relation.projectID info:eu-repo/grantAgreement/ES/1PE/TIN2015-70816-R
- dc.relation.projectID info:eu-repo/grantAgreement/ES/1PE/TIN2015-70410-C2-1-R
- dc.rights © 2017 Association for Computing Machinery
- dc.rights.accessRights info:eu-repo/semantics/openAccess
- dc.subject.keyword Multimodal musical instrument classification
- dc.subject.keyword Convolutional neural networks
- dc.subject.keyword Multimodal video analysis
- dc.subject.keyword Feature fusion
- dc.subject.keyword Multimedia information retrieval
- dc.title Musical instrument recognition in user-generated videos using a multimodal convolutional neural network architecture
- dc.type info:eu-repo/semantics/conferenceObject
- dc.type.version info:eu-repo/semantics/acceptedVersion