Enriched music representations with multiple cross-modal contrastive learning


  • dc.contributor.author Ferraro, Andrés
  • dc.contributor.author Favory, Xavier
  • dc.contributor.author Drossos, Konstantinos
  • dc.contributor.author Kim, Yuntae
  • dc.contributor.author Bogdanov, Dmitry
  • dc.date.accessioned 2021-05-05T10:03:00Z
  • dc.date.issued 2021
  • dc.description.abstract Modeling the various aspects that make a music piece unique is a challenging task, requiring the combination of multiple sources of information. Deep learning is commonly used to obtain representations from various sources of information, such as the audio, interactions between users and songs, or associated genre metadata. Recently, contrastive learning has led to representations that generalize better than those from traditional supervised methods. In this paper, we present a novel approach that combines multiple types of information related to music using cross-modal contrastive learning, allowing us to learn audio features from heterogeneous data simultaneously. We align the latent representations obtained from playlist-track interactions, genre metadata, and the tracks’ audio by maximizing the agreement between these modality representations using a contrastive loss. We evaluate our approach on three tasks, namely, genre classification, playlist continuation, and automatic tagging. We compare its performance with a baseline audio-based CNN trained to predict these modalities. We also study the importance of including multiple sources of information when training our embedding model. The results suggest that the proposed method outperforms the baseline in all three downstream tasks and achieves performance comparable to the state of the art.
  • dc.description.sponsorship Thanks to Tuomas Virtanen, Soohyeon Lee and Biho Kim for their valuable feedback. This work was partially supported by Kakao Corp. K. Drossos was partially supported by the European Union’s Horizon 2020 research and innovation programme under grant agreement No 957337, project MARVEL.
  • dc.format.mimetype application/pdf
  • dc.identifier.citation Ferraro A, Favory X, Drossos K, Kim Y, Bogdanov D. Enriched music representations with multiple cross-modal contrastive learning. IEEE Signal Process Lett. 2021;28:733-7. DOI: 10.1109/LSP.2021.3071082
  • dc.identifier.doi http://dx.doi.org/10.1109/LSP.2021.3071082
  • dc.identifier.issn 1070-9908
  • dc.identifier.uri http://hdl.handle.net/10230/47323
  • dc.language.iso eng
  • dc.publisher Institute of Electrical and Electronics Engineers (IEEE)
  • dc.relation.ispartof IEEE Signal Processing Letters. 2021;28:733-7
  • dc.relation.projectID info:eu-repo/grantAgreement/EC/H2020/957337
  • dc.rights © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. http://dx.doi.org/10.1109/LSP.2021.3071082
  • dc.rights.accessRights info:eu-repo/semantics/openAccess
  • dc.subject.keyword Music
  • dc.subject.keyword Task analysis
  • dc.subject.keyword Multiple signal classification
  • dc.subject.keyword Training
  • dc.subject.keyword Mood
  • dc.subject.keyword Metadata
  • dc.subject.keyword Recommender systems
  • dc.subject.keyword Acoustic signal processing
  • dc.subject.keyword Machine learning
  • dc.subject.keyword Music information retrieval
  • dc.title Enriched music representations with multiple cross-modal contrastive learning
  • dc.type info:eu-repo/semantics/article
  • dc.type.version info:eu-repo/semantics/acceptedVersion
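The abstract describes aligning audio embeddings with playlist-interaction and genre-metadata embeddings by maximizing agreement under a contrastive loss. A minimal NumPy sketch of that idea is below; it is an illustration of a generic NT-Xent-style contrastive objective, not the paper's implementation, and all names, shapes, and the temperature value are assumptions.

```python
import numpy as np

def nt_xent(anchor, target, temperature=0.1):
    """NT-Xent-style contrastive loss: each anchor row should agree with the
    matching target row and disagree with the other rows in the batch."""
    a = anchor / np.linalg.norm(anchor, axis=1, keepdims=True)
    t = target / np.linalg.norm(target, axis=1, keepdims=True)
    logits = a @ t.T / temperature               # (batch, batch) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # positive pairs lie on the diagonal

# Toy stand-ins for the three modality encoders' outputs (illustrative only).
rng = np.random.default_rng(0)
audio = rng.normal(size=(8, 32))     # audio-encoder embeddings
playlist = rng.normal(size=(8, 32))  # playlist-interaction embeddings
genre = rng.normal(size=(8, 32))     # genre-metadata embeddings

# One contrastive term per modality, aligning the audio representation
# with each of the other latent spaces, as the abstract describes.
loss = nt_xent(audio, playlist) + nt_xent(audio, genre)
```

In training, gradients of this combined loss would be backpropagated through the audio encoder so that a single audio representation agrees with all modality embeddings at once.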