MusAV: a dataset of relative arousal-valence annotations for validation of audio models

dc.contributor.author Bogdanov, Dmitry
dc.contributor.author Lizarraga Seijas, Xavier
dc.contributor.author Alonso-Jiménez, Pablo
dc.contributor.author Serra, Xavier
dc.date.accessioned 2023-04-11T06:42:59Z
dc.date.available 2023-04-11T06:42:59Z
dc.date.issued 2022
dc.identifier.citation Bogdanov D, Lizarraga-Seijas X, Alonso-Jiménez P, Serra X. MusAV: a dataset of relative arousal-valence annotations for validation of audio models. In: Rao P, Murthy H, Srinivasamurthy A, Bittner R, Caro Repetto R, Goto M, Serra X, Miron M, editors. Proceedings of the 23rd International Society for Music Information Retrieval Conference (ISMIR 2022); 2022 Dec 4-8; Bengaluru, India. [Canada]: International Society for Music Information Retrieval; 2022. p. 650-8. DOI: 10.5281/zenodo.7316746
dc.identifier.isbn 978-1-7327299-2-6
dc.identifier.uri http://hdl.handle.net/10230/56442
dc.description Paper presented at the 23rd International Society for Music Information Retrieval Conference (ISMIR 2022), held December 4-8, 2022 in Bengaluru, India.
dc.description.abstract We present MusAV, a new public benchmark dataset for comparative validation of arousal and valence (AV) regression models for audio-based music emotion recognition. To gather the ground truth, we rely on relative judgments instead of absolute values to simplify the manual annotation process and improve its consistency. We build MusAV by gathering comparative annotations of arousal and valence on pairs of tracks, using track audio previews and metadata from the Spotify API. The resulting dataset contains 2,092 track previews covering 1,404 genres, with pairwise relative AV judgments by 20 annotators and various subsets of the ground truth based on different levels of annotation agreement. We demonstrate the use of the dataset in an example study evaluating nine models for AV regression that we train based on state-of-the-art audio embeddings and three existing datasets of absolute AV annotations. The results on MusAV offer a view of the performance of the models complementary to the metrics obtained during training and provide insights into the impact of the considered datasets and embeddings on the generalization abilities of the models.
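dc.description.note The abstract describes validating AV regression models against relative (pairwise) judgments rather than absolute values. The sketch below illustrates one plausible way such an evaluation could be carried out: for each annotated pair, check whether a model's predicted arousal and valence reproduce the annotated ordering. The file layout, column names, and predict_av() function are hypothetical and are not part of the MusAV release; this is an illustration, not the authors' evaluation code.

# Minimal sketch (assumed data layout): pairwise-agreement evaluation of an
# arousal-valence (AV) regression model against relative annotations.
import csv

def predict_av(track_id):
    """Placeholder for a model returning (arousal, valence) for a track."""
    raise NotImplementedError

def pairwise_agreement(pairs_csv):
    """Fraction of annotated pairs whose relative ordering the model reproduces."""
    correct_arousal = correct_valence = total = 0
    with open(pairs_csv, newline="") as f:
        for row in csv.DictReader(f):
            a1, v1 = predict_av(row["track_a"])
            a2, v2 = predict_av(row["track_b"])
            # Hypothetical columns stating which track annotators judged higher.
            if (a1 > a2) == (row["higher_arousal"] == "track_a"):
                correct_arousal += 1
            if (v1 > v2) == (row["higher_valence"] == "track_a"):
                correct_valence += 1
            total += 1
    return correct_arousal / total, correct_valence / total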
dc.description.sponsorship This research was carried out under the project Musical AI - PID2019-111403GB-I00/AEI/10.13039/501100011033, funded by the Spanish Ministerio de Ciencia e Innovación and the Agencia Estatal de Investigación. We also thank Juan Sebastián Gómez Cañón for his suggestions and all participating annotators.
dc.format.mimetype application/pdf
dc.language.iso eng
dc.publisher International Society for Music Information Retrieval (ISMIR)
dc.relation.ispartof Rao P, Murthy H, Srinivasamurthy A, Bittner R, Caro Repetto R, Goto M, Serra X, Miron M, editors. Proceedings of the 23rd International Society for Music Information Retrieval Conference (ISMIR 2022); 2022 Dec 4-8; Bengaluru, India. [Canada]: International Society for Music Information Retrieval; 2022. p. 650-8.
dc.relation.isreferencedby https://github.com/MTG/musav-annotator
dc.rights © D. Bogdanov, X. Lizarraga-Seijas, P. Alonso-Jiménez, and X. Serra. Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).
dc.rights.uri http://creativecommons.org/licenses/by/4.0/
dc.subject.other Música -- Informàtica
dc.title MusAV: a dataset of relative arousal-valence annotations for validation of audio models
dc.type info:eu-repo/semantics/conferenceObject
dc.identifier.doi http://dx.doi.org/10.5281/zenodo.7316746
dc.relation.projectID info:eu-repo/grantAgreement/ES/2PE/PID2019-111403GB-I00
dc.rights.accessRights info:eu-repo/semantics/openAccess
dc.type.version info:eu-repo/semantics/publishedVersion
