Mood classification using listening data
- dc.contributor.author Korzeniowski, Filip
- dc.contributor.author Nieto Caballero, Oriol
- dc.contributor.author McCallum, Matthew C.
- dc.contributor.author Won, Minz
- dc.contributor.author Oramas, Sergio
- dc.contributor.author Schmidt, Erik M.
- dc.date.accessioned 2020-11-11T08:43:35Z
- dc.date.available 2020-11-11T08:43:35Z
- dc.date.issued 2020
- dc.description Paper presented at: International Society for Music Information Retrieval Conference, held virtually from 11 to 16 October 2020.
- dc.description.abstract The mood of a song is a highly relevant feature for exploration and recommendation in large collections of music. These collections tend to require automatic methods for predicting such moods. In this work, we show that listening-based features outperform content-based ones when classifying moods: embeddings obtained through matrix factorization of listening data appear to be more informative of a track's mood than embeddings based on its audio content. To demonstrate this, we compile a subset of the Million Song Dataset, totalling 67k tracks, with expert annotations of 188 different moods collected from AllMusic. Our results on this novel dataset not only expose the limitations of current audio-based models, but also aim to foster further reproducible research on this timely topic.
- dc.format.mimetype application/pdf
- dc.identifier.citation Korzeniowski F, Nieto O, McCallum MC, Won M, Oramas S, Schmidt EM. Mood classification using listening data. In: Cumming J, Ha Lee J, McFee B, Schedl M, Devaney J, McKay C, Zangerle E, de Reuse T, editors. Proceedings of the 21st International Society for Music Information Retrieval Conference; 2020 Oct 11-16; Montréal, Canada. [Canada]: ISMIR; 2020. p. 542-9.
- dc.identifier.uri http://hdl.handle.net/10230/45720
- dc.language.iso eng
- dc.publisher International Society for Music Information Retrieval (ISMIR)
- dc.relation.ispartof Cumming J, Ha Lee J, McFee B, Schedl M, Devaney J, McKay C, Zangerle E, de Reuse T, editors. Proceedings of the 21st International Society for Music Information Retrieval Conference; 2020 Oct 11-16; Montréal, Canada. [Canada]: ISMIR; 2020. p. 542-9
- dc.rights © Filip Korzeniowski, Oriol Nieto, Matthew C. McCallum, Minz Won, Sergio Oramas, Erik M. Schmidt. Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Attribution: Filip Korzeniowski, Oriol Nieto, Matthew C. McCallum, Minz Won, Sergio Oramas, Erik M. Schmidt, "Mood classification using listening data", in Proc. of the 21st Int. Society for Music Information Retrieval Conf., Montréal, Canada, 2020.
- dc.rights.accessRights info:eu-repo/semantics/openAccess
- dc.rights.uri https://creativecommons.org/licenses/by/4.0/
- dc.title Mood classification using listening data
- dc.type info:eu-repo/semantics/conferenceObject
- dc.type.version info:eu-repo/semantics/publishedVersion
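
The abstract above outlines the paper's pipeline at a high level: track embeddings are obtained by matrix factorization of listening (user-track) data and then fed to a mood classifier. As a rough illustration only, the sketch below mimics that pipeline on synthetic data. It is not the authors' implementation: truncated SVD stands in for whatever factorization method the paper uses, and every matrix, size, and label here is made up, except the 188-mood count taken from the abstract.

```python
# A minimal sketch of the idea in the abstract, NOT the authors' code:
# factorize a user-track listening matrix to obtain track embeddings,
# then train a multi-label mood classifier on those embeddings.
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)
n_users, n_tracks, n_moods, dim = 1000, 500, 188, 32

# Synthetic implicit-feedback matrix: users x tracks interaction strengths.
listens = sparse_random(n_users, n_tracks, density=0.02,
                        format="csr", random_state=0)

# Truncated SVD of the listening matrix; the right singular vectors give
# one dim-dimensional "listening embedding" per track.
_, _, vt = svds(listens, k=dim)
track_embeddings = vt.T                      # shape: (n_tracks, dim)

# Synthetic multi-label mood annotations (the paper uses 188 AllMusic moods).
moods = (rng.random((n_tracks, n_moods)) < 0.2).astype(int)

# One-vs-rest logistic regression over the embeddings as a simple baseline.
X_tr, X_te, y_tr, y_te = train_test_split(
    track_embeddings, moods, test_size=0.2, random_state=0)
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)

# Macro ROC-AUC across mood labels; with random data this hovers near 0.5,
# since all the signal in the paper's setting comes from real listening data.
print("macro ROC-AUC:",
      roc_auc_score(y_te, clf.predict_proba(X_te), average="macro"))
```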