Music emotion recognition: toward new, robust standards in personalized and context-sensitive applications
- dc.contributor.author Gómez Cañón, Juan Sebastián
- dc.contributor.author Cano, Estefanía
- dc.contributor.author Eerola, Tuomas
- dc.contributor.author Herrera Boyer, Perfecto, 1964-
- dc.contributor.author Hu, Xiao
- dc.contributor.author Yang, Yi-Hsuan
- dc.contributor.author Gómez Gutiérrez, Emilia, 1975-
- dc.date.accessioned 2023-02-15T07:24:19Z
- dc.date.available 2023-02-15T07:24:19Z
- dc.date.issued 2021
- dc.description.abstract Emotion is one of the main reasons why people engage and interact with music [1]. Songs can express our inner feelings, produce goosebumps, bring us to tears, let us share an emotional state with a composer or performer, or trigger specific memories. Interest in a deeper understanding of the relationship between music and emotion has motivated researchers from various areas of knowledge for decades [2], including computational researchers. Imagine an algorithm capable of predicting the emotions that a listener perceives in a musical piece, or one that dynamically generates music adapted to the mood of a conversation in a film: a particularly fascinating and provocative idea. These algorithms typify music emotion recognition (MER), a computational task that attempts to automatically recognize either the emotional content in music or the emotions induced by music in the listener [3]. To do so, emotionally relevant features are extracted from the music, processed, evaluated, and then associated with certain emotions. MER is one of the most challenging high-level music description problems in music information retrieval (MIR), an interdisciplinary research field focused on developing computational systems that help humans better understand music collections. MIR integrates concepts and methodologies from several disciplines, including music theory, music psychology, neuroscience, signal processing, and machine learning.
- dc.description.sponsorship Juan Gómez-Cañón is partially supported by the European Commission under the TROMPA project (H2020 770376).
- dc.format.mimetype application/pdf
- dc.identifier.citation Gómez-Cañón JS, Cano E, Eerola T, Herrera P, Hu X, Yang YH, Gómez E. Music emotion recognition: toward new, robust standards in personalized and context-sensitive applications. IEEE Signal Process Mag. 2021;38(6):106-14. DOI: 10.1109/MSP.2021.3106232
- dc.identifier.doi http://dx.doi.org/10.1109/MSP.2021.3106232
- dc.identifier.issn 1053-5888
- dc.identifier.uri http://hdl.handle.net/10230/55773
- dc.language.iso eng
- dc.publisher Institute of Electrical and Electronics Engineers (IEEE)
- dc.relation.ispartof IEEE Signal Processing Magazine. 2021;38(6):106-14.
- dc.relation.projectID info:eu-repo/grantAgreement/EC/H2020/770376
- dc.rights © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. http://dx.doi.org/10.1109/MSP.2021.3106232
- dc.rights.accessRights info:eu-repo/semantics/openAccess
- dc.subject.keyword Emotion recognition
- dc.subject.keyword Mood
- dc.subject.keyword Heuristic algorithms
- dc.subject.keyword Computational modeling
- dc.subject.keyword Music
- dc.subject.keyword Signal processing algorithms
- dc.subject.keyword Prediction algorithms
- dc.subject.keyword Information retrieval
- dc.title Music emotion recognition: toward new, robust standards in personalized and context-sensitive applications
- dc.type info:eu-repo/semantics/article
- dc.type.version info:eu-repo/semantics/acceptedVersion