A deep learning based analysis-synthesis framework for unison singing
- dc.contributor.author Chandna, Pritish
- dc.contributor.author Cuesta, Helena
- dc.contributor.author Gómez Gutiérrez, Emilia, 1975-
- dc.date.accessioned 2020-11-11T07:23:50Z
- dc.date.available 2020-11-11T07:23:50Z
- dc.date.issued 2020
- dc.description Paper presented at: International Society for Music Information Retrieval Conference, held online, 11-16 October 2020.
- dc.description.abstract Unison singing is the name given to an ensemble of singers simultaneously singing the same melody and lyrics. While each individual singer in a unison sings the same principal melody, there are slight timing and pitch deviations between the singers, which, along with the ensemble of timbres, give the listener a perceived sense of "unison". In this paper, we present a study of unison singing in the context of choirs; utilising some recently proposed deep-learning based methodologies, we analyse the fundamental frequency (F0) distribution of the individual singers in recordings of unison mixtures. Based on the analysis, we propose a system for synthesising a unison signal from an a cappella input and a single voice prototype representative of a unison mixture. We use subjective listening tests to evaluate perceptual factors of our proposed system for synthesis, including quality, adherence to the melody, as well as the degree of perceived unison.
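The abstract describes analysing the F0 distributions of individual singers in unison mixtures, where slight pitch deviations between singers contribute to the perceived "unison". As a minimal, hypothetical sketch (not the paper's own code), the per-frame pitch deviation of a singer's F0 track from a reference melody can be expressed in cents:

```python
import numpy as np

def f0_deviation_cents(f0_ref, f0_singer):
    """Per-frame pitch deviation (in cents) of a singer's F0 track
    from a reference melody track of the same length.
    Frames where either track is unvoiced (F0 == 0) are discarded."""
    f0_ref = np.asarray(f0_ref, dtype=float)
    f0_singer = np.asarray(f0_singer, dtype=float)
    voiced = (f0_ref > 0) & (f0_singer > 0)
    # 1200 cents per octave; log2 of the frequency ratio gives octaves
    return 1200.0 * np.log2(f0_singer[voiced] / f0_ref[voiced])

# Example: three frames; the singer is exactly on pitch, then one
# semitone (100 cents) sharp, then the reference frame is unvoiced
ref = [220.0, 220.0, 0.0]
sung = [220.0, 220.0 * 2 ** (100 / 1200), 230.0]
dev = f0_deviation_cents(ref, sung)  # two voiced frames survive
```

Statistics of such deviations across singers (e.g. their per-frame spread) are one way to characterise the F0 distribution of a unison mixture; the frame rate and F0 extraction method are left unspecified here.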
- dc.description.sponsorship The TITANX used for this research was donated by the NVIDIA Corporation. This work is partially supported by the Towards Richer Online Music Public-domain Archives (TROMPA H2020 770376) project. Helena Cuesta is supported by the FI Predoctoral Grant from AGAUR (Generalitat de Catalunya).
- dc.format.mimetype application/pdf
- dc.identifier.citation Chandna P, Cuesta H, Gómez E. A deep learning based analysis-synthesis framework for unison singing. In: Cumming J, Ha Lee J, McFee B, Schedl M, Devaney J, McKay C, Zangerle E, de Reuse T, editors. Proceedings of the 21st International Society for Music Information Retrieval Conference; 2020 Oct 11-16; Montréal, Canada. [Canada]: ISMIR; 2020. p. 598-604.
- dc.identifier.uri http://hdl.handle.net/10230/45711
- dc.language.iso eng
- dc.publisher International Society for Music Information Retrieval (ISMIR)
- dc.relation.ispartof Cumming J, Ha Lee J, McFee B, Schedl M, Devaney J, McKay C, Zangerle E, de Reuse T, editors. Proceedings of the 21st International Society for Music Information Retrieval Conference; 2020 Oct 11-16; Montréal, Canada. [Canada]: ISMIR; 2020. p. 598-604
- dc.relation.projectID info:eu-repo/grantAgreement/EC/H2020/770376
- dc.rights © P. Chandna, H. Cuesta and E. Gómez. Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Attribution: P. Chandna, H. Cuesta and E. Gómez, “A Deep Learning Based Analysis-Synthesis Framework For Unison Singing”, in Proc. of the 21st Int. Society for Music Information Retrieval Conf., Montréal, Canada, 2020.
- dc.rights.accessRights info:eu-repo/semantics/openAccess
- dc.rights.uri https://creativecommons.org/licenses/by/4.0/
- dc.title A deep learning based analysis-synthesis framework for unison singing
- dc.type info:eu-repo/semantics/conferenceObject
- dc.type.version info:eu-repo/semantics/publishedVersion