A deep learning based analysis-synthesis framework for unison singing

dc.contributor.authorChandna, Pritish
dc.contributor.authorCuesta, Helena
dc.contributor.authorGómez Gutiérrez, Emilia, 1975-
dc.date.accessioned2020-11-11T07:23:50Z
dc.date.available2020-11-11T07:23:50Z
dc.date.issued2020
dc.descriptionPaper presented at: International Society for Music Information Retrieval Conference, held virtually from October 11 to 16, 2020.
dc.description.abstractUnison singing is the name given to an ensemble of singers simultaneously singing the same melody and lyrics. While each individual singer in a unison sings the same principal melody, there are slight timing and pitch deviations between the singers, which, along with the ensemble of timbres, give the listener a perceived sense of "unison". In this paper, we present a study of unison singing in the context of choirs; utilising some recently proposed deep-learning based methodologies, we analyse the fundamental frequency (F0) distribution of the individual singers in recordings of unison mixtures. Based on this analysis, we propose a system for synthesising a unison signal from an a cappella input and a single voice prototype representative of a unison mixture. We use subjective listening tests to evaluate perceptual factors of our proposed system for synthesis, including quality, adherence to the melody, as well as the degree of perceived unison.
dc.description.sponsorshipThe TITANX used for this research was donated by the NVIDIA Corporation. This work is partially supported by the Towards Richer Online Music Public-domain Archives (TROMPA H2020 770376) project. Helena Cuesta is supported by the FI Predoctoral Grant from AGAUR (Generalitat de Catalunya).
dc.format.mimetypeapplication/pdf
dc.identifier.citationChandna P, Cuesta H, Gómez E. A deep learning based analysis-synthesis framework for unison singing. In: Cumming J, Ha Lee J, McFee B, Schedl M, Devaney J, McKay C, Zangerle E, de Reuse T, editors. Proceedings of the 21st International Society for Music Information Retrieval Conference; 2020 Oct 11-16; Montréal, Canada. [Canada]: ISMIR; 2020. p. 598-604.
dc.identifier.urihttp://hdl.handle.net/10230/45711
dc.language.isoeng
dc.publisherInternational Society for Music Information Retrieval (ISMIR)
dc.relation.ispartofCumming J, Ha Lee J, McFee B, Schedl M, Devaney J, McKay C, Zangerle E, de Reuse T, editors. Proceedings of the 21st International Society for Music Information Retrieval Conference; 2020 Oct 11-16; Montréal, Canada. [Canada]: ISMIR; 2020. p. 598-604
dc.relation.projectIDinfo:eu-repo/grantAgreement/EC/H2020/770376
dc.rights© P. Chandna, H. Cuesta and E. Gómez. Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Attribution: P. Chandna, H. Cuesta and E. Gómez, “A Deep Learning Based Analysis-Synthesis Framework For Unison Singing”, in Proc. of the 21st Int. Society for Music Information Retrieval Conf., Montréal, Canada, 2020.
dc.rights.accessRightsinfo:eu-repo/semantics/openAccess
dc.rights.urihttps://creativecommons.org/licenses/by/4.0/
dc.titleA deep learning based analysis-synthesis framework for unison singing
dc.typeinfo:eu-repo/semantics/conferenceObject
dc.type.versioninfo:eu-repo/semantics/publishedVersion

Files

Original bundle

Name: chandna_ismir_deep.pdf
Size: 1.2 MB
Format: Adobe Portable Document Format
