dc.contributor.author |
Chandna, Pritish |
dc.contributor.author |
Cuesta, Helena |
dc.contributor.author |
Petermann, Darius |
dc.contributor.author |
Gómez Gutiérrez, Emilia, 1975- |
dc.date.accessioned |
2023-03-15T14:15:45Z |
dc.date.available |
2023-03-15T14:15:45Z |
dc.date.issued |
2022 |
dc.identifier.citation |
Chandna P, Cuesta H, Petermann D, Gómez E. A deep-learning based framework for source separation, analysis, and synthesis of choral ensembles. Front Signal Process. 2022;2:808594. DOI: 10.3389/frsip.2022.808594 |
dc.identifier.issn |
2673-8198 |
dc.identifier.uri |
http://hdl.handle.net/10230/56243 |
dc.description.abstract |
Choral singing in the soprano, alto, tenor, and bass (SATB) format is a widely practiced and studied art form with significant cultural importance. Despite its popularity, the choral setting has received little attention in the field of Music Information Retrieval. However, the recent publication of high-quality choral singing datasets, along with developments in deep-learning-based methodologies for music and speech processing, has opened new avenues for research in this field. In this paper, we use publicly available choral singing datasets to train and evaluate state-of-the-art source separation algorithms from the speech and music domains for the case of choral singing. Furthermore, we evaluate existing monophonic F0 estimators on the separated unison stems and propose an approximation of the perceived F0 of a unison signal. Additionally, we present a set of applications combining the proposed methodologies, including synthesizing a single singer voice from the unison, and transposing and remixing the separated stems into a synthetic multi-singer choral signal. Finally, we conduct a set of listening tests to perceptually evaluate the results obtained with the proposed methodologies. |
dc.description.sponsorship |
This work is partially supported by the European Commission under the TROMPA project (H2020 770376), and the project Musical AI (PID2019-111403GB-I00/AEI/10.13039/501100011033) funded by the Spanish Ministerio de Ciencia, Innovación y Universidades (MCIU) and the Agencia Estatal de Investigación (AEI). |
dc.format.mimetype |
application/pdf |
dc.language.iso |
eng |
dc.publisher |
Frontiers |
dc.relation.ispartof |
Frontiers in Signal Processing. 2022;2:808594. |
dc.relation.isreferencedby |
https://www.frontiersin.org/articles/10.3389/frsip.2022.808594/full#supplementary-material |
dc.relation.isreferencedby |
https://zenodo.org/record/1286485 |
dc.relation.isreferencedby |
https://zenodo.org/record/3897181 |
dc.relation.isreferencedby |
https://zenodo.org/record/5848989 |
dc.relation.isreferencedby |
https://zenodo.org/record/5878677 |
dc.rights |
© 2022 Chandna, Cuesta, Petermann and Gómez. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
dc.rights.uri |
https://creativecommons.org/licenses/by/4.0/ |
dc.title |
A deep-learning based framework for source separation, analysis, and synthesis of choral ensembles |
dc.type |
info:eu-repo/semantics/article |
dc.identifier.doi |
http://dx.doi.org/10.3389/frsip.2022.808594 |
dc.subject.keyword |
audio signal processing |
dc.subject.keyword |
deep learning |
dc.subject.keyword |
choral singing |
dc.subject.keyword |
source separation |
dc.subject.keyword |
unison |
dc.subject.keyword |
singing synthesis |
dc.relation.projectID |
info:eu-repo/grantAgreement/EC/H2020/770376 |
dc.relation.projectID |
info:eu-repo/grantAgreement/ES/2PE/PID2019-111403GB-I00 |
dc.rights.accessRights |
info:eu-repo/semantics/openAccess |
dc.type.version |
info:eu-repo/semantics/publishedVersion |