Multiple F0 estimation in vocal ensembles using convolutional neural networks


  • dc.contributor.author Cuesta, Helena
  • dc.contributor.author McFee, Brian
  • dc.contributor.author Gómez Gutiérrez, Emilia, 1975-
  • dc.date.accessioned 2020-11-11T07:23:55Z
  • dc.date.available 2020-11-11T07:23:55Z
  • dc.date.issued 2020
  • dc.description Paper presented at: International Society for Music Information Retrieval Conference, held virtually from October 11 to 16, 2020.
  • dc.description.abstract This paper addresses the extraction of multiple F0 values from polyphonic and a cappella vocal performances using convolutional neural networks (CNNs). We address the major challenges of ensemble singing, i.e., all melodic sources are vocals and singers sing in harmony. We build upon an existing architecture to produce a pitch salience function of the input signal, where the harmonic constant-Q transform (HCQT) and its associated phase differentials are used as an input representation. The pitch salience function is subsequently thresholded to obtain a multiple F0 estimation output. For training, we build a dataset that comprises several multi-track datasets of vocal quartets with F0 annotations. This work proposes and evaluates a set of CNNs for this task in diverse scenarios and data configurations, including recordings with additional reverb. Our models outperform a state-of-the-art method intended for the same music genre when evaluated with an increased F0 resolution, as well as a general-purpose method for multi-F0 estimation. We conclude with a discussion on future research directions.
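The abstract describes thresholding a pitch salience function to obtain the multiple F0 output. As a loose illustration only (this is not the paper's actual post-processing; the function name, threshold value, and per-frame peak-picking rule are all assumptions), the step can be sketched as:

```python
import numpy as np

def salience_to_multif0(salience, freqs, threshold=0.3):
    """Convert a pitch salience map (freq_bins x frames) into per-frame
    multiple-F0 estimates: keep frequency bins that are local maxima
    along the frequency axis and exceed the threshold.

    The threshold of 0.3 is illustrative, not a tuned value."""
    estimates = []
    for frame in salience.T:  # iterate over time frames
        peaks = np.flatnonzero(
            (frame > threshold)
            & (frame >= np.roll(frame, 1))   # >= left neighbour
            & (frame >= np.roll(frame, -1))  # >= right neighbour
        )
        estimates.append(freqs[peaks])
    return estimates
```

For example, a frame whose salience has two clear peaks above the threshold yields two simultaneous F0 values, matching the multi-voice scenario the paper targets.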
  • dc.description.sponsorship The authors would like to thank Rodrigo Schramm and Emmanouil Benetos for sharing the BSQ and BC datasets for this research. Helena Cuesta is supported by the FI Predoctoral Grant from AGAUR (Generalitat de Catalunya). This work is partially supported by the European Commission under the TROMPA project (H2020 770376) and MARL-NYU (as part of a two-month research stay).
  • dc.format.mimetype application/pdf
  • dc.identifier.citation Cuesta H, McFee B, Gómez E. Multiple F0 estimation in vocal ensembles using convolutional neural networks. In: Cumming J, Ha Lee J, McFee B, Schedl M, Devaney J, McKay C, Zangerle E, de Reuse T, editors. Proceedings of the 21st International Society for Music Information Retrieval Conference; 2020 Oct 11-16; Montréal, Canada. [Canada]: ISMIR; 2020. p. 302-9.
  • dc.identifier.uri http://hdl.handle.net/10230/45712
  • dc.language.iso eng
  • dc.publisher International Society for Music Information Retrieval (ISMIR)
  • dc.relation.ispartof Cumming J, Ha Lee J, McFee B, Schedl M, Devaney J, McKay C, Zangerle E, de Reuse T, editors. Proceedings of the 21st International Society for Music Information Retrieval Conference; 2020 Oct 11-16; Montréal, Canada. [Canada]: ISMIR; 2020. p. 302-9
  • dc.relation.projectID info:eu-repo/grantAgreement/EC/H2020/770376
  • dc.rights © H. Cuesta, B. McFee, and E. Gómez. Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Attribution: H. Cuesta, B. McFee, and E. Gómez, “Multiple F0 Estimation in Vocal Ensembles using Convolutional Neural Networks”, in Proc. of the 21st Int. Society for Music Information Retrieval Conf., Montréal, Canada, 2020.
  • dc.rights.accessRights info:eu-repo/semantics/openAccess
  • dc.rights.uri https://creativecommons.org/licenses/by/4.0/
  • dc.title Multiple F0 estimation in vocal ensembles using convolutional neural networks
  • dc.type info:eu-repo/semantics/conferenceObject
  • dc.type.version info:eu-repo/semantics/publishedVersion