dc.contributor.author | Blaauw, Merlijn
dc.contributor.author | Bonada, Jordi, 1973-
dc.date.accessioned | 2018-12-04T09:28:52Z
dc.date.available | 2018-12-04T09:28:52Z
dc.date.issued | 2017
dc.identifier.citation | Blaauw M, Bonada J. A neural parametric singing synthesizer. In: Proceedings of the 18th Annual Conference of the International Speech Communication Association (INTERSPEECH 2017); 2017 Aug. 20-24; Stockholm, Sweden. [Baixas]: ISCA; 2017. p. 4001-5. DOI: 10.21437/Interspeech.2017-1420
dc.identifier.issn | 1990-9772
dc.identifier.uri | http://hdl.handle.net/10230/35951
dc.description | Paper and poster presented at Interspeech 2017, held 20-24 August in Stockholm, Sweden.
dc.description.abstract | We present a new model for singing synthesis based on a modified version of the WaveNet architecture. Instead of modeling raw waveform, we model features produced by a parametric vocoder that separates the influence of pitch and timbre. This allows conveniently modifying pitch to match any target melody, facilitates training on more modest dataset sizes, and significantly reduces training and generation times. Our model makes frame-wise predictions using mixture density outputs rather than categorical outputs in order to reduce the required parameter count. As we found overfitting to be an issue with the relatively small datasets used in our experiments, we propose a method to regularize the model and make the autoregressive generation process more robust to prediction errors. Using a simple multi-stream architecture, harmonic, aperiodic and voiced/unvoiced components can all be predicted in a coherent manner. We compare our method to existing parametric statistical and state-of-the-art concatenative methods using quantitative metrics and a listening test. While naive implementations of the autoregressive generation algorithm tend to be inefficient, using a smart algorithm we can greatly speed up the process and obtain a system that’s competitive in both speed and quality.
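A central choice in the abstract is replacing WaveNet's per-sample categorical outputs with frame-wise mixture density outputs over vocoder features, which cuts the required parameter count. The following is a minimal PyTorch sketch of that general idea, not the authors' implementation; the class name, layer sizes, and the 60-dimensional feature frame are all illustrative assumptions.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureDensityHead(nn.Module):
    """Hypothetical head mapping a hidden state to per-dimension Gaussian mixture parameters."""

    def __init__(self, hidden_dim: int, feature_dim: int, n_components: int = 4):
        super().__init__()
        self.feature_dim = feature_dim
        self.n_components = n_components
        # One (logit, mean, log-std) triple per feature dimension and mixture component.
        self.proj = nn.Linear(hidden_dim, 3 * feature_dim * n_components)

    def forward(self, h: torch.Tensor):
        # h: (batch, hidden_dim) -> params: (batch, feature_dim, n_components, 3)
        params = self.proj(h).view(-1, self.feature_dim, self.n_components, 3)
        log_w = F.log_softmax(params[..., 0], dim=-1)   # mixture log-weights
        mu = params[..., 1]                             # component means
        log_sigma = params[..., 2].clamp(-7.0, 7.0)     # log std, clamped for stability
        return log_w, mu, log_sigma

def mdn_nll(log_w, mu, log_sigma, target):
    """Negative log-likelihood of target frames under the predicted mixtures."""
    t = target.unsqueeze(-1)  # (batch, feature_dim, 1) broadcasts over components
    log_prob = (-0.5 * ((t - mu) / log_sigma.exp()) ** 2
                - log_sigma - 0.5 * math.log(2 * math.pi))
    return -torch.logsumexp(log_w + log_prob, dim=-1).mean()

# Usage with random stand-ins for hidden states and vocoder feature frames.
head = MixtureDensityHead(hidden_dim=128, feature_dim=60, n_components=4)
h = torch.randn(8, 128)       # hidden states for 8 frames
target = torch.randn(8, 60)   # e.g. 60 harmonic-spectrum coefficients per frame
loss = mdn_nll(*head(h), target)
loss.backward()

Because each feature dimension needs only a handful of mixture parameters per frame, this kind of output layer is far smaller than a 256-way softmax per waveform sample, which is consistent with the abstract's parameter-count motivation.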
dc.description.sponsorship | We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research. We also thank Zya for providing the English datasets. Voctro Labs provided the Spanish dataset and the implementation of the fast generation algorithm. This work is partially supported by the Spanish Ministry of Economy and Competitiveness under the CASAS project (TIN2015-70816-R).
dc.format.mimetype | application/pdf
dc.language.iso | eng
dc.publisher | International Speech Communication Association (ISCA)
dc.relation.ispartof | Proceedings of the 18th Annual Conference of the International Speech Communication Association (INTERSPEECH 2017); 2017 Aug. 20-24; Stockholm, Sweden. [Baixas]: ISCA; 2017.
dc.rights | © 2017 ISCA
dc.title | A neural parametric singing synthesizer
dc.type | info:eu-repo/semantics/conferenceObject
dc.identifier.doi | http://dx.doi.org/10.21437/Interspeech.2017-1420
dc.relation.projectID | info:eu-repo/grantAgreement/ES/1PE/TIN2015-70816-R
dc.rights.accessRights | info:eu-repo/semantics/openAccess
dc.type.version | info:eu-repo/semantics/publishedVersion