dc.contributor.author | Chandna, Pritish
dc.contributor.author | Blaauw, Merlijn
dc.contributor.author | Bonada, Jordi, 1973-
dc.contributor.author | Gómez Gutiérrez, Emilia, 1975-
dc.date.accessioned | 2020-01-24T08:24:31Z
dc.date.available | 2020-01-24T08:24:31Z
dc.date.issued | 2019
dc.identifier.citation | Chandna P, Blaauw M, Bonada J, Gomez E. A Vocoder based method for singing voice extraction. In: 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 2019 May 12-17; Brighton, United Kingdom. New Jersey: Institute of Electrical and Electronics Engineers; 2019. p. 990-4. DOI: 10.1109/ICASSP.2019.8683323
dc.identifier.isbn | 978-1-4799-8131-1
dc.identifier.issn | 2379-190X
dc.identifier.uri | http://hdl.handle.net/10230/43404
dc.description | Paper presented at: 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), held 12-17 May 2019 in Brighton, United Kingdom.
dc.description.abstract | This paper presents a novel method for extracting the vocal track from a musical mixture. The musical mixture consists of a singing voice and a backing track which may comprise various instruments. We use a convolutional network with skip and residual connections as well as dilated convolutions to estimate vocoder parameters, given the spectrogram of an input mixture. The estimated parameters are then used to synthesize the vocal track, without any interference from the backing track. We evaluate our system through objective metrics pertinent to audio quality and interference from background sources, and via a comparative subjective evaluation. We use open-source source separation systems based on Non-negative Matrix Factorization (NMF) and deep learning methods as benchmarks for our system and discuss future applications for this particular algorithm.
dc.description.sponsorship | The TITAN X GPU used for this research was donated by the NVIDIA Corporation. This work is partially supported by the Towards Richer Online Music Public-domain Archives (TROMPA) project.
dc.format.mimetype | application/pdf
dc.language.iso | eng
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.ispartof | 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 2019 May 12-17; Brighton, United Kingdom. New Jersey: Institute of Electrical and Electronics Engineers; 2019.
dc.rights | © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. http://dx.doi.org/10.1109/ICASSP.2019.8683323
dc.title | A Vocoder based method for singing voice extraction
dc.type | info:eu-repo/semantics/conferenceObject
dc.identifier.doi | http://dx.doi.org/10.1109/ICASSP.2019.8683323
dc.subject.keyword | Source separation
dc.subject.keyword | Deep learning
dc.subject.keyword | Convolutional neural networks
dc.subject.keyword | Vocoder
dc.relation.projectID | info:eu-repo/grantAgreement/EC/H2020/770376
dc.rights.accessRights | info:eu-repo/semantics/openAccess
dc.type.version | info:eu-repo/semantics/acceptedVersion
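
Note on the architecture described in the abstract: the paper's method maps a mixture spectrogram to vocoder parameters with a convolutional network using dilated convolutions plus residual and skip connections. The sketch below is a minimal, hypothetical illustration of that general pattern, not the authors' implementation; the use of PyTorch, the layer counts, channel sizes, spectrogram bin count and number of vocoder parameters are all assumptions made for illustration.

# Minimal sketch (assumed PyTorch, hypothetical dimensions) of a dilated
# convolutional network with residual and skip connections that maps a
# mixture spectrogram to per-frame vocoder parameters.
import torch
import torch.nn as nn

class DilatedResBlock(nn.Module):
    def __init__(self, channels, dilation):
        super().__init__()
        # 1-D convolution over time; padding keeps the frame count unchanged
        self.conv = nn.Conv1d(channels, channels, kernel_size=3,
                              dilation=dilation, padding=dilation)
        self.act = nn.ReLU()

    def forward(self, x):
        h = self.act(self.conv(x))
        return x + h, h          # residual output, skip contribution

class SpectrogramToVocoder(nn.Module):
    def __init__(self, n_spec_bins=513, n_vocoder_params=64, channels=128):
        super().__init__()
        self.inp = nn.Conv1d(n_spec_bins, channels, kernel_size=1)
        # exponentially growing dilations enlarge the temporal receptive field
        self.blocks = nn.ModuleList(
            [DilatedResBlock(channels, d) for d in (1, 2, 4, 8, 16)])
        self.out = nn.Conv1d(channels, n_vocoder_params, kernel_size=1)

    def forward(self, spec):     # spec: (batch, n_spec_bins, n_frames)
        x = self.inp(spec)
        skips = 0
        for block in self.blocks:
            x, s = block(x)
            skips = skips + s    # sum skip connections from every block
        return self.out(torch.relu(skips))   # per-frame vocoder parameters

# Usage: a batch with one mixture of 200 spectrogram frames and 513 bins
model = SpectrogramToVocoder()
mixture_spec = torch.randn(1, 513, 200)
vocoder_params = model(mixture_spec)          # shape: (1, 64, 200)

The estimated parameters would then be fed to a vocoder to resynthesize the vocal track, as the abstract describes; which vocoder and which parameterization the paper uses is not stated in this record.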