Content based singing voice extraction from a musical mixture

Citation

  • Chandna P, Blaauw M, Bonada J, Gómez E. Content based singing voice extraction from a musical mixture. In: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP); 2020 May 4-8; Barcelona, Spain. New Jersey: The Institute of Electrical and Electronics Engineers; 2020. p. 781-85. DOI: 10.1109/ICASSP40776.2020.9053024

Permanent link

Description

  • Abstract

    We present a deep learning based methodology for extracting the singing voice signal from a musical mixture based on the underlying linguistic content. Our model follows an encoder-decoder architecture and takes as input the magnitude component of the spectrogram of a musical mixture with vocals. The encoder part of the model is trained via knowledge distillation using a teacher network to learn a content embedding, which is decoded to generate the corresponding vocoder features. Using this methodology, we are able to extract the unprocessed raw vocal signal from the mixture even for a processed mixture dataset with singers not seen during training. While the nature of our system makes it incongruous with traditional objective evaluation metrics, we use subjective evaluation via listening tests to compare the methodology to state-of-the-art deep learning based source separation algorithms. We also provide sound examples and source code for reproducibility.
  • Description

    Paper presented at: ICASSP 2020 IEEE International Conference on Acoustics, Speech and Signal Processing, held online from May 4 to 8, 2020.
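The abstract describes an encoder that maps the mixture's magnitude spectrogram to a content embedding (trained by matching a teacher network's embedding, i.e. knowledge distillation) and a decoder that maps that embedding to vocoder features. The sketch below is not the authors' code: the layer shapes, the single linear maps, and the MSE distillation loss are illustrative assumptions that only show how the pieces fit together.

```python
# Hypothetical numpy sketch of the training signal described in the abstract:
# encoder (mixture frames -> content embedding), distillation loss against a
# teacher embedding, decoder (embedding -> vocoder features).
# All dimensions and the linear layers themselves are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_BINS = 513   # magnitude spectrogram bins (assumed)
EMB_DIM = 64   # content embedding size (assumed)
VOC_DIM = 60   # vocoder feature size (assumed)

# Student encoder/decoder, reduced to single linear maps for illustration.
W_enc = rng.standard_normal((N_BINS, EMB_DIM)) * 0.01
W_dec = rng.standard_normal((EMB_DIM, VOC_DIM)) * 0.01

def encode(mix_frames):
    """Mixture magnitude frames -> content embeddings."""
    return mix_frames @ W_enc

def decode(embeddings):
    """Content embeddings -> vocoder features."""
    return embeddings @ W_dec

def distillation_loss(student_emb, teacher_emb):
    """MSE between student and teacher content embeddings."""
    return float(np.mean((student_emb - teacher_emb) ** 2))

# Toy batch: 10 mixture spectrogram frames, plus a stand-in teacher embedding
# (in the paper this comes from a pretrained teacher network).
mix = np.abs(rng.standard_normal((10, N_BINS)))
teacher_emb = rng.standard_normal((10, EMB_DIM))

emb = encode(mix)
voc = decode(emb)
loss = distillation_loss(emb, teacher_emb)
print(voc.shape, loss >= 0.0)
```

The key design point this illustrates is that the separation target is not a spectrogram mask: the encoder is supervised only through the content embedding, and the raw vocal is then resynthesized from the decoded vocoder features.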