A diffusion-inspired training strategy for singing voice extraction in the waveform domain

Citation

  • Plaja-Roglans G, Miron M, Serra X. A diffusion-inspired training strategy for singing voice extraction in the waveform domain. In: Rao P, Murthy H, Srinivasamurthy A, Bittner R, Caro Repetto R, Goto M, Serra X, Miron M, editors. Proceedings of the 23rd International Society for Music Information Retrieval Conference (ISMIR 2022); 2022 Dec 4-8; Bengaluru, India. [Canada]: International Society for Music Information Retrieval; 2022. p. 685-93. DOI: 10.5281/zenodo.7316754

Description

  • Abstract

    Notable progress in music source separation has been achieved using multi-branch networks that operate on both temporal and spectral domains. However, such networks tend to be complex and heavyweight. In this work, we tackle the task of singing voice extraction from polyphonic music signals in an end-to-end manner using an approach inspired by the training and sampling process of denoising diffusion models. We perform unconditional signal modelling to gradually convert an input mixture signal to the corresponding singing voice or accompaniment. We use fewer parameters than the state-of-the-art models while operating in the waveform domain, bypassing the phase estimation problem. Concretely, we train a non-causal WaveNet using a diffusion-inspired strategy while improving said network for singing voice extraction, obtaining performance comparable to the end-to-end state-of-the-art on MUSDB18. We further report results on a non-MUSDB-overlapping version of MedleyDB and the multi-track audio of Saraga Carnatic, showing good generalization, and run perceptual tests of our approach. Code, models, and audio examples are made available.
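    The gradual mixture-to-voice conversion described in the abstract can be sketched as an iterative refinement loop over intermediate signals that interpolate between the mixture and the target source. The sketch below is a minimal illustration under assumed conventions: the linear `make_schedule`, the blending in `intermediate_signal`, and the `sample` loop are hypothetical simplifications, not the paper's actual schedule or network.

    ```python
    import numpy as np

    def make_schedule(num_steps):
        # Hypothetical linear schedule of mixing weights:
        # alpha=1.0 corresponds to the pure mixture, alpha=0.0 to the target source.
        return np.linspace(0.0, 1.0, num_steps + 1)

    def intermediate_signal(mixture, target, alpha):
        # Blend the mixture waveform with the target source waveform.
        # Training pairs each intermediate step with the next, less-mixed one.
        return alpha * mixture + (1.0 - alpha) * target

    def sample(model, mixture, alphas):
        # Diffusion-inspired sampling: start from the input mixture and let the
        # model iteratively refine it toward the isolated source, one step at a time.
        x = mixture.copy()
        for t in range(len(alphas) - 1, 0, -1):
            # The model maps the signal at step t to the signal at step t-1.
            x = model(x, t)
        return x
    ```

    In this framing, the separation network never sees a conditioning signal: it is trained to undo one interpolation step at a time, so chaining the steps at inference converts the mixture into the singing voice (or the accompaniment) directly in the waveform domain.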
  • Description

    Paper presented at the 23rd International Society for Music Information Retrieval Conference (ISMIR 2022), held December 4-8, 2022, in Bangalore, India.