A diffusion-inspired training strategy for singing voice extraction in the waveform domain
Abstract
Notable progress in music source separation has been achieved using multi-branch networks that operate on both the temporal and spectral domains. However, such networks tend to be complex and heavyweight. In this work, we tackle the task of singing voice extraction from polyphonic music signals in an end-to-end manner, using an approach inspired by the training procedure of denoising diffusion models. We perform unconditional signal modelling to gradually convert an input mixture signal into the corresponding singing voice or accompaniment. Our model uses fewer parameters than state-of-the-art models while operating in the waveform domain, bypassing phase-related problems. More concretely, we train a non-causal WaveNet using a diffusion-inspired strategy, improving the network's singing voice extraction and achieving performance comparable to the end-to-end state of the art on MUSDB18. We further report results on a non-MUSDB-overlapping version of MedleyDB and on the multi-track audio of the Saraga Carnatic dataset, showing good generalization, and we conduct perceptual tests of our approach. Code, models, and audio examples are made available.
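For illustration, the sketch below gives one plausible PyTorch reading of such a diffusion-inspired training strategy: intermediate signals are linear blends of the mixture and the target vocals, and the network learns to map each blend to the one a step closer to the vocals, so that repeated application at inference time gradually converts the mixture into the singing voice. Every name here (TinyWaveModel, diffusion_inspired_step, the linear schedule, the L1 loss, the step count) is an illustrative assumption, not the paper's actual configuration; the authors' released code defines the real procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyWaveModel(nn.Module):
    """Stand-in for the paper's non-causal WaveNet (hypothetical placeholder)."""

    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(channels, 1, kernel_size=9, padding=4),
        )

    def forward(self, x):
        return self.net(x)


def diffusion_inspired_step(model, mixture, vocals, num_steps=8):
    """One training step (assumed scheme, not the paper's exact recipe).

    Blend mixture and vocals at a random step t, and supervise the model
    to output the blend one step closer to the vocals.
    """
    b = mixture.shape[0]
    t = torch.randint(1, num_steps + 1, (b,), device=mixture.device)
    a_t = (t.float() / num_steps).view(b, 1, 1)           # 1.0 -> pure mixture
    a_prev = ((t - 1).float() / num_steps).view(b, 1, 1)  # 0.0 -> pure vocals
    x_t = a_t * mixture + (1 - a_t) * vocals              # current input
    target = a_prev * mixture + (1 - a_prev) * vocals     # one step closer
    return F.l1_loss(model(x_t), target)


@torch.no_grad()
def extract_vocals(model, mixture, num_steps=8):
    """Inference: repeatedly push the signal toward the vocals estimate."""
    x = mixture
    for _ in range(num_steps):
        x = model(x)
    return x


# Usage with random tensors of shape (batch, channels, samples):
model = TinyWaveModel()
mixture, vocals = torch.randn(4, 1, 16384), torch.randn(4, 1, 16384)
loss = diffusion_inspired_step(model, mixture, vocals)
loss.backward()
```

Note that the same scheme trains an accompaniment extractor by swapping the roles of vocals and accompaniment as the endpoint of the interpolation; the actual schedule and loss are specified in the released code.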
Description
This work was accepted at the 23rd International Society for Music Information Retrieval Conference (ISMIR 2022), held in Bengaluru, India, December 4-8, 2022.