End-to-end learning for music audio tagging at scale

Citation

  • Pons J, Nieto O, Prockup M, Schmidt EM, Ehmann AF, Serra X. End-to-end learning for music audio tagging at scale. In: Gómez E, Hu X, Humphrey E, Benetos E, editors. Proceedings of the 19th International Society for Music Information Retrieval Conference, ISMIR 2018; 2018 Sep 23-27; Paris, France. p. 637-44.

Permanent link

Description

  • Abstract

    The lack of data tends to limit the outcomes of deep learning research, particularly when dealing with end-to-end learning stacks processing raw data such as waveforms. In this study, 1.2M tracks annotated with musical labels are available to train our end-to-end models. This large amount of data allows us to unrestrictedly explore two different design paradigms for music auto-tagging: assumption-free models, which use waveforms as input with very small convolutional filters; and models that rely on domain knowledge, which use log-mel spectrograms with a convolutional neural network designed to learn timbral and temporal features. Our work focuses on studying how these two types of deep architectures perform when datasets of variable size are available for training: MagnaTagATune (25k songs), the Million Song Dataset (240k songs), and a private dataset of 1.2M songs. Our experiments suggest that music domain assumptions are relevant when not enough training data are available, while waveform-based models outperform spectrogram-based ones in large-scale data scenarios. (An illustrative sketch of the two front-ends follows this record.)
  • Description

    Paper presented at the 19th International Society for Music Information Retrieval Conference (ISMIR 2018), held September 23-27, 2018, in Paris, France.
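
A rough, non-authoritative sketch of the two design paradigms named in the abstract: a waveform front-end built from very small 1D convolutional filters, and a log-mel spectrogram front-end whose vertical and horizontal filters target timbral and temporal patterns. This is a minimal PyTorch sketch under assumed sizes (filter counts, kernel shapes, pooling); the architectures in the paper are deeper and differ in detail.

    import torch
    import torch.nn as nn

    class WaveformFrontEnd(nn.Module):
        """Assumption-free paradigm: raw waveform input, very small 1D filters."""

        def __init__(self, n_filters: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                # A first strided convolution frames the raw signal.
                nn.Conv1d(1, n_filters, kernel_size=3, stride=3),
                nn.BatchNorm1d(n_filters),
                nn.ReLU(),
                # Very small (size-3) filters, "sample-level" style.
                nn.Conv1d(n_filters, n_filters, kernel_size=3, padding=1),
                nn.BatchNorm1d(n_filters),
                nn.ReLU(),
                nn.MaxPool1d(3),
            )

        def forward(self, wave: torch.Tensor) -> torch.Tensor:
            # wave: (batch, 1, samples)
            return self.net(wave)

    class SpectrogramFrontEnd(nn.Module):
        """Domain-knowledge paradigm: log-mel input with filter shapes chosen
        to capture timbral (frequency-spanning) and temporal (time-spanning)
        patterns. All shapes here are illustrative assumptions."""

        def __init__(self, n_mels: int = 96, n_filters: int = 32):
            super().__init__()
            # Vertical filters span most of the mel axis -> spectral envelope / timbre.
            self.timbral = nn.Conv2d(1, n_filters, kernel_size=(int(0.9 * n_mels), 7))
            # Horizontal filters span many frames of the energy envelope -> rhythm/tempo.
            self.temporal = nn.Conv2d(1, n_filters, kernel_size=(1, 64))

        def forward(self, spec: torch.Tensor) -> torch.Tensor:
            # spec: (batch, 1, n_mels, frames)
            timbral = self.timbral(spec).max(dim=2).values    # pool out frequency
            envelope = spec.mean(dim=2, keepdim=True)         # per-frame energy
            temporal = self.temporal(envelope).squeeze(2)
            # Align lengths and stack both feature maps along channels.
            n = min(timbral.shape[-1], temporal.shape[-1])
            return torch.cat([timbral[..., :n], temporal[..., :n]], dim=1)

    if __name__ == "__main__":
        wave = torch.randn(2, 1, 3 * 16000)   # two 3-second clips (16 kHz assumed)
        spec = torch.randn(2, 1, 96, 187)     # two log-mel patches, 96 bands
        print(WaveformFrontEnd()(wave).shape)      # torch.Size([2, 64, 5333])
        print(SpectrogramFrontEnd()(spec).shape)   # torch.Size([2, 64, 124])

The intent behind the shapes: tall filters covering most of the mel axis respond to the spectral envelope (timbre), while wide single-row filters over an averaged energy envelope respond to rhythmic cues. The waveform model instead learns its own frequency decomposition from data, which, per the abstract, pays off in the large-scale (1.2M-song) regime.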