Leveraging pre-trained autoencoders for interpretable prototype learning of music audio

Citation

  • Alonso-Jiménez P, Pepino L, Batlle-Roca R, Zinemanas P, Bogdanov D, Serra X, Rocamora M. Leveraging pre-trained autoencoders for interpretable prototype learning of music audio. Paper presented at: ICASSP Workshop on Explainable AI for Speech and Audio (XAI-SA); 2024 Apr 15; Seoul, Korea.

Description

  • Abstract

    We present PECMAE, an interpretable model for music audio classification based on prototype learning. Our model builds on a previous method, APNet, which jointly learns an autoencoder and a prototypical network. In contrast, we propose decoupling the two training processes. This enables us to leverage existing self-supervised autoencoders pre-trained on much larger data (EnCodecMAE), providing representations with better generalization. APNet allows the reconstruction of prototypes to waveforms for interpretability by relying on the nearest training data samples. Instead, we explore using a diffusion decoder that allows reconstruction without such dependency. We evaluate our method on datasets for music instrument classification (Medley-Solos-DB) and genre recognition (GTZAN and a larger in-house dataset), the latter being a more challenging task not previously addressed with prototypical networks. We find that the prototype-based models preserve most of the performance achieved with the autoencoder embeddings, while the sonification of prototypes aids the understanding of the classifier's behavior. (See the illustrative sketch at the end of this record.)
  • Description

    This work has been accepted at the ICASSP Workshop on Explainable AI for Speech and Audio (XAI-SA) in Seoul, Korea, on April 15, 2024.
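The abstract describes classifying music audio by its distance to learnable prototypes that live in the embedding space of a frozen, pre-trained autoencoder. The following is a minimal PyTorch sketch of that idea, not the authors' implementation: the embedding dimensionality, the number of prototypes per class, and the negative squared Euclidean similarity are assumptions made for illustration.

    import torch
    import torch.nn as nn

    class PrototypeClassifier(nn.Module):
        """Toy prototype-based classifier over frozen embeddings (illustration only).

        Each class owns a set of learnable prototypes living in the embedding
        space of a pre-trained, frozen autoencoder (e.g. EnCodecMAE). An input
        is scored by its negative squared Euclidean distance to every
        prototype, and a linear layer maps those similarities to class logits.
        Only the prototypes and the linear layer are trained.
        """

        def __init__(self, emb_dim: int = 768, n_classes: int = 10,
                     protos_per_class: int = 5):
            super().__init__()
            n_protos = n_classes * protos_per_class
            # Prototypes are free parameters in the encoder's embedding space.
            self.prototypes = nn.Parameter(torch.randn(n_protos, emb_dim))
            # Maps prototype similarities to class logits.
            self.classifier = nn.Linear(n_protos, n_classes)

        def forward(self, z: torch.Tensor) -> torch.Tensor:
            # z: (batch, emb_dim) embeddings produced by the frozen encoder.
            sim = -torch.cdist(z, self.prototypes) ** 2  # (batch, n_protos)
            return self.classifier(sim)

    # Usage: embeddings would come from the frozen pre-trained encoder
    # (not shown); random tensors stand in for them here.
    model = PrototypeClassifier(emb_dim=768, n_classes=10, protos_per_class=5)
    z = torch.randn(4, 768)   # stand-in for a batch of encoder outputs
    logits = model(z)         # shape (4, 10)

In the paper, the learned prototypes can additionally be decoded back to audio (there, via a diffusion decoder) so that the classifier's decisions can be sonified; that decoding step is outside the scope of this sketch.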