How low can you go? Reducing frequency and time resolution in current CNN architectures for music auto-tagging
Citation
- Ferraro A, Bogdanov D, Serra X, Ho Jeon J, Yoon J. How low can you go? Reducing frequency and time resolution in current CNN architectures for music auto-tagging. In: 28th European Signal Processing Conference (EUSIPCO 2020): proceedings; 2021 Jan 18-21; Amsterdam, Netherlands. [Piscataway]: IEEE; 2020. p. 131-5. DOI: 10.23919/Eusipco47968.2020.9287769
Abstract
Automatic tagging of music is an important research topic in Music Information Retrieval, and audio analysis algorithms proposed for this task have improved with advances in deep learning. In particular, many state-of-the-art systems use Convolutional Neural Networks and operate on mel-spectrogram representations of the audio. In this paper, we compare commonly used mel-spectrogram representations and evaluate the model performance that can be achieved by reducing the input size, using both fewer mel frequency bands and lower time resolution (larger hop sizes). We use the MagnaTagaTune dataset for comprehensive performance comparisons and then compare selected configurations on the larger Million Song Dataset. The results of this study can help researchers and practitioners weigh the trade-off between model accuracy, data storage size, and training and inference times.
Description
Paper presented at the 28th European Signal Processing Conference (EUSIPCO 2020), held 18-21 January 2021 in Amsterdam, Netherlands.
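A minimal sketch of the kind of input-size reduction the abstract describes: computing mel-spectrograms with fewer mel bands (n_mels) and a larger hop size (hop_length), which shrinks the CNN input along both the frequency and time axes. This uses librosa and illustrative parameter values, not the authors' code or the exact configurations evaluated in the paper.

```python
import librosa

# Load any audio clip; librosa's bundled example is used here for convenience.
y, sr = librosa.load(librosa.ex("trumpet"))

# Baseline vs. reduced-resolution configurations (hypothetical values,
# not taken from the paper).
configs = [
    {"n_mels": 128, "hop_length": 256},   # higher frequency/time resolution
    {"n_mels": 48,  "hop_length": 1024},  # fewer bands, lower frame rate
]

for cfg in configs:
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=2048,
        hop_length=cfg["hop_length"], n_mels=cfg["n_mels"],
    )
    # Log compression, as commonly applied before feeding CNNs.
    log_mel = librosa.power_to_db(mel)
    # Shape is (n_mels, n_frames): both dimensions shrink, reducing
    # storage size and training/inference cost.
    print(cfg, "->", log_mel.shape)
```

The second configuration yields an input several times smaller than the first, which is the storage/compute vs. accuracy trade-off the paper quantifies.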