Authors: Ferraro, Andrés; Bogdanov, Dmitry; Serra, Xavier; Jeon, Jay Ho; Yoon, Jason
Date issued: 2020 (record deposited 2023-03-07)
Citation: Ferraro A, Bogdanov D, Serra X, Jeon JH, Yoon J. How low can you go? Reducing frequency and time resolution in current CNN architectures for music auto-tagging. In: 28th European Signal Processing Conference (EUSIPCO 2020): proceedings; 2020 Jan 18-21; Amsterdam, Netherlands. Piscataway: IEEE; 2020. p. 131-5. DOI: 10.23919/Eusipco47968.2020.9287769
ISSN: 2219-5491
Handle: http://hdl.handle.net/10230/56074
Note: Paper presented at the 28th European Signal Processing Conference (EUSIPCO 2020), held 18-21 January 2020 in Amsterdam, Netherlands.

Abstract: Automatic tagging of music is an important research topic in Music Information Retrieval, and the audio analysis algorithms proposed for this task have improved with advances in deep learning. In particular, many state-of-the-art systems use Convolutional Neural Networks operating on mel-spectrogram representations of the audio. In this paper, we compare commonly used mel-spectrogram representations and evaluate the model performance that can be achieved when the input size is reduced, both by using fewer frequency bands and by lowering the frame rate (i.e., reducing time resolution). We use the MagnaTagaTune dataset for comprehensive performance comparisons and then compare selected configurations on the larger Million Song Dataset. The results of this study can help researchers and practitioners make trade-off decisions between model accuracy, data storage size, and training and inference times.

Rights: © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. http://dx.doi.org/10.23919/Eusipco47968.2020.9287769
Title: How low can you go? Reducing frequency and time resolution in current CNN architectures for music auto-tagging
Type: Conference object
Keywords: music auto-tagging; audio classification; convolutional neural networks
Access: Open access
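
The paper compares mel-spectrogram inputs at reduced frequency and time resolution. As a minimal sketch of what such a preprocessing step might look like (not the authors' exact configuration), the snippet below uses librosa; the sample rate, FFT size, hop lengths, and mel-band counts are illustrative assumptions, and "track.mp3" is a placeholder file name.

```python
# Sketch: computing mel-spectrogram inputs at different resolutions.
# All parameter values are illustrative assumptions, not the exact
# configurations evaluated in the paper.
import librosa
import numpy as np

def mel_input(path, sr=16000, n_fft=512, hop_length=256, n_mels=96):
    """Load audio and return a log-scaled mel-spectrogram of shape (n_mels, frames)."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels
    )
    return librosa.power_to_db(mel, ref=np.max)

# Baseline vs. reduced-resolution variants: fewer mel bands lower the
# frequency resolution; larger hop lengths lower the frame rate (time resolution).
for n_mels, hop in [(96, 256), (48, 256), (96, 1024), (48, 1024)]:
    spec = mel_input("track.mp3", n_mels=n_mels, hop_length=hop)
    print(f"n_mels={n_mels:3d} hop={hop:4d} -> input shape {spec.shape}")
```

Printing the resulting input shapes makes the storage and compute trade-off discussed in the abstract concrete: halving the mel bands or quadrupling the hop length shrinks the CNN input (and the stored spectrograms) by the same factor.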