Welcome to the UPF Digital Repository

Efficient supervised training of audio transformers for music representation learning

dc.contributor.author Alonso Jiménez, Pablo
dc.contributor.author Serra, Xavier
dc.contributor.author Bogdanov, Dmitry
dc.date.accessioned 2023-10-03T12:31:07Z
dc.date.available 2023-10-03T12:31:07Z
dc.date.issued 2023-10-03
dc.identifier.uri http://hdl.handle.net/10230/58023
dc.description This work was accepted at the 24th International Society for Music Information Retrieval Conference (ISMIR 2023), held in Milan, Italy, October 5-9, 2023.
dc.description.abstract In this work, we address music representation learning using convolution-free transformers. We build on top of existing spectrogram-based audio transformers such as AST and train our models on a supervised task using patchout training similar to PaSST. In contrast to previous works, we study how specific design decisions affect downstream music tagging tasks instead of focusing on the training task. We assess the impact of initializing the models with different pre-trained weights, using various input audio segment lengths, using learned representations from different blocks and tokens of the transformer for downstream tasks, and applying patchout at inference to speed up feature extraction. We find that 1) initializing the model from ImageNet or AudioSet weights and using longer input segments are beneficial both for the training and downstream tasks, 2) the best representations for the considered downstream tasks are located in the middle blocks of the transformer, and 3) using patchout at inference allows faster processing than our convolutional baselines while maintaining superior performance. The resulting models, MAEST, are publicly available and obtain the best performance among open models in music tagging tasks.
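The patchout idea mentioned in the abstract (randomly discarding a fraction of the spectrogram patch tokens before they enter the transformer, which shortens the sequence and speeds up both training and inference) can be sketched as follows. This is an illustrative sketch only, not the authors' implementation; the function name, shapes, and keep ratio are assumptions.

```python
import numpy as np

def patchout(patches, keep_ratio=0.5, rng=None):
    """Randomly keep a fraction of patch tokens, preserving order.

    patches: array of shape (num_patches, embed_dim), e.g. flattened
             spectrogram patch embeddings before the transformer blocks.
    keep_ratio: fraction of tokens to retain (the rest are dropped).
    """
    rng = rng or np.random.default_rng()
    n = patches.shape[0]
    n_keep = max(1, int(n * keep_ratio))
    # Sample indices without replacement, then sort to keep temporal order.
    idx = np.sort(rng.choice(n, size=n_keep, replace=False))
    return patches[idx]

# Example: 100 patch embeddings of dimension 768, keeping half of them.
tokens = np.random.randn(100, 768)
kept = patchout(tokens, keep_ratio=0.5)
print(kept.shape)  # (50, 768)
```

Because transformer self-attention scales quadratically with sequence length, halving the token count roughly quarters the attention cost, which is why applying patchout at inference can outperform convolutional baselines in speed.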
dc.description.sponsorship This work has been supported by the Musical AI project - PID2019-111403GB-I00/AEI/10.13039/501100011033, funded by the Spanish Ministerio de Ciencia e Innovación and the Agencia Estatal de Investigación.
dc.format.mimetype application/pdf
dc.language.iso eng
dc.rights © P. Alonso-Jiménez, X. Serra, and D. Bogdanov. Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Attribution: P. Alonso-Jiménez, X. Serra, and D. Bogdanov, “Efficient Supervised Training of Audio Transformers for Music Representation Learning”, in Proc. of the 24th Int. Society for Music Information Retrieval Conf., Milan, Italy, 2023.
dc.rights.uri https://creativecommons.org/licenses/by/4.0/
dc.title Efficient supervised training of audio transformers for music representation learning
dc.type info:eu-repo/semantics/preprint
dc.relation.projectID info:eu-repo/grantAgreement/ES/2PE/PID2019-111403GB-I00
dc.rights.accessRights info:eu-repo/semantics/openAccess
dc.type.version info:eu-repo/semantics/submittedVersion
