Evaluating off-the-shelf machine listening and natural language models for automated audio captioning


  • dc.contributor.author Weck, Benno
  • dc.contributor.author Favory, Xavier
  • dc.contributor.author Drossos, Konstantinos
  • dc.contributor.author Serra, Xavier
  • dc.date.accessioned 2025-05-30T05:48:13Z
  • dc.date.available 2025-05-30T05:48:13Z
  • dc.date.issued 2021
  • dc.description.abstract Automated audio captioning (AAC) is the task of automatically generating textual descriptions for general audio signals. A captioning system has to identify various kinds of information in the input signal and express them in natural language. Existing works mainly focus on investigating new methods and on improving performance measured on existing datasets. Since AAC has attracted attention only recently, very few works study the performance of existing pre-trained audio and natural language processing resources. In this paper, we evaluate the performance of off-the-shelf models with a Transformer-based captioning approach. We utilize the freely available Clotho dataset to compare four different pre-trained machine listening models, four word embedding models, and their combinations in many different settings. Our evaluation suggests that YAMNet combined with BERT embeddings produces the best captions. Moreover, in general, fine-tuning pre-trained word embeddings can lead to better performance. Finally, we show that sequences of audio embeddings can be processed using a Transformer encoder to produce higher-quality captions. (An illustrative sketch of this pipeline appears after the record below.)
  • dc.description.sponsorship K. Drossos has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 957337, project MARVEL.
  • dc.format.mimetype application/pdf
  • dc.identifier.citation Weck B, Favory X, Drossos K, Serra X. Evaluating off-the-shelf machine listening and natural language models for automated audio captioning. In: Font F, Mesaros A, Ellis DPW, Fonseca E, Fuentes M, Elizalde B, editors. Proceedings of the 6th Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE 2021); 2021 Nov 15-19; Online. Barcelona: Universitat Pompeu Fabra, Music Technology Group; 2021. p. 60-4.
  • dc.identifier.uri http://hdl.handle.net/10230/70561
  • dc.language.iso eng
  • dc.publisher Universitat Pompeu Fabra
  • dc.relation.ispartof Font F, Mesaros A, Ellis DPW, Fonseca E, Fuentes M, Elizalde B, editors. Proceedings of the 6th Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE 2021); 2021 Nov 15-19; Online. Barcelona: Universitat Pompeu Fabra, Music Technology Group; 2021.
  • dc.relation.projectID info:eu-repo/grantAgreement/EC/H2020/957337
  • dc.rights This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit: http://creativecommons.org/licenses/by/4.0/
  • dc.rights.accessRights info:eu-repo/semantics/openAccess
  • dc.rights.uri http://creativecommons.org/licenses/by/4.0/
  • dc.subject.keyword Audio captioning
  • dc.subject.keyword Transfer learning
  • dc.subject.keyword Word embeddings
  • dc.subject.keyword Machine listening
  • dc.subject.keyword Transformer
  • dc.title Evaluating off-the-shelf machine listening and natural language models for automated audio captioning
  • dc.type info:eu-repo/semantics/conferenceObject
  • dc.type.version info:eu-repo/semantics/publishedVersion
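
The abstract describes a pipeline in which pre-extracted audio embeddings (e.g. 1024-dimensional YAMNet frames) are processed by a Transformer encoder while a decoder, whose word embeddings may be initialized from a pre-trained model such as BERT, generates the caption. The following is a minimal sketch of such an architecture, not the authors' code: all layer sizes, the vocabulary size, and the omission of positional encodings are illustrative assumptions.

```python
# Minimal sketch (assumed architecture, not the paper's implementation):
# a sequence of audio embeddings -> Transformer encoder, and a decoder
# whose word embeddings can be initialized from a pre-trained matrix
# (e.g. BERT) and fine-tuned, as the paper's results suggest can help.
import torch
import torch.nn as nn

class CaptioningModel(nn.Module):
    def __init__(self, audio_dim=1024, d_model=768, vocab_size=5000,
                 nhead=8, num_layers=3, pretrained_word_emb=None):
        super().__init__()
        # Project per-frame audio embeddings to the model dimension.
        self.audio_proj = nn.Linear(audio_dim, d_model)
        # Word embeddings; optionally copied from a pre-trained matrix
        # and fine-tuned during training (hypothetical initialization).
        self.word_emb = nn.Embedding(vocab_size, d_model)
        if pretrained_word_emb is not None:
            self.word_emb.weight.data.copy_(pretrained_word_emb)
        # Positional encodings are omitted here for brevity.
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, audio_seq, caption_tokens):
        # audio_seq: (batch, frames, audio_dim)
        # caption_tokens: (batch, caption_len), teacher-forced targets
        src = self.audio_proj(audio_seq)
        tgt = self.word_emb(caption_tokens)
        # Causal mask so each position attends only to earlier tokens.
        causal = self.transformer.generate_square_subsequent_mask(
            caption_tokens.size(1))
        hidden = self.transformer(src, tgt, tgt_mask=causal)
        return self.out(hidden)  # (batch, caption_len, vocab_size) logits

# Illustrative usage with random tensors standing in for real features.
model = CaptioningModel()
audio = torch.randn(2, 20, 1024)          # 20 audio-embedding frames per clip
tokens = torch.randint(0, 5000, (2, 12))  # caption token ids
logits = model(audio, tokens)
print(logits.shape)  # torch.Size([2, 12, 5000])
```

In this sketch the encoder attends over the whole sequence of audio embeddings rather than a single pooled vector, which mirrors the abstract's finding that processing embedding sequences with a Transformer encoder yields higher-quality captions.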