Evaluating off-the-shelf machine listening and natural language models for automated audio captioning

dc.contributor.author: Weck, Benno
dc.contributor.author: Favory, Xavier
dc.contributor.author: Drossos, Konstantinos
dc.contributor.author: Serra, Xavier
dc.date.accessioned: 2025-05-30T05:48:13Z
dc.date.available: 2025-05-30T05:48:13Z
dc.date.issued: 2021
dc.description.abstract: Automated audio captioning (AAC) is the task of automatically generating textual descriptions for general audio signals. A captioning system has to identify various kinds of information in the input signal and express them in natural language. Existing works mainly focus on investigating new methods and on improving performance measured on existing datasets. Since AAC has attracted attention only recently, very few works study the performance of existing pre-trained audio and natural language processing resources for this task. In this paper, we evaluate the performance of off-the-shelf models with a Transformer-based captioning approach. We utilize the freely available Clotho dataset to compare four different pre-trained machine listening models, four word embedding models, and their combinations in many different settings. Our evaluation suggests that YAMNet combined with BERT embeddings produces the best captions. Moreover, in general, fine-tuning pre-trained word embeddings can lead to better performance. Finally, we show that sequences of audio embeddings can be processed using a Transformer encoder to produce higher-quality captions.
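
The abstract describes an encoder-decoder setup in which a sequence of pre-extracted audio embeddings (e.g. from YAMNet) is processed by a Transformer encoder and decoded into caption tokens. The following is a minimal sketch of such a pipeline in PyTorch; the AudioCaptioner class, its dimensions, and its layer counts are illustrative assumptions rather than the authors' exact configuration, and the audio embeddings are assumed to be computed beforehand by a separate machine listening model.

    import torch
    import torch.nn as nn

    class AudioCaptioner(nn.Module):
        """Illustrative captioning model: a Transformer encoder over a sequence of
        pre-extracted audio embeddings and a Transformer decoder that predicts
        caption tokens. Positional encodings are omitted for brevity."""

        def __init__(self, audio_dim=1024, d_model=256, vocab_size=5000,
                     nhead=4, num_layers=3):
            super().__init__()
            # Project frame-level audio embeddings (e.g. 1024-d YAMNet frames)
            # to the Transformer model width.
            self.audio_proj = nn.Linear(audio_dim, d_model)
            enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            self.encoder = nn.TransformerEncoder(enc_layer, num_layers)
            # Word embeddings; in the paper these could be initialised from a
            # pre-trained model such as BERT, here they are learned from scratch.
            self.word_emb = nn.Embedding(vocab_size, d_model)
            dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
            self.decoder = nn.TransformerDecoder(dec_layer, num_layers)
            self.out = nn.Linear(d_model, vocab_size)

        def forward(self, audio_emb, token_ids):
            # audio_emb: (batch, n_frames, audio_dim); token_ids: (batch, seq_len)
            memory = self.encoder(self.audio_proj(audio_emb))
            tgt = self.word_emb(token_ids)
            # Causal mask so each caption position attends only to earlier tokens.
            seq_len = token_ids.size(1)
            causal_mask = torch.triu(
                torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
            decoded = self.decoder(tgt, memory, tgt_mask=causal_mask)
            return self.out(decoded)  # (batch, seq_len, vocab_size) logits

    # Example with random tensors standing in for real embeddings and tokens.
    model = AudioCaptioner()
    audio = torch.randn(2, 30, 1024)           # 30 frames of 1024-d audio embeddings
    tokens = torch.randint(0, 5000, (2, 12))   # partial caption token ids
    logits = model(audio, tokens)              # -> shape (2, 12, 5000)
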
dc.description.sponsorship: K. Drossos has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 957337, project MARVEL.
dc.format.mimetype: application/pdf
dc.identifier.citation: Weck B, Favory X, Drossos K, Serra X. Evaluating off-the-shelf machine listening and natural language models for automated audio captioning. In: Font F, Mesaros A, Ellis DPW, Fonseca E, Fuentes M, Elizalde B, editors. Proceedings of the 6th Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE 2021); 2021 Nov 15-19; Online. Barcelona: Universitat Pompeu Fabra, Music Technology Group; 2021. p. 60-4.
dc.identifier.uri: http://hdl.handle.net/10230/70561
dc.language.iso: eng
dc.publisher: Universitat Pompeu Fabra
dc.relation.ispartof: Font F, Mesaros A, Ellis DPW, Fonseca E, Fuentes M, Elizalde B, editors. Proceedings of the 6th Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE 2021); 2021 Nov 15-19; Online. Barcelona: Universitat Pompeu Fabra, Music Technology Group; 2021.
dc.relation.projectID: info:eu-repo/grantAgreement/EC/H2020/957337
dc.rights: This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit: http://creativecommons.org/licenses/by/4.0/
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject.keyword: Audio captioning
dc.subject.keyword: Transfer learning
dc.subject.keyword: Word embeddings
dc.subject.keyword: Machine listening
dc.subject.keyword: Transformer
dc.title: Evaluating off-the-shelf machine listening and natural language models for automated audio captioning
dc.type: info:eu-repo/semantics/conferenceObject
dc.type.version: info:eu-repo/semantics/publishedVersion

Files

Original bundle

Name: Weck_DCASE_Eval.pdf
Size: 451.28 KB
Format: Adobe Portable Document Format
