Authors: Gulordava, Kristina; Aina, Laura; Boleda, Gemma
Date available: 2020-05-08
Date issued: 2018
Citation: Gulordava K, Aina L, Boleda G. How to represent a word and predict it, too: improving tied architectures for language modelling. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing; 2018 Oct 31 - Nov 4; Brussels, Belgium. Stroudsburg: Association for Computational Linguistics; 2018. p. 2936–41.
Handle: http://hdl.handle.net/10230/44468
Description: Paper presented at the Conference on Empirical Methods in Natural Language Processing, held 31 October - 4 November 2018 in Brussels, Belgium.
Abstract: Recent state-of-the-art neural language models share the representations of words between the input and output mappings. We propose a simple modification to these architectures that decouples the hidden state from the word-embedding prediction. Our architecture yields comparable or better results than previous tied models and models without tying, with a much smaller number of parameters. We also extend our proposal to word2vec models, showing that tying is appropriate for general word prediction tasks.
Format: application/pdf
Language: English
Rights: © ACL, Creative Commons Attribution 4.0 License
Title: How to represent a word and predict it, too: improving tied architectures for language modelling
Type: info:eu-repo/semantics/conferenceObject
Access rights: info:eu-repo/semantics/openAccess
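
The abstract describes a tied-embedding language model in which an added mapping decouples the hidden state from the word-embedding prediction. The sketch below shows one plausible reading of that idea in PyTorch; the class name TiedLM, the layer sizes, and the choice of a plain linear projection are illustrative assumptions, not the authors' released code.

    # Minimal sketch: an LSTM language model with tied input/output
    # embeddings, plus a projection that decouples the hidden state from
    # the predicted word embedding (illustrative, not the paper's code).
    import torch
    import torch.nn as nn

    class TiedLM(nn.Module):
        def __init__(self, vocab_size, emb_dim, hidden_dim):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, emb_dim)
            self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
            # Decoupling projection (assumed form): maps the hidden state
            # into embedding space so the same embedding matrix can serve
            # as both the input lookup table and the output layer.
            self.proj = nn.Linear(hidden_dim, emb_dim)

        def forward(self, tokens):
            h, _ = self.lstm(self.embedding(tokens))
            pred_emb = self.proj(h)  # predicted word embedding per position
            # Tied output layer: logits are dot products between the
            # predicted embedding and the (shared) input embeddings.
            logits = pred_emb @ self.embedding.weight.t()
            return logits

    # Usage: batch of 2 sequences of length 35 over a 10k vocabulary.
    model = TiedLM(vocab_size=10000, emb_dim=300, hidden_dim=650)
    logits = model(torch.randint(0, 10000, (2, 35)))  # shape (2, 35, 10000)

Because the output layer reuses the embedding matrix and only the small projection is added, the parameter count stays well below that of an untied model with a separate vocabulary-sized output matrix, consistent with the abstract's claim of fewer parameters.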