Title: How to represent a word and predict it, too: improving tied architectures for language modelling
Authors: Gulordava, Kristina; Aina, Laura; Boleda, Gemma
Date issued: 2018
Date available: 2018-10-30
Citation: Gulordava K, Aina L, Boleda G. How to represent a word and predict it, too: improving tied architectures for language modelling. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing; 2018 Oct 31 - Nov 4; Brussels, Belgium. Stroudsburg: Association for Computational Linguistics; 2018. p. 2936–41.
Handle: http://hdl.handle.net/10230/35675
Note: Paper presented at EMNLP 2018, the Conference on Empirical Methods in Natural Language Processing, held in Brussels (Belgium), October 31 - November 4, 2018.
Abstract: Recent state-of-the-art neural language models share the representations of words given by the input and output mappings. We propose a simple modification to these architectures that decouples the hidden state from the word embedding prediction. Our architecture leads to comparable or better results than previous tied models and models without tying, with a much smaller number of parameters. We also extend our proposal to word2vec models, showing that tying is appropriate for general word prediction tasks.
Format: application/pdf
Language: English
Rights: © ACL; Creative Commons Attribution 4.0 License
Type: info:eu-repo/semantics/conferenceObject
Keywords: Language models; Word embeddings; Neural networks; Tied representations
Access rights: info:eu-repo/semantics/openAccess
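
The modification described in the abstract can be illustrated with a short sketch: a language model whose output layer shares its weight matrix with the input embedding, plus a linear projection that maps the recurrent hidden state back into embedding space before scoring. What follows is a minimal PyTorch-style example; the class name, layer sizes, and exact placement of the projection are illustrative assumptions, not the authors' reference implementation.

# Minimal sketch of a tied LSTM language model with a linear projection
# that decouples the hidden state from the word embedding prediction.
# Hyperparameters and module names are illustrative assumptions.
import torch
import torch.nn as nn

class TiedProjectionLM(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 300, hidden_dim: int = 650):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        # Extra mapping from hidden space to embedding space: this lets the
        # hidden state differ from the vector used to score output words.
        self.proj = nn.Linear(hidden_dim, emb_dim)
        # Output layer shares (ties) its weight matrix with the input
        # embedding; both have shape (vocab_size, emb_dim).
        self.decoder = nn.Linear(emb_dim, vocab_size, bias=False)
        self.decoder.weight = self.embedding.weight

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        x = self.embedding(tokens)   # (batch, seq, emb_dim)
        h, _ = self.lstm(x)          # (batch, seq, hidden_dim)
        z = self.proj(h)             # project back into embedding space
        return self.decoder(z)       # logits over the vocabulary

# Usage sketch: score a toy batch of token ids.
model = TiedProjectionLM(vocab_size=10000)
logits = model(torch.randint(0, 10000, (2, 5)))
print(logits.shape)  # torch.Size([2, 5, 10000])

Because the output layer reuses the embedding matrix and the projection is only hidden_dim x emb_dim, the model avoids a separate vocab_size x hidden_dim softmax matrix, which is consistent with the abstract's claim of a much smaller number of parameters.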