Authors: Herbelot, Aurélie; Baroni, Marco
Date accessioned: 2020-12-10
Date available: 2020-12-10
Date issued: 2018
Citation: Herbelot A, Baroni M. High-risk learning: acquiring new word vectors from tiny data. In: Palmer M, Hwa R, Riedel S, editors. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing; 2017 Sep 7-11; Copenhagen, Denmark. Stroudsburg (PA): Association for Computational Linguistics; 2017. p. 304-9. DOI: 10.18653/v1/D17-1030
Handle: http://hdl.handle.net/10230/45966
Note: Paper presented at the 2017 Conference on Empirical Methods in Natural Language Processing, held September 7-11, 2017, in Copenhagen, Denmark.
Abstract: Distributional semantics models are known to struggle with small data. It is generally accepted that in order to learn 'a good vector' for a word, a model must have sufficient examples of its usage. This contradicts the fact that humans can guess the meaning of a word from a few occurrences only. In this paper, we show that a neural language model such as Word2Vec only necessitates minor modifications to its standard architecture to learn new terms from tiny data, using background knowledge from a previously learnt semantic space. We test our model on word definitions and on a nonce task involving 2-6 sentences' worth of context, showing a large increase in performance over state-of-the-art models on the definitional task.
Format: application/pdf
Language: eng
Rights: © ACL, Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/)
Title: High-risk learning: acquiring new word vectors from tiny data
Type: info:eu-repo/semantics/conferenceObject
DOI: http://dx.doi.org/10.18653/v1/D17-1030
Access rights: info:eu-repo/semantics/openAccess
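The abstract describes learning a vector for an unseen word from a handful of context sentences by making small changes to a Word2Vec-style learner, while leaving a pretrained background space frozen. The following is a minimal, illustrative sketch of that general idea only, not the authors' implementation: the toy vocabulary, the initialization-by-averaging, the `learn_nonce` function, and all hyperparameters (the high initial learning rate, its decay, the epoch count) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "background" semantic space: stand-ins for pretrained Word2Vec
# vectors (dimension and values are illustrative, not from the paper).
dim = 10
background = {w: rng.normal(size=dim) for w in
              ["a", "small", "furry", "animal", "that", "purrs"]}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def learn_nonce(context_words, background, lr=1.0, decay=0.9, epochs=5):
    """Learn a vector for an unknown word from a few context words,
    updating only the new vector; the background space stays frozen."""
    # Initialize from the known context vectors (here: their mean).
    v = np.mean([background[w] for w in context_words], axis=0)
    vocab = list(background)
    for _ in range(epochs):
        for w in context_words:
            # Positive step: pull the nonce vector towards a true context word.
            g = 1.0 - sigmoid(v @ background[w])
            v += lr * g * background[w]
            # Negative step: push away from a randomly sampled vocabulary word.
            neg = background[rng.choice(vocab)]
            v -= lr * sigmoid(v @ neg) * neg
        lr *= decay  # "high-risk" start: large initial rate, decayed over time
    return v

nonce_vec = learn_nonce(["small", "furry", "animal", "purrs"], background)
print(nonce_vec.shape)  # (10,)
```

The high initial learning rate is the "risky" part the title alludes to: with only a few observations, the learner must commit strongly to the little evidence it has rather than average it away.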