What do entity-centric models learn? Insights from entity linking in multi-party dialogue
- dc.contributor.author Boleda, Gemma
- dc.contributor.author Aina, Laura
- dc.contributor.author Silberer, Carina
- dc.contributor.author Sorodoc, Ionut-Teodor
- dc.contributor.author Westera, Matthijs
- dc.date.accessioned 2019-10-16T07:36:44Z
- dc.date.available 2019-10-16T07:36:44Z
- dc.date.issued 2019
- dc.description Paper presented at the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019), held June 2-7, 2019 in Minneapolis, United States of America.
- dc.description.abstract Humans use language to refer to entities in the external world. Motivated by this, in recent years several models that incorporate a bias towards learning entity representations have been proposed. Such entity-centric models have shown empirical success, but we still know little about why. In this paper we analyze the behavior of two recently proposed entity-centric models in a referential task, Entity Linking in Multi-party Dialogue (SemEval 2018 Task 4). We show that these models outperform the state of the art on this task, and that they do better on lower frequency entities than a counterpart model that is not entity-centric, with the same model size. We argue that making models entity-centric naturally fosters good architectural decisions. However, we also show that these models do not really build entity representations and that they make poor use of linguistic context. These negative results underscore the need for model analysis, to test whether the motivations for particular architectures are borne out in how models behave when deployed.
- dc.description.sponsorship This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 715154), and from the Spanish Ramón y Cajal programme (grant RYC-2015-18907).
- dc.format.mimetype application/pdf
- dc.identifier.citation Aina L, Silberer C, Sorodoc I, Westera M, Boleda G. What do entity-centric models learn? Insights from entity linking in multi-party dialogue. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies; 2019 Jun 2-7; Minneapolis, United States of America. Stroudsburg (PA): ACL; 2019. p. 3772–83.
- dc.identifier.uri http://hdl.handle.net/10230/42450
- dc.language.iso eng
- dc.publisher ACL (Association for Computational Linguistics)
- dc.relation.ispartof Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies; 2019 Jun 2-7; Minneapolis, United States of America. Stroudsburg (PA): ACL; 2019. p. 3772–83.
- dc.relation.projectID info:eu-repo/grantAgreement/EC/H2020/715154
- dc.rights © ACL, Creative Commons Attribution 4.0 License
- dc.rights.accessRights info:eu-repo/semantics/openAccess
- dc.rights.uri http://creativecommons.org/licenses/by/4.0/
- dc.subject.keyword Deep learning
- dc.subject.keyword Reference
- dc.subject.keyword Entity linking
- dc.subject.keyword Dialogue
- dc.subject.keyword Computational semantics
- dc.subject.keyword Computational linguistics
- dc.title What do entity-centric models learn? Insights from entity linking in multi-party dialogue
- dc.type info:eu-repo/semantics/conferenceObject
- dc.type.version info:eu-repo/semantics/publishedVersion