Probing for referential information in language models
- dc.contributor.author Sorodoc, Ionut-Teodor
- dc.contributor.author Gulordava, Kristina
- dc.contributor.author Boleda, Gemma
- dc.date.accessioned 2021-02-03T11:00:35Z
- dc.date.available 2021-02-03T11:00:35Z
- dc.date.issued 2020
- dc.description Paper presented at the 58th Annual Meeting of the Association for Computational Linguistics, held virtually from July 5 to 10, 2020.
- dc.description.abstract Language models keep track of complex information about the preceding context – including, e.g., syntactic relations in a sentence. We investigate whether they also capture information beneficial for resolving pronominal anaphora in English. We analyze two state-of-the-art models with LSTM and Transformer architectures, via probe tasks and analysis on a coreference-annotated corpus. The Transformer outperforms the LSTM in all analyses. Our results suggest that language models are more successful at learning grammatical constraints than they are at learning truly referential information, in the sense of capturing the fact that we use language to refer to entities in the world. However, we find traces of the latter aspect, too.
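As a rough illustration of the probe-task setup the abstract refers to (a minimal sketch under assumed inputs, not the authors' implementation: the placeholder vectors, labels, and dimensionality are hypothetical), a linear probe trained on frozen language-model representations could look like this:

```python
# Illustrative probing setup: train a linear classifier ("probe") on frozen
# LM hidden states to predict a referential property, e.g. whether a pronoun
# and a candidate antecedent corefer. Data below is random placeholder data;
# real inputs would be vectors extracted from an LSTM or Transformer LM over
# a coreference-annotated corpus.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

X = rng.normal(size=(2000, 768))    # placeholder hidden-state representations
y = rng.integers(0, 2, size=2000)   # placeholder binary coreference labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# If a simple linear classifier can recover the label from the frozen
# representations, the LM is taken to encode that information.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print("probe accuracy:", accuracy_score(y_test, probe.predict(X_test)))
```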
- dc.description.sponsorship This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 715154), and from the Spanish Ramón y Cajal programme (grant RYC-2015-18907). We thankfully acknowledge the computer resources at CTE-POWER and the technical support provided by Barcelona Supercomputing Center (RES-IM2019-3-0006).
- dc.format.mimetype application/pdf
- dc.identifier.citation Sorodoc IT, Gulordava K, Boleda G. Probing for referential information in language models. In: Jurafsky D, Chai J, Schluter N, Tetreault J, editors. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics; 2020 Jul 5-10; Online. Stroudsburg (PA): ACL; 2020. p. 4177-89. DOI: 10.18653/v1/2020.acl-main.384
- dc.identifier.doi http://dx.doi.org/10.18653/v1/2020.acl-main.384
- dc.identifier.uri http://hdl.handle.net/10230/46317
- dc.language.iso eng
- dc.publisher ACL (Association for Computational Linguistics)
- dc.relation.ispartof Jurafsky D, Chai J, Schluter N, Tetreault J, editors. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics; 2020 Jul 5-10; Online. Stroudsburg (PA): ACL; 2020. p. 4177-89
- dc.relation.projectID info:eu-repo/grantAgreement/EC/H2020/715154
- dc.rights © ACL, Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/)
- dc.rights.accessRights info:eu-repo/semantics/openAccess
- dc.rights.uri https://creativecommons.org/licenses/by/4.0/
- dc.title Probing for referential information in language models
- dc.type info:eu-repo/semantics/conferenceObject
- dc.type.version info:eu-repo/semantics/publishedVersion