How much pretraining data do language models need to learn syntax?


  • dc.contributor.author Pérez-Mayos, Laura
  • dc.contributor.author Ballesteros, Miguel
  • dc.contributor.author Wanner, Leo
  • dc.date.accessioned 2023-02-23T07:10:56Z
  • dc.date.available 2023-02-23T07:10:56Z
  • dc.date.issued 2021
  • dc.description Paper presented at the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), held online from 7 to 11 November 2021.
  • dc.description.abstract Transformer-based pretrained language models achieve outstanding results on many well-known NLU benchmarks. However, while pretraining methods are very convenient, they are expensive in terms of time and resources. This calls for a study of the impact of pretraining data size on the knowledge of the models. We explore this impact on the syntactic capabilities of RoBERTa, using models trained on incremental sizes of raw text data. First, we use syntactic structural probes to determine whether models pretrained on more data encode more syntactic information. Second, we perform a targeted syntactic evaluation to analyze the impact of pretraining data size on the syntactic generalization performance of the models. Third, we compare the performance of the different models on three downstream applications: part-of-speech tagging, dependency parsing and paraphrase identification. We complement our study with an analysis of the cost-benefit trade-off of training such models. Our experiments show that while models pretrained on more data encode more syntactic knowledge and perform better on downstream applications, they do not always perform better across the different syntactic phenomena, and they come at a higher financial and environmental cost.
  • dc.description.sponsorship This work has been partially funded by the European Commission via its H2020 Research Program under the contract numbers 786731, 825079, 870930, and 952133. This work has been partially supported by the ICT PhD program of Universitat Pompeu Fabra through a travel grant.
  • dc.format.mimetype application/pdf
  • dc.identifier.citation Pérez-Mayos L, Ballesteros M, Wanner L. How much pretraining data do language models need to learn syntax? In: Moens MF, Huang X, Specia L, Yih SW, editors. 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021): proceedings of the conference; 2021 Nov 7-11; online. Stroudsburg: Association for Computational Linguistics; 2021. p. 1571-82. DOI: 10.18653/v1/2021.emnlp-main.118
  • dc.identifier.doi http://dx.doi.org/10.18653/v1/2021.emnlp-main.118
  • dc.identifier.uri http://hdl.handle.net/10230/55885
  • dc.language.iso eng
  • dc.publisher ACL (Association for Computational Linguistics)
  • dc.relation.ispartof Moens MF, Huang X, Specia L, Yih SW, editors. 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021): proceedings of the conference; 2021 Nov 7-11; online. Stroudsburg: Association for Computational Linguistics; 2021. p. 1571-82.
  • dc.relation.projectID info:eu-repo/grantAgreement/EC/H2020/786731
  • dc.relation.projectID info:eu-repo/grantAgreement/EC/H2020/825079
  • dc.relation.projectID info:eu-repo/grantAgreement/EC/H2020/870930
  • dc.relation.projectID info:eu-repo/grantAgreement/EC/H2020/952133
  • dc.rights © ACL, Creative Commons Attribution 4.0 License
  • dc.rights.accessRights info:eu-repo/semantics/openAccess
  • dc.rights.uri https://creativecommons.org/licenses/by/4.0/
  • dc.subject.other Informàtica (Computer science)
  • dc.title How much pretraining data do language models need to learn syntax?
  • dc.type info:eu-repo/semantics/conferenceObject
  • dc.type.version info:eu-repo/semantics/publishedVersion