Tabula nearly rasa: probing the linguistic knowledge of character-level neural language models trained on unsegmented text

dc.contributor.author Hahn, Michael
dc.contributor.author Baroni, Marco
dc.date.accessioned 2020-11-20T10:10:37Z
dc.date.available 2020-11-20T10:10:37Z
dc.date.issued 2019
dc.identifier.citation Hahn M, Baroni M. Tabula nearly rasa: probing the linguistic knowledge of character-level neural language models trained on unsegmented text. Trans Assoc Comput Linguist. 2019;7:467-84. DOI: 10.1162/tacl_a_00283
dc.identifier.issn 2307-387X
dc.identifier.uri http://hdl.handle.net/10230/45815
dc.description.abstract Recurrent neural networks (RNNs) have reached striking performance in many natural language processing tasks. This has renewed interest in whether these generic sequence processing devices are inducing genuine linguistic knowledge. Nearly all current analytical studies, however, initialize the RNNs with a vocabulary of known words, and feed them tokenized input during training. We present a multi-lingual study of the linguistic knowledge encoded in RNNs trained as character-level language models, on input data with word boundaries removed. These networks face a tougher and more cognitively realistic task, having to discover any useful linguistic unit from scratch based on input statistics. The results show that our “near tabula rasa” RNNs are mostly able to solve morphological, syntactic and semantic tasks that intuitively presuppose word-level knowledge, and indeed they learned, to some extent, to track word boundaries. Our study opens the door to speculations about the necessity of an explicit, rigid word lexicon in language learning and usage.
dc.format.mimetype application/pdf
dc.language.iso eng
dc.publisher MIT Press
dc.relation.ispartof Transactions of the Association for Computational Linguistics. 2019;7:467-84
dc.rights This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode
dc.rights.uri https://creativecommons.org/licenses/by/4.0/
dc.title Tabula nearly rasa: probing the linguistic knowledge of character-level neural language models trained on unsegmented text
dc.type info:eu-repo/semantics/article
dc.identifier.doi http://dx.doi.org/10.1162/tacl_a_00283
dc.rights.accessRights info:eu-repo/semantics/openAccess
dc.type.version info:eu-repo/semantics/publishedVersion
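
The abstract above describes the paper's core setup: recurrent networks trained as character-level language models on text whose word boundaries have been removed. As a rough illustration of that setup (not the authors' implementation; the toy corpus, model dimensions, and training loop below are all hypothetical assumptions), a minimal PyTorch sketch might look like this:

    # Hypothetical sketch (not the paper's code): train a character-level LSTM
    # language model on text whose word boundaries (spaces) have been removed,
    # as in the "near tabula rasa" setup described in the abstract.
    import torch
    import torch.nn as nn

    corpus = "the cat sat on the mat and the dog ate the bone"  # toy stand-in corpus
    corpus = corpus.replace(" ", "")                # remove word boundaries
    vocab = sorted(set(corpus))
    stoi = {ch: i for i, ch in enumerate(vocab)}
    data = torch.tensor([stoi[ch] for ch in corpus])

    class CharLM(nn.Module):
        def __init__(self, vocab_size, dim=64):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, dim)
            self.rnn = nn.LSTM(dim, dim, batch_first=True)
            self.out = nn.Linear(dim, vocab_size)

        def forward(self, x):
            h, _ = self.rnn(self.emb(x))            # h: hidden state per character
            return self.out(h)                      # logits over the next character

    model = CharLM(len(vocab))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = data[:-1].unsqueeze(0)                      # input characters
    y = data[1:].unsqueeze(0)                       # next-character targets
    for step in range(200):
        logits = model(x)                           # (batch, time, vocab)
        loss = nn.functional.cross_entropy(logits.transpose(1, 2), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

In the probing methodology the abstract refers to, diagnostic classifiers are trained on a network's hidden states; in a sketch like this one, the analogue would be reading out the per-character states h to test, for example, whether word boundaries can be decoded from them.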
