Authors: Lample, Guillaume; Ballesteros, Miguel; Subramanian, Sandeep; Kawakami, Kazuya; Dyer, Chris
Date issued: 2016 (deposited 2016-12-12)
Citation: Lample G, Ballesteros M, Subramanian S, Kawakami K, Dyer C. Neural architectures for named entity recognition. In: Knight K, Lopez A, Mitchell M, editors. Human Language Technologies. 2016 Conference of the North American Chapter of the Association for Computational Linguistics; 2016 June 12-17; San Diego (CA, USA). [S.l.]: Association for Computational Linguistics (ACL); 2016. p. 260-270.
Handle: http://hdl.handle.net/10230/27725
Note: Paper presented at the 2016 Conference of the North American Chapter of the Association for Computational Linguistics, held in San Diego (CA, USA), June 12-17, 2016.
Abstract: State-of-the-art named entity recognition systems rely heavily on hand-crafted features and domain-specific knowledge in order to learn effectively from the small, supervised training corpora that are available. In this paper, we introduce two new neural architectures: one based on bidirectional LSTMs and conditional random fields, and another that constructs and labels segments using a transition-based approach inspired by shift-reduce parsers. Our models rely on two sources of information about words: character-based word representations learned from the supervised corpus and unsupervised word representations learned from unannotated corpora. Our models obtain state-of-the-art performance in NER in four languages without resorting to any language-specific knowledge or resources such as gazetteers.
Format: application/pdf
Language: English
Rights: © ACL, Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License
Subjects: Natural language processing (Computer science); Computational linguistics
Title: Neural architectures for named entity recognition
Type: info:eu-repo/semantics/conferenceObject
Access: info:eu-repo/semantics/openAccess
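The abstract's first architecture pairs a bidirectional LSTM with a conditional random field output layer; at prediction time, the CRF is decoded with the Viterbi algorithm over per-token emission scores and tag-transition scores. The sketch below, in plain Python with illustrative names (`viterbi_decode`, the toy B/I/O tag set, and all scores are assumptions, not the paper's code), shows that decoding step under those assumptions:

```python
def viterbi_decode(emissions, transitions, tags):
    """Recover the highest-scoring tag sequence for one sentence.

    emissions:   list of {tag: score} dicts, one per token
                 (e.g. produced by a BiLSTM in the paper's setup)
    transitions: {(prev_tag, tag): score} pairwise tag scores
    tags:        list of all tag labels
    """
    # Initialize with the first token's emission scores.
    scores = {t: emissions[0][t] for t in tags}
    backpointers = []
    for emit in emissions[1:]:
        new_scores, bp = {}, {}
        for t in tags:
            # Best previous tag leading into the current tag t.
            best_prev = max(tags, key=lambda p: scores[p] + transitions[(p, t)])
            new_scores[t] = scores[best_prev] + transitions[(best_prev, t)] + emit[t]
            bp[t] = best_prev
        scores = new_scores
        backpointers.append(bp)
    # Backtrack from the best final tag to recover the full path.
    best = max(tags, key=lambda t: scores[t])
    path = [best]
    for bp in reversed(backpointers):
        path.append(bp[path[-1]])
    return list(reversed(path))
```

For example, with a transition table that rewards B→I and heavily penalizes O→I, the decoder prefers well-formed B-I-O spans even when per-token emissions are ambiguous; this global consistency over the tag sequence is what the CRF layer adds on top of the BiLSTM's per-token scores.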