Distilling an ensemble of greedy dependency parsers into one MST parser
Citation
- Kuncoro A, Ballesteros M, Kong L, Dyer C, Smith NA. Distilling an ensemble of greedy dependency parsers into one MST parser. In: Su J, Duh K, Carreras X, editors. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing; 2016 Nov 1-5; Austin, Texas. [Texas]: Association for Computational Linguistics; 2016. p. 1744-53. DOI: 10.18653/v1/d16-1180
Description
Abstract
We introduce two first-order graph-based dependency parsers achieving a new state of the art. The first is a consensus parser built from an ensemble of independently trained greedy LSTM transition-based parsers with different random initializations. We cast this approach as minimum Bayes risk decoding (under the Hamming cost) and argue that weaker consensus within the ensemble is a useful signal of difficulty or ambiguity. The second parser is a “distillation” of the ensemble into a single model. We train the distillation parser using a structured hinge loss objective with a novel cost that incorporates ensemble uncertainty estimates for each possible attachment, thereby avoiding the intractable cross-entropy computations required by applying standard distillation objectives to problems with structured outputs. The first-order distillation parser matches or surpasses the state of the art on English, Chinese, and German.
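As a concrete illustration of the consensus step the abstract describes, the sketch below (not the authors' code) treats each arc's ensemble vote share as an estimate of its posterior probability, so that minimum Bayes risk decoding under the Hamming cost reduces to finding a maximum spanning arborescence; it uses networkx's Chu-Liu/Edmonds solver. The toy vote data and the distillation_cost helper are hypothetical, the latter giving only one plausible shape for the uncertainty-aware cost mentioned in the abstract.

import networkx as nx

def consensus_parse(head_votes, n_tokens):
    """Hamming-cost MBR decoding over an ensemble: score each arc (h, m)
    by its vote share, then return the maximum spanning arborescence.
    head_votes[k][m-1] is parser k's predicted head for token m (0 = root)."""
    K = len(head_votes)
    G = nx.DiGraph()
    for m in range(1, n_tokens + 1):
        for k in range(K):
            h = head_votes[k][m - 1]
            prev = G.get_edge_data(h, m, default={"weight": 0.0})["weight"]
            G.add_edge(h, m, weight=prev + 1.0 / K)  # arc posterior ~ vote share
    tree = nx.maximum_spanning_arborescence(G)  # Chu-Liu/Edmonds algorithm
    return {m: h for h, m in tree.edges()}

def distillation_cost(cand_heads, gold_heads, vote_share):
    """Hypothetical form of an uncertainty-aware cost (not the paper's exact
    definition): a wrong attachment is penalized less when the ensemble also
    favored it, so confident ensemble errors are softened in the hinge loss."""
    return sum(1.0 - vote_share.get((cand_heads[m], m), 0.0)
               for m in gold_heads if cand_heads[m] != gold_heads[m])

# Toy example: three "parsers" disagree only on token 1's head.
votes = [[2, 0, 2], [2, 0, 2], [3, 0, 2]]
print(consensus_parse(votes, 3))  # majority arcs win: 1 -> 2, 2 -> root(0), 3 -> 2

In this framing, the structured hinge objective would compare the model's score for the gold tree against the highest-scoring competitor augmented by such a cost, which is tractable with first-order arc factorization, unlike the cross-entropy over all trees that standard distillation would require.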
Description
Paper presented at the 2016 Conference on Empirical Methods in Natural Language Processing, held November 1-5, 2016, in Austin, Texas.