Description:
Paper presented at the 2016 Conference of the North American Chapter of the Association for Computational Linguistics, held in San Diego (CA, USA), June 12–17, 2016.
Abstract:
We introduce recurrent neural network grammars, probabilistic models of sentences with explicit phrase structure. We explain efficient inference procedures that allow application to both parsing and language modeling. Experiments show that they provide better parsing in English than any single previously published supervised generative model and better language modeling than state-of-the-art sequential RNNs in English and Chinese.