Multimodal semantic learning from child-directed input

Citation

  • Lazaridou A, Chrupała G, Fernández R, Baroni M. Multimodal semantic learning from child-directed input. In: Knight K, Nenkova A, Rambow O, editors. Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies; 2016 Jun 12-17; San Diego, California. Stroudsburg (PA): Association for Computational Linguistics; 2016. p. 387–92. DOI: 10.18653/v1/N16-1043


Description

  • Abstract

    Children learn the meaning of words by being exposed to perceptually rich situations (linguistic discourse, visual scenes, etc.). Current computational learning models typically simulate these rich situations through impoverished symbolic approximations. In this work, we present a distributed word learning model that operates on child-directed speech paired with realistic visual scenes. The model integrates linguistic and extra-linguistic information (visual and social cues), handles referential uncertainty, and correctly learns to associate words with objects, even in cases of limited linguistic exposure.
  • Description

    Paper presented at the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, held June 12-17, 2016, in San Diego, California.
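The abstract describes a model that links words in child-directed speech to objects in accompanying visual scenes while coping with referential uncertainty (several candidate referents per utterance). The sketch below is a minimal toy illustration of that general idea, cross-situational word-object association over visual feature vectors. It is not the authors' model and ignores the social cues they use; the vocabulary, feature dimensionality, scene construction, and update rule are all assumptions made for the example.

    # Toy cross-situational word-object learner under referential uncertainty.
    # NOT the paper's model: vocabulary, dimensions, and learning rule are
    # illustrative assumptions only.
    import numpy as np

    rng = np.random.default_rng(0)

    VOCAB = ["ball", "dog", "cup"]
    DIM = 8  # dimensionality of the toy visual feature space

    # Hypothetical prototype visual vectors, one per object category.
    prototypes = {obj: rng.normal(size=DIM) for obj in VOCAB}

    def make_scene(target):
        """A scene holds the referent plus one distractor (referential uncertainty)."""
        distractor = rng.choice([o for o in VOCAB if o != target])
        # Each object is observed as a noisy version of its prototype.
        return [(o, prototypes[o] + 0.1 * rng.normal(size=DIM))
                for o in (target, distractor)]

    # Word embeddings, learned so each word scores highest on its own referent.
    word_vecs = {w: 0.01 * rng.normal(size=DIM) for w in VOCAB}
    lr = 0.1

    for step in range(2000):
        word = rng.choice(VOCAB)     # one word of "child-directed" input
        scene = make_scene(word)     # paired visual scene with candidate referents
        # Soft attention over candidates: the learner is not told the referent.
        scores = np.array([word_vecs[word] @ vec for _, vec in scene])
        attn = np.exp(scores - scores.max())
        attn /= attn.sum()
        # Move the word vector toward the attention-weighted scene representation.
        target_vec = sum(a * vec for a, (_, vec) in zip(attn, scene))
        word_vecs[word] += lr * (target_vec - word_vecs[word])

    # After training, each word should retrieve its own object prototype best.
    for w in VOCAB:
        sims = {o: word_vecs[w] @ prototypes[o] for o in VOCAB}
        print(w, "->", max(sims, key=sims.get))

Because each word co-occurs consistently with its own object but only sporadically with distractors, the attention-weighted updates gradually pull each word vector toward the correct prototype, mirroring the qualitative claim that words become associated with the right objects despite referential uncertainty.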