Comparatives, quantifiers, proportions: a multi-task model for the learning of quantities from vision
- dc.contributor.author Sorodoc, Ionut-Teodor
- dc.contributor.author Pezzelle, Sandro
- dc.contributor.author Bernardi, Raffaella
- dc.date.accessioned 2019-10-16T07:36:47Z
- dc.date.available 2019-10-16T07:36:47Z
- dc.date.issued 2018
- dc.description Paper presented at the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2018), held June 1-6, 2018 in New Orleans, United States of America.
- dc.description.abstract The present work investigates whether different quantification mechanisms (set comparison, vague quantification, and proportional estimation) can be jointly learned from visual scenes by a multi-task computational model. The motivation is that, in humans, these processes underlie the same cognitive, non-symbolic ability, which allows an automatic estimation and comparison of set magnitudes. We show that when information about lower-complexity tasks is available, the higher-level proportional task becomes more accurate than when performed in isolation. Moreover, the multi-task model is able to generalize to unseen combinations of target/non-target objects. Consistent with behavioral evidence showing the interference of absolute number in the proportional task, the multi-task model no longer works when asked to provide the number of target objects in the scene.
- dc.description.sponsorship This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 715154).
- dc.format.mimetype application/pdf
- dc.identifier.citation Pezzelle S, Sorodoc IT, Bernardi R. Comparatives, quantifiers, proportions: a multi-task model for the learning of quantities from vision. In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies; 2018 Jun 1-6; New Orleans, United States of America. Stroudsburg (PA): ACL; 2018. p. 419-30.
- dc.identifier.uri http://hdl.handle.net/10230/42451
- dc.language.iso eng
- dc.publisher ACL (Association for Computational Linguistics)
- dc.relation.ispartof Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies; 2018 Jun 1-6; New Orleans, United States of America. Stroudsburg (PA): ACL; 2018. p. 419-30.
- dc.relation.projectID info:eu-repo/grantAgreement/EC/H2020/715154
- dc.rights © ACL, Creative Commons Attribution 4.0 License
- dc.rights.accessRights info:eu-repo/semantics/openAccess
- dc.rights.uri http://creativecommons.org/licenses/by/4.0/
- dc.subject.keyword Deep learning
- dc.subject.keyword Language and vision
- dc.subject.keyword Quantifiers
- dc.subject.keyword Computational semantics
- dc.subject.keyword Computational linguistics
- dc.title Comparatives, quantifiers, proportions: a multi-task model for the learning of quantities from vision
- dc.type info:eu-repo/semantics/conferenceObject
- dc.type.version info:eu-repo/semantics/publishedVersion
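To make the abstract's setup concrete: below is a minimal, hypothetical sketch of a shared-encoder multi-task model for the three quantification tasks (set comparison, vague quantification, proportional estimation). The use of PyTorch, the layer sizes, and the label-set sizes (3 comparison outcomes, 4 quantifiers, 17 proportion bins) are illustrative assumptions, not the paper's actual architecture.

```python
# Illustrative multi-task sketch: one shared encoder over visual features,
# three task-specific classification heads. All sizes are assumptions.
import torch
import torch.nn as nn

class MultiTaskQuantifier(nn.Module):
    def __init__(self, feat_dim=4096, hidden=512):
        super().__init__()
        # Shared encoder over precomputed scene features (hypothetical dims)
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Task heads: set comparison (fewer/equal/more), vague quantification
        # (e.g. none/few/most/all), proportional estimation (proportion bins)
        self.comparison = nn.Linear(hidden, 3)
        self.quantifier = nn.Linear(hidden, 4)
        self.proportion = nn.Linear(hidden, 17)

    def forward(self, x):
        h = self.encoder(x)
        return self.comparison(h), self.quantifier(h), self.proportion(h)

# Joint training sums the per-task cross-entropy losses, so the shared
# encoder receives gradients from all three quantification tasks at once.
model = MultiTaskQuantifier()
x = torch.randn(8, 4096)  # dummy batch of scene feature vectors
logits = model(x)
targets = [torch.randint(0, n, (8,)) for n in (3, 4, 17)]
loss = sum(nn.functional.cross_entropy(l, t) for l, t in zip(logits, targets))
loss.backward()
```

Because the encoder is shared, supervision from the lower-complexity tasks shapes the representation used by the proportional head, which is one plausible way to realize the joint-learning benefit the abstract reports.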