Comparatives, quantifiers, proportions: a multi-task model for the learning of quantities from vision

dc.contributor.author  Pezzelle, Sandro
dc.contributor.author  Sorodoc, Ionut-Teodor
dc.contributor.author  Bernardi, Raffaella
dc.date.accessioned  2019-10-16T07:36:47Z
dc.date.available  2019-10-16T07:36:47Z
dc.date.issued  2018
dc.description  Paper presented at the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2018), held June 1-6, 2018, in New Orleans, United States of America.
dc.description.abstract  The present work investigates whether different quantification mechanisms (set comparison, vague quantification, and proportional estimation) can be jointly learned from visual scenes by a multi-task computational model. The motivation is that, in humans, these processes underlie the same cognitive, non-symbolic ability, which allows an automatic estimation and comparison of set magnitudes. We show that when information about lower-complexity tasks is available, the higher-level proportional task becomes more accurate than when performed in isolation. Moreover, the multi-task model is able to generalize to unseen combinations of target/non-target objects. Consistent with behavioral evidence showing the interference of absolute number in the proportional task, the multi-task model no longer works when asked to provide the number of target objects in the scene.
dc.description.sponsorship  This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 715154).
dc.format.mimetype  application/pdf
dc.identifier.citation  Pezzelle S, Sorodoc IT, Bernardi R. Comparatives, quantifiers, proportions: a multi-task model for the learning of quantities from vision. In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies; 2018 Jun 1-6; New Orleans, United States of America. Stroudsburg (PA): ACL; 2018. p. 419-30.
dc.identifier.uri  http://hdl.handle.net/10230/42451
dc.language.iso  eng
dc.publisher  ACL (Association for Computational Linguistics)
dc.relation.ispartof  Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies; 2018 Jun 1-6; New Orleans, United States of America. Stroudsburg (PA): ACL; 2018. p. 419-30.
dc.relation.projectID  info:eu-repo/grantAgreement/EC/H2020/715154
dc.rights  © ACL, Creative Commons Attribution 4.0 License
dc.rights.accessRights  info:eu-repo/semantics/openAccess
dc.rights.uri  http://creativecommons.org/licenses/by/4.0/
dc.subject.keyword  Deep learning
dc.subject.keyword  Language and vision
dc.subject.keyword  Quantifiers
dc.subject.keyword  Computational semantics
dc.subject.keyword  Computational linguistics
dc.title  Comparatives, quantifiers, proportions: a multi-task model for the learning of quantities from vision
dc.type  info:eu-repo/semantics/conferenceObject
dc.type.version  info:eu-repo/semantics/publishedVersion
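The abstract above describes a multi-task network that learns set comparison, vague quantification, and proportional estimation from the same visual input, with the lower-complexity tasks improving the proportional one. As a rough illustration of that setup only, here is a minimal PyTorch sketch of a shared encoder feeding one classification head per task, trained with a summed loss; the feature dimension, layer sizes, class counts, and all names are assumptions for illustration, not the architecture reported in the paper.

```python
# Minimal multi-task sketch (illustrative only; dimensions, class counts,
# and the use of PyTorch are assumptions, not the authors' architecture).
import torch
import torch.nn as nn

class MultiTaskQuantifier(nn.Module):
    def __init__(self, feat_dim=4096, hidden=512):
        super().__init__()
        # Shared visual encoder: all three tasks read the same representation.
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        # Task-specific heads: set comparison (fewer/same/more),
        # vague quantification (e.g. none ... all), proportional estimation.
        self.comparison_head = nn.Linear(hidden, 3)
        self.quantifier_head = nn.Linear(hidden, 9)
        self.proportion_head = nn.Linear(hidden, 17)

    def forward(self, visual_feats):
        shared = self.encoder(visual_feats)
        return (self.comparison_head(shared),
                self.quantifier_head(shared),
                self.proportion_head(shared))

# Joint training step: summing the per-task cross-entropy losses lets the
# lower-complexity heads shape the shared representation that the
# higher-level proportional head relies on.
model = MultiTaskQuantifier()
feats = torch.randn(8, 4096)            # batch of image feature vectors
comp_y = torch.randint(0, 3, (8,))      # set-comparison labels
quant_y = torch.randint(0, 9, (8,))     # quantifier labels
prop_y = torch.randint(0, 17, (8,))     # proportion labels
comp_out, quant_out, prop_out = model(feats)
loss = (nn.functional.cross_entropy(comp_out, comp_y)
        + nn.functional.cross_entropy(quant_out, quant_y)
        + nn.functional.cross_entropy(prop_out, prop_y))
loss.backward()
```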

Files

Original bundle

Name: sorodoc_naacl18_comparatives.pdf
Size: 1.56 MB
Format: Adobe Portable Document Format
