Welcome to the UPF Digital Repository

Modeling human annotation errors to design bias-aware systems for social stream processing

Show simple item record

dc.contributor.author Pandey, Rahul
dc.contributor.author Castillo, Carlos
dc.contributor.author Purohit, Hemant
dc.date.accessioned 2021-05-19T07:47:33Z
dc.date.available 2021-05-19T07:47:33Z
dc.date.issued 2019
dc.identifier.citation Pandey R, Castillo C, Purohit H. Modeling human annotation errors to design bias-aware systems for social stream processing. In: Spezzano F, Chen W, Xiao X, editors. ASONAM '19: 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining; 2019 Aug 27-30; Vancouver, Canada. New York: ACM; 2019. p. 374-77. DOI: 10.1145/3341161.3342931
dc.identifier.uri http://hdl.handle.net/10230/47599
dc.description Paper presented at ASONAM '19: 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, held August 27-30, 2019, in Vancouver, Canada.
dc.description.abstract High-quality human annotations are necessary to create effective machine learning systems for social media. Low-quality human annotations indirectly contribute to the creation of inaccurate or biased learning systems. We show that human annotation quality depends on the ordering of instances shown to annotators (referred to as the 'annotation schedule') and can be improved by local changes in the instance ordering provided to the annotators, yielding a more accurate annotation of the data stream for efficient real-time social media analytics. We propose an error-mitigating active learning algorithm that is robust with respect to some cases of human error when deciding an annotation schedule. We validate the human error model and evaluate the proposed algorithm against strong baselines through experiments on classification tasks of relevant social media posts during crises. According to these experiments, considering the order in which data instances are presented to human annotators leads both to an increase in accuracy for machine learning and to awareness of some potential biases in human learning that may affect the automated classifier.
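
A minimal sketch of the idea summarized in the abstract, under toy assumptions: an active-learning loop selects the most uncertain unlabeled instances and then locally reorders them (the 'annotation schedule') before they are shown to a human annotator. The nearest-centroid model, the margin-based uncertainty score, and the label-interleaving heuristic in build_schedule are all illustrative assumptions, not the paper's actual algorithm or data.

import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for a small labeled seed set and an unlabeled
# stream of social media posts (5 numeric features each).
X_seed = rng.normal(size=(20, 5))
y_seed = (X_seed[:, 0] > 0).astype(int)
X_pool = rng.normal(size=(200, 5))

def fit_centroids(X, y):
    # Nearest-centroid classifier: one mean vector per class, kept
    # deliberately simple so the sketch has no external dependencies.
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict_proba(centroids, X):
    # Softmax over negative distances to the class centroids.
    d = np.stack([np.linalg.norm(X - c, axis=1) for c in centroids], axis=1)
    e = np.exp(-d)
    return e / e.sum(axis=1, keepdims=True)

def margin_uncertainty(probs):
    # A small margin between the two class probabilities means a less
    # certain prediction, so the instance is more valuable to label.
    p = np.sort(probs, axis=1)
    return 1.0 - (p[:, 1] - p[:, 0])

def build_schedule(batch, pred_labels):
    # Hypothetical error-mitigating reordering: interleave instances with
    # different predicted labels so the annotator never sees a long run of
    # same-label items, one plausible source of order-induced bias.
    zeros = [i for i in batch if pred_labels[i] == 0]
    ones = [i for i in batch if pred_labels[i] == 1]
    schedule = []
    while zeros or ones:
        if zeros:
            schedule.append(zeros.pop())
        if ones:
            schedule.append(ones.pop())
    return schedule

# One active-learning round: pick the 10 most uncertain pool instances,
# then reorder them locally before sending them to the human annotator.
centroids = fit_centroids(X_seed, y_seed)
probs = predict_proba(centroids, X_pool)
batch = np.argsort(-margin_uncertainty(probs))[:10].tolist()
schedule = build_schedule(batch, probs.argmax(axis=1))
print("annotation schedule (pool indices):", schedule)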
dc.description.sponsorship Purohit thanks U.S. NSF grant awards 1815459 & 1657379 and Castillo thanks La Caixa project (LCF/PR/PR16/11110009) for partial support.
dc.format.mimetype application/pdf
dc.language.iso eng
dc.publisher ACM Association for Computing Machinery
dc.relation.ispartof Spezzano F, Chen W, Xiao X, editors. ASONAM '19: 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining; 2019 Aug 27-30; Vancouver, Canada. New York: ACM; 2019. p. 374-77
dc.rights © 2019 Association for Computing Machinery
dc.title Modeling human annotation errors to design bias-aware systems for social stream processing
dc.type info:eu-repo/semantics/conferenceObject
dc.identifier.doi http://dx.doi.org/10.1145/3341161.3342931
dc.subject.keyword Human-centered computing
dc.subject.keyword Human bias
dc.subject.keyword Active learning
dc.subject.keyword Annotation schedule
dc.subject.keyword Human-AI collaboration
dc.rights.accessRights info:eu-repo/semantics/openAccess
dc.type.version info:eu-repo/semantics/acceptedVersion
