Poisoning attacks on algorithmic fairness

  • dc.contributor.author Solans, David
  • dc.contributor.author Biggio, Battista
  • dc.contributor.author Castillo, Carlos
  • dc.date.accessioned 2021-05-20T08:35:52Z
  • dc.date.available 2021-05-20T08:35:52Z
  • dc.date.issued 2020
  • dc.description Paper presented at ECML PKDD 2020: Machine Learning and Knowledge Discovery in Databases, held September 14-18, 2020, in Ghent, Belgium.
  • dc.description.abstract Research in adversarial machine learning has shown how the performance of machine learning models can be seriously compromised by injecting even a small fraction of poisoning points into the training data. While the effects on model accuracy of such poisoning attacks have been widely studied, their potential effects on other model performance metrics remain to be evaluated. In this work, we introduce an optimization framework for poisoning attacks against algorithmic fairness, and develop a gradient-based poisoning attack aimed at introducing classification disparities among different groups in the data. We empirically show that our attack is effective not only in the white-box setting, in which the attacker has full access to the target model, but also in a more challenging black-box scenario in which the attacks are optimized against a substitute model and then transferred to the target model. We believe that our findings pave the way towards the definition of an entirely novel set of adversarial attacks targeting algorithmic fairness in different scenarios, and that investigating such vulnerabilities will help design more robust algorithms and countermeasures in the future. (An illustrative sketch of such an attack follows this record.)
  • dc.description.sponsorship This research was supported by the European Commission through the ALOHA H2020 project. We also wish to acknowledge the usefulness of the secml library [17] for running the experiments in this paper. C. Castillo thanks La Caixa project LCF/PR/PR16/11110009 for partial support. B. Biggio acknowledges that this work has been partly funded by BMK, BMDW, and the Province of Upper Austria in the frame of the COMET Programme managed by FFG in the COMET Module S3AI.
  • dc.format.mimetype application/pdf
  • dc.identifier.citation Solans D, Biggio B, Castillo C. Poisoning attacks on algorithmic fairness. In: Hutter F, Kersting K, Lijffijt J, Valera I, editors. ECML PKDD 2020: Machine Learning and Knowledge Discovery in Databases; 2020 Sep 14-18; Ghent, Belgium. Cham: Springer; 2020. p. 162-77. (LNCS; no. 12457). DOI: 10.1007/978-3-030-67658-2_10
  • dc.identifier.doi http://dx.doi.org/10.1007/978-3-030-67658-2_10
  • dc.identifier.uri http://hdl.handle.net/10230/47626
  • dc.language.iso eng
  • dc.publisher Springer
  • dc.relation.ispartof Hutter F, Kersting K, Lijffijt J, Valera I, editors. ECML PKDD 2020: Machine Learning and Knowledge Discovery in Databases; 2020 Sep 14-18; Ghent, Belgium. Cham: Springer; 2020. p. 162-77. (LNCS; no. 12457)
  • dc.relation.projectID info:eu-repo/grantAgreement/EC/H2020/780788
  • dc.rights © Springer. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-030-67658-2_10
  • dc.rights.accessRights info:eu-repo/semantics/openAccess
  • dc.subject.keyword Algorithmic discrimination
  • dc.subject.keyword Algorithmic fairness
  • dc.subject.keyword Poisoning attacks
  • dc.subject.keyword Adversarial machine learning
  • dc.subject.keyword Machine learning security
  • dc.title Poisoning attacks on algorithmic fairness
  • dc.type info:eu-repo/semantics/conferenceObject
  • dc.type.version info:eu-repo/semantics/acceptedVersion
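
The abstract above outlines a gradient-based poisoning attack that injects a small set of training points to widen the classification gap between groups. As a rough illustration only, the following Python sketch (hypothetical code, not the authors' implementation; the paper solves a bilevel optimization with analytic gradients, which this toy replaces with a crude finite-difference approximation, and all names and parameters here are assumptions) poisons a small logistic-regression victim to increase the mean predicted-score gap between two synthetic groups in the white-box setting.

    # Illustrative white-box fairness-poisoning sketch (hypothetical,
    # NOT the paper's method): finite-difference ascent on a group
    # disparity objective against a toy logistic-regression victim.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_logreg(X, y, lam=0.1, lr=0.5, iters=300):
        # Fit L2-regularized logistic regression by gradient descent.
        w = np.zeros(X.shape[1])
        for _ in range(iters):
            p = sigmoid(X @ w)
            w -= lr * (X.T @ (p - y) / len(y) + lam * w)
        return w

    def disparity(w, X, group):
        # Attacker objective: mean predicted-score gap between groups.
        s = sigmoid(X @ w)
        return s[group == 0].mean() - s[group == 1].mean()

    # Synthetic training data: two groups with shifted feature means.
    n = 400
    group = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, 2)) + 0.5 * group[:, None]
    y = (X[:, 0] + X[:, 1] > 0.5).astype(float)

    print("disparity before:", disparity(train_logreg(X, y), X, group))

    # Poison set: features are optimized, labels fixed to the positive
    # class; the victim is retrained on clean + poison data each time.
    n_poison, eps, step = 20, 1e-3, 0.5
    Xp = rng.normal(size=(n_poison, 2))
    yp = np.ones(n_poison)

    def attacked_disparity(Xp):
        w = train_logreg(np.vstack([X, Xp]), np.concatenate([y, yp]))
        return disparity(w, X, group)   # measured on clean points only

    for _ in range(10):
        base = attacked_disparity(Xp)
        grad = np.zeros_like(Xp)
        for i in range(n_poison):          # finite-difference gradient
            for j in range(Xp.shape[1]):   # w.r.t. poison features
                Xp2 = Xp.copy()
                Xp2[i, j] += eps
                grad[i, j] = (attacked_disparity(Xp2) - base) / eps
        Xp += step * grad                  # ascend the disparity

    print("disparity after: ", attacked_disparity(Xp))

Each finite-difference probe retrains the victim from scratch, which is exactly the cost the paper's analytic gradient formulation avoids; the black-box variant described in the abstract would instead run this loop against a substitute model and transfer the resulting poison points to the target.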