Image captioning for diagram geometry specification
- dc.contributor.author Giorgi, Matteo
- dc.date.accessioned 2025-10-20T13:26:53Z
- dc.date.available 2025-10-20T13:26:53Z
- dc.date.issued 2025
- dc.description Master's thesis of the Master in Intelligent Interactive Systems
- dc.description Supervisor: Carlos Castillo
- dc.description.abstract AI hiring tools promise objectivity but may deliver bias at scale. While algorithmic resume screening outperforms manual review by processing thousands of applications daily, these systems can perpetuate gender discrimination through patterns learned from the data. This research challenges current data protection practices, arguing that anonymization is insufficient against AI inference capabilities, and proposes an alternative approach: adversarial debiasing. Our three-phase experimental pipeline investigated whether language models can detect gender from anonymized resumes, the extent of gender bias in ICT classification, and the effectiveness of adversarial debiasing. Using the FINDHR dataset of donated resumes with protected attributes and the LiveCareer dataset for pre-training, we analyzed gender predictability from anonymized data, implemented transfer learning, and then applied adversarial training with a gradient reversal architecture to enforce gender uninformativeness in ICT features. This research makes two primary contributions to fair AI in algorithmic hiring. First, we provide empirical evidence that standard anonymization fails against AI inference: DistilBERT achieved 63.86% gender detection accuracy on anonymized resumes by exploiting systematic linguistic patterns in professional self-presentation. Second, we demonstrate the effectiveness of adversarial debiasing, which achieved near-perfect gender uninformativeness, decreasing predictability from 63.10% to 51.19% (effectively random) while preserving 97.6% of ICT classification performance. Optimizing the lambda parameter enabled transparent control over the fairness-utility trade-off. These results have strong regulatory and industry implications. Current anonymization standards appear inadequate against AI pattern detection, demanding updated privacy protection frameworks. Our adversarial debiasing approach embeds fairness constraints within the feature learning process, addressing proxy discrimination at its source while maintaining operational utility.
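The gradient reversal mechanism named in the abstract can be illustrated with a minimal sketch. All numeric values, the toy linear encoder, and the function name `gradient_reversal_backward` below are illustrative assumptions, not details taken from the thesis; the point is only the sign flip that makes the shared encoder unlearn gender information while still serving the task head.

```python
def gradient_reversal_backward(grad_from_adversary, lam):
    """Backward pass of a gradient reversal layer: the forward pass is
    the identity, but the adversary's gradient is multiplied by -lambda
    before it reaches the shared feature extractor."""
    return -lam * grad_from_adversary

# Toy shared encoder: features = w * x (a single scalar weight)
w, x, lam = 0.5, 2.0, 1.0
features = w * x  # forward: the reversal layer is the identity

# Illustrative upstream gradients d(loss)/d(features) from each head
grad_task = 0.3   # pushes features to stay informative for ICT labels
grad_adv = 0.8    # pushes features to stay informative for gender

# The encoder's weight gradient combines the task gradient with the
# *reversed* adversary gradient, so training removes gender signal.
grad_w = (grad_task + gradient_reversal_backward(grad_adv, lam)) * x
```

Raising the hypothetical `lam` strengthens the debiasing pressure at some cost to task accuracy, which matches the fairness-utility trade-off the abstract attributes to lambda tuning.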
- dc.identifier.uri http://hdl.handle.net/10230/71577
- dc.language.iso eng
- dc.rights CC Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)
- dc.rights.accessRights info:eu-repo/semantics/openAccess
- dc.rights.uri https://creativecommons.org/licenses/by-nc-sa/4.0/
- dc.subject.other Machine learning
- dc.title Image captioning for diagram geometry specification
- dc.type info:eu-repo/semantics/masterThesis
