Image captioning for diagram geometry specification

Description

  • Abstract

    AI hiring tools promise objectivity but may deliver bias at scale. While algorithmic resume screening outperforms manual review by processing thousands of applications daily, these systems can perpetuate gender discrimination through patterns learned from the data. This research challenges current data protection practices, arguing that anonymization is insufficient against AI inference capabilities, and proposes an alternative approach: adversarial debiasing. Our three-phase experimental pipeline investigated whether language models can detect gender from anonymized resumes, the extent of gender bias in ICT classification, and the effectiveness of adversarial debiasing. Using the FINDHR dataset of donated resumes with protected attributes and the LiveCareer dataset for pre-training, we analyzed gender predictability from anonymized data, implemented transfer learning, and then applied adversarial training with a gradient reversal architecture to enforce gender uninformativeness in ICT features.

    This research makes two primary contributions to fair AI in algorithmic hiring. First, we provide empirical evidence that standard anonymization fails against AI inference: DistilBERT achieved 63.86% gender detection accuracy on anonymized resumes by exploiting systematic linguistic patterns in professional self-presentation. Second, we demonstrate the effectiveness of adversarial debiasing, achieving near-perfect gender uninformativeness by reducing predictability from 63.10% to 51.19% (effectively chance level) while preserving 97.6% of ICT classification performance. Optimizing the lambda parameter enabled transparent control over the fairness-utility trade-off.

    These results have strong regulatory and industry implications. Current anonymization standards appear inadequate against AI pattern detection, demanding updated privacy protection frameworks. Our adversarial debiasing approach embeds fairness constraints within the feature learning process, addressing proxy discrimination at its source while maintaining operational utility.
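    The gradient reversal mechanism mentioned above can be illustrated with a minimal NumPy sketch (this is an illustration of the general technique, not the thesis' actual implementation; the toy gradient values and variable names are assumptions). In the forward pass the reversal layer is the identity; in the backward pass it multiplies the adversary's gradient by -lambda, so the shared encoder is pushed to help the task head while actively hurting the gender adversary.

```python
import numpy as np

def grad_reverse(grad, lam):
    """Gradient reversal layer (backward pass only): identity in the
    forward pass, multiply the incoming gradient by -lam going back."""
    return -lam * grad

# Hypothetical gradients flowing back into a shared encoder
task_grad = np.array([0.2, -0.1])   # from the ICT-classification head
adv_grad = np.array([0.3, 0.4])     # from the gender-adversary head
lam = 1.0                           # fairness-utility trade-off knob

# Encoder update direction: follow the task, oppose the adversary
encoder_grad = task_grad + grad_reverse(adv_grad, lam)
print(encoder_grad)  # → [-0.1 -0.5]
```

Raising lambda weights the reversed adversary gradient more heavily, trading task performance for gender uninformativeness; this is the trade-off the abstract's lambda optimization controls.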
  • Description

    Master's thesis for the Master in Intelligent Interactive Systems
    Supervisor: Carlos Castillo