Face de-identification using diffusion models
Abstract
In an era of increasing surveillance and data sharing, protecting individuals' identities in visual data has become a critical concern. This thesis addresses the problem of face de-identification using diffusion models, a powerful class of generative models capable of synthesizing high-fidelity images. We propose two novel approaches that modify the denoising trajectory of diffusion models through two guidance regimes, one identity-independent and one identity-dependent, to generate de-identified facial images while preserving essential attributes such as expression and gender. Our method leverages pre-trained identity recognition models and preconditioned guidance to balance visual realism and quality against privacy. We evaluate our approach on four benchmark datasets (RaFD, XM2VTS, LFW, and CALFW) using metrics such as AUC, F1 score, Fréchet distance, and mean squared error. The results demonstrate performance comparable to state-of-the-art techniques in both identity-removal effectiveness and attribute preservation. Additionally, we analyze the behavior of the diffusion process under different guidance strategies, providing insight into the trade-offs between identity concealment and image quality.
Description
Master's thesis for the Erasmus Mundus Joint Master in Artificial Intelligence (EMAI)
Mentor: Asst. Prof. Dr. Blaz Meden
Co-mentor: Asst. Prof. Dr. Ziga Emersic
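The identity-dependent guidance mentioned in the abstract, in which the denoising trajectory is nudged by the gradient of an identity-similarity score, can be illustrated with a minimal toy sketch. Everything below is an assumption made for illustration: the "noise predictor" is a stand-in, vectors replace images, and cosine similarity to a fixed embedding replaces a real face-recognition model; none of it reproduces the thesis's actual models or guidance scales.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity, used here as a toy identity score.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def cosine_grad(x, e):
    # Gradient of cosine(x, e) with respect to x.
    nx, ne = np.linalg.norm(x), np.linalg.norm(e)
    c = x @ e / (nx * ne + 1e-8)
    return e / (nx * ne + 1e-8) - c * x / (nx * nx + 1e-8)

def denoise(x_t, target, steps=50, guidance_scale=0.0, id_emb=None):
    """Toy denoising loop: each step removes a fraction of the predicted
    noise; with guidance enabled, it also steps *down* the gradient of the
    identity score, steering the trajectory away from the identity."""
    x = x_t.copy()
    for _ in range(steps):
        eps_hat = x - target              # stand-in for a trained noise predictor
        x = x - 0.1 * eps_hat             # plain denoising update
        if id_emb is not None and guidance_scale > 0:
            x = x - guidance_scale * cosine_grad(x, id_emb)
    return x
```

With guidance enabled, the recovered vector ends up less similar to the identity embedding than the unguided result, which is the qualitative trade-off between identity concealment and fidelity that the abstract describes.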
