An analysis of generative methods for multiple image inpainting
- dc.contributor.author Ballester, Coloma
- dc.contributor.author Bugeau, Aurélie
- dc.contributor.author Hurault, Samuel
- dc.contributor.author Parisotto, Simone
- dc.contributor.author Vitoria, Patricia
- dc.date.accessioned 2025-03-24T14:23:12Z
- dc.date.available 2025-03-24T14:23:12Z
- dc.date.issued 2023
- dc.description.abstract Image inpainting refers to the restoration of an image with missing regions in a way that is not detectable by the observer. The inpainting regions can be of any size and shape. This is an ill-posed inverse problem that does not have a unique solution. In this work, we focus on learning-based image completion methods for multiple and diverse inpainting, whose goal is to provide a set of distinct solutions for a given damaged image. These methods capitalize on the probabilistic nature of certain deep generative models to sample various solutions that coherently restore the missing content. Throughout the chapter, we analyze the underlying theory and review recent proposals for multiple inpainting. To investigate the pros and cons of each method, we present quantitative and qualitative comparisons on common datasets, regarding both the quality and the diversity of the set of inpainted solutions. Our analysis allows us to identify the most successful generative strategies in terms of both inpainting quality and inpainting diversity. This task is closely related to learning an accurate probability distribution of images. The challenges that training such a model entails, which depend on the dataset in use, are discussed throughout the analysis.
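To make the sampling idea in the abstract concrete, the following is a minimal, hypothetical sketch of how multiple-inpainting methods draw several latent codes from a prior and decode each into a distinct completion of the masked region. `ToyGenerator`, `sample_inpaintings`, and all shapes and parameters are illustrative stand-ins for a trained generative model, not the specific methods analyzed in the chapter.

```python
# Sketch: diverse inpainting by sampling latent codes from a Gaussian prior.
# A real method would use a trained generator (e.g. a VAE decoder or GAN);
# ToyGenerator below is an untrained placeholder with the same interface.
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Stand-in decoder: maps (masked image, mask, latent z) to a full image."""
    def __init__(self, latent_dim=64, channels=3):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 16 * 16)
        self.conv = nn.Conv2d(channels + 2, channels, kernel_size=3, padding=1)

    def forward(self, masked_img, mask, z):
        b, _, h, w = masked_img.shape
        z_map = self.fc(z).view(b, 1, 16, 16)            # spatialize the code
        z_map = nn.functional.interpolate(z_map, size=(h, w))
        x = torch.cat([masked_img, mask, z_map], dim=1)  # condition on known data
        return torch.sigmoid(self.conv(x))

def sample_inpaintings(generator, img, mask, n_samples=5, latent_dim=64):
    """Return n_samples completions; known pixels kept, holes filled."""
    masked = img * mask  # mask == 1 on known pixels, 0 on the hole
    outs = []
    with torch.no_grad():
        for _ in range(n_samples):
            z = torch.randn(img.shape[0], latent_dim)    # sample from the prior
            gen = generator(masked, mask, z)
            outs.append(mask * img + (1 - mask) * gen)   # composite with input
    return outs

# Usage: one damaged image, five distinct plausible restorations.
img = torch.rand(1, 3, 64, 64)
mask = torch.ones(1, 1, 64, 64)
mask[:, :, 16:48, 16:48] = 0  # square hole
solutions = sample_inpaintings(ToyGenerator(), img, mask)
```

Because each call draws a fresh latent code, the set of outputs differs only inside the hole, which is precisely the diversity that the chapter's quantitative and qualitative comparisons measure.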
- dc.description.sponsorship PV, CB and AB acknowledge the EU Horizon 2020 research and innovation programme NoMADS (Marie Skłodowska-Curie grant agreement No 777826). SP acknowledges the Leverhulme Trust Research Project Grant “Unveiling the invisible: Mathematics for Conservation in Arts and Humanities”. CB and PV also acknowledge partial support from the MICINN/FEDER UE projects, ref. PGC2018-098625-B-I00 and RED2018-102511-T. AB also acknowledges the French Research Agency through the PostProdLEAP project (ANR-19-CE23-0027-01). SH acknowledges the French Ministry of Research through a CDSN grant of ENS Paris-Saclay.
- dc.format.mimetype application/pdf
- dc.identifier.citation Ballester C, Bugeau A, Hurault S, Parisotto S, Vitoria P. An analysis of generative methods for multiple image inpainting. In: Chen K, Schönlieb CB, Tai XC, Younes L, editors. Handbook of mathematical models and algorithms in computer vision and imaging. Cham: Springer; 2023. p. 773–820. DOI:10.1007/978-3-030-98661-2_119
- dc.identifier.isbn 9783030986605
- dc.identifier.uri http://hdl.handle.net/10230/69998
- dc.language.iso eng
- dc.publisher Springer Nature
- dc.relation.ispartof Chen K, Schönlieb CB, Tai XC, Younes L, editors. Handbook of mathematical models and algorithms in computer vision and imaging. Cham: Springer; 2023.
- dc.relation.projectID info:eu-repo/grantAgreement/EC/H2020/777826
- dc.relation.projectID info:eu-repo/grantAgreement/ES/2PE/PGC2018-098625
- dc.relation.projectID info:eu-repo/grantAgreement/ES/2PE/RED2018-102511
- dc.rights © Springer Nature. This is the author's accepted manuscript of: Ballester C, Bugeau A, Hurault S, Parisotto S, Vitoria P. An analysis of generative methods for multiple image inpainting. In: Chen K, Schönlieb CB, Tai XC, Younes L, editors. Handbook of mathematical models and algorithms in computer vision and imaging. Cham: Springer; 2023. p. 773–820. The final version is available online at: http://dx.doi.org/10.1007/978-3-030-98661-2_119
- dc.rights.accessRights info:eu-repo/semantics/openAccess
- dc.subject.keyword Inverse problems
- dc.subject.keyword Inpainting
- dc.subject.keyword Multiple inpainting
- dc.subject.keyword Diverse inpainting
- dc.subject.keyword Deep learning
- dc.subject.keyword Generative methods
- dc.title An analysis of generative methods for multiple image inpainting
- dc.type info:eu-repo/semantics/bookPart
- dc.type.version info:eu-repo/semantics/acceptedVersion