Generative AI models should include detection mechanisms as a condition for public release
- dc.contributor.author Knott, Alistair
- dc.contributor.author Pedreschi, Dino
- dc.contributor.author Chatila, Raja
- dc.contributor.author Chakraborti, Tapabrata
- dc.contributor.author Leavy, Susan
- dc.contributor.author Baeza-Yates, Ricardo
- dc.contributor.author Eyers, David
- dc.contributor.author Trotman, Andrew
- dc.contributor.author Teal, Paul D.
- dc.contributor.author Biecek, Przemyslaw
- dc.contributor.author Russell, Stuart
- dc.contributor.author Bengio, Yoshua
- dc.date.accessioned 2025-04-28T06:17:54Z
- dc.date.available 2025-04-28T06:17:54Z
- dc.date.issued 2023
- dc.description.abstract The new wave of ‘foundation models’—general-purpose generative AI models, for production of text (e.g., ChatGPT) or images (e.g., MidJourney)—represents a dramatic advance in the state of the art for AI. But their use also introduces a range of new risks, which has prompted an ongoing conversation about possible regulatory mechanisms. Here we propose a specific principle that should be incorporated into legislation: that any organization developing a foundation model intended for public use must demonstrate a reliable detection mechanism for the content it generates, as a condition of its public release. The detection mechanism should be made publicly available in a tool that allows users to query, for an arbitrary item of content, whether the item was generated (wholly or partly) by the model. In this paper, we argue that this requirement is technically feasible and would play an important role in reducing certain risks from new AI models in many domains. We also outline a number of options for the tool’s design, and summarize a number of points where further input from policymakers and researchers would be required.
- dc.description.sponsorship We are grateful to the Global Partnership on AI (GPAI) and the Montreal International Center of Expertise in Artificial Intelligence (CEIMIA) for supporting this project. Yoshua Bengio thanks CIFAR and NSERC; Stuart Russell thanks Open Philanthropy Foundation for support of the Center for Human-Compatible AI at UC Berkeley. Dino Pedreschi has been supported by the European Commission under the NextGeneration Programme PE0013 - ‘FAIR - Future Artificial Intelligence Research’ Spoke 1, ‘Human-Centered AI’, and under the EU H2020 ICT-48 Network of Excellence n.952026 ‘Human-AI net’. We would like to thank colleagues at the EU (Lucilla Sioli’s group at DG-CNECT) and the OECD (Sebastian Hallensleben’s group) for useful discussions, and Marcin Betkier and Rebecca Downes for comments on an earlier draft of this article. We also thank the two anonymous reviewers for their useful comments. The views expressed here, and any remaining errors, are of course our own.
- dc.format.mimetype application/pdf
- dc.identifier.citation Knott A, Pedreschi D, Chatila R, Chakraborti T, Leavy S, Baeza-Yates R, et al. Generative AI models should include detection mechanisms as a condition for public release. Ethics Inf Technol. 2023 Dec;25(4):55. DOI: 10.1007/s10676-023-09728-4
- dc.identifier.doi http://dx.doi.org/10.1007/s10676-023-09728-4
- dc.identifier.issn 1388-1957
- dc.identifier.uri http://hdl.handle.net/10230/70210
- dc.language.iso eng
- dc.publisher Springer
- dc.relation.ispartof Ethics and Information Technology. 2023 Dec;25(4):55
- dc.relation.projectID info:eu-repo/grantAgreement/EC/H2020/952026
- dc.rights This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
- dc.rights.accessRights info:eu-repo/semantics/openAccess
- dc.rights.uri http://creativecommons.org/licenses/by/4.0/
- dc.subject.keyword Generative AI
- dc.subject.keyword AI regulation
- dc.subject.keyword AI ethics
- dc.subject.keyword AI social impacts
- dc.subject.keyword Foundation models
- dc.title Generative AI models should include detection mechanisms as a condition for public release
- dc.type info:eu-repo/semantics/article
- dc.type.version info:eu-repo/semantics/publishedVersion