Authors: Knott, Alistair; Pedreschi, Dino; Chatila, Raja; Chakraborti, Tapabrata; Leavy, Susan; Baeza-Yates, Ricardo; Eyers, David; Trotman, Andrew; Teal, Paul D.; Biecek, Przemyslaw; Russell, Stuart; Bengio, Yoshua
Date available: 2025-04-28
Date issued: 2023
Citation: Knott A, Pedreschi D, Chatila R, Chakraborti T, Leavy S, Baeza-Yates R, et al. Generative AI models should include detection mechanisms as a condition for public release. Ethics Inf Technol. 2023 Dec;25(4):55. DOI: 10.1007/s10676-023-09728-4
ISSN: 1388-1957
Handle: http://hdl.handle.net/10230/70210

Title: Generative AI models should include detection mechanisms as a condition for public release

Abstract: The new wave of 'foundation models'—general-purpose generative AI models, for production of text (e.g., ChatGPT) or images (e.g., Midjourney)—represents a dramatic advance in the state of the art for AI. But their use also introduces a range of new risks, which has prompted an ongoing conversation about possible regulatory mechanisms. Here we propose a specific principle that should be incorporated into legislation: that any organization developing a foundation model intended for public use must demonstrate a reliable detection mechanism for the content it generates, as a condition of its public release. The detection mechanism should be made publicly available in a tool that allows users to query, for an arbitrary item of content, whether the item was generated (wholly or partly) by the model. In this paper, we argue that this requirement is technically feasible and would play an important role in reducing certain risks from new AI models in many domains. We also outline a number of options for the tool's design, and summarize a number of points where further input from policymakers and researchers would be required.

Format: application/pdf
Language: English (eng)
Rights: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Document type: Article (info:eu-repo/semantics/article)
Publisher version: http://dx.doi.org/10.1007/s10676-023-09728-4
Keywords: Generative AI; AI regulation; AI ethics; AI social impacts; Foundation models
Access rights: Open access (info:eu-repo/semantics/openAccess)