Transparency (encompassing methodological, financial, and
source-related aspects, as well as the tools employed) is
central to the operations of professional fact-checking
platforms. However, the growing adoption of artificial
intelligence tools in fact-checking introduces new ethical
challenges. This research investigates the extent to which
these platforms believe they should disclose their use of AI and
assesses the current practices on their websites regarding this
technology. The study employs a qualitative methodology,
including semi-structured interviews with professionals from
accredited Spanish verification platforms and content analysis
of these organizations’ websites. The findings indicate that
transparency in AI usage is widely regarded as an ethical
imperative. Nevertheless, the application of this standard often
becomes ambiguous when addressing specific practices and
cases. Many professionals question the necessity of explicitly
disclosing AI usage when the technology merely supports the
verification process and is overseen by human reviewers.
Additionally, a limited understanding of how AI functions
sometimes hinders professionals' ability to identify whether
the tools they employ incorporate AI. The content analysis also reveals
that explicit mentions of AI use on the websites are rare and
that platforms lack open-access manuals or protocols that
outline and regulate their AI practices.