Disclosure of AI-Use in News Production: Effects on Audience Trust and News Credibility
Abstract
The use of AI in newsrooms is growing, raising the question of whether and how it should be disclosed to audiences. We report a preregistered between-subjects online experiment (N = 500) crossing Disclosure (none vs. disclosed) and Task Type (assistive vs. generative), with exploratory variations in wording ("assisted by AI" vs. "written by AI") and placement (byline vs. endnote). Primary outcomes were article trust and news credibility; perceived transparency served as a mediator; preregistered moderators were AI familiarity, media skepticism, political ideology, and topic involvement. Disclosure reliably increased perceived transparency (a-path) but had small negative effects on trust and credibility, with larger penalties when the AI's role was framed as generative. Mediation analyses revealed a positive indirect effect through transparency alongside a negative direct effect of disclosure, yielding small net decreases. First-stage moderation showed larger transparency gains among AI-familiar audiences and smaller gains among media-skeptical ones; second-stage moderation showed that transparency translated less into trust among right-leaning respondents and less into credibility under high topic involvement. The "assisted" wording outperformed "written," and endnote placement was safer overall.
Keywords: AI disclosure; transparency; trust; news credibility; automated journalism; algorithm aversion; wording and labeling; human–AI collaboration; media skepticism; political ideology


