The New York Times has severed ties with a freelancer over AI-assisted plagiarism, highlighting the growing tension over editorial standards as newsrooms increasingly rely on artificial intelligence tools amid concerns over originality and trust.

The latest sign of the uneasy peace between journalism and artificial intelligence came not from a research lab or a product launch, but from a book review. According to The Guardian, The New York Times cut ties with freelance writer Alex Preston after discovering that AI had been used in the drafting of a review that also bore similarities to a Guardian piece on the same title. The Times, the report said, treated the matter as a breach of editorial standards.

The incident matters because it landed just as more journalists have begun speaking openly about using AI in their day-to-day work. The Wall Street Journal recently profiled Fortune business editor Nick Lichtenberg, who has used AI to accelerate his output, while Wired highlighted several prominent reporters who now rely on the tools for editorial tasks, including some writing assistance. That shift suggests a broader normalisation of AI in newsrooms, even if many editors and reporters still regard it with suspicion.

But the Preston case also showed how brittle that acceptance remains. The Wrap reported that he admitted to using AI to help draft the review, while other accounts said the overlap with the Guardian article triggered an internal review at the Times. However the mechanics are described, the message for publishers is the same: a single lapse can quickly harden into a public trust problem, especially when AI is involved in work that depends on originality and attribution.

The fallout has already reached beyond the freelancer himself. Axios reported that union leaders at The New York Times sent management a letter arguing that the paper’s AI standards are vague and inadequate, using the plagiarism episode to press for clearer rules. That wider debate is likely to intensify as media companies push deeper into AI, even as they insist the technology must remain bounded by strict editorial oversight. In that sense, the scandal is less an isolated mistake than a warning about how far newsrooms can go in embracing AI before trust snaps.


Source: Noah Wire Services

Noah Fact Check Pro

The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.

Freshness check

Score: 8

Notes:
The incident involving Alex Preston’s use of AI in drafting a book review for The New York Times was reported by The Guardian on 31 March 2026. The Media Copilot article, dated 21 April 2026, references this event, indicating that the content is relatively fresh. However, the Media Copilot article appears to be a commentary piece rather than a direct news report, which may affect its freshness score.

Quotes check

Score: 7

Notes:
The Media Copilot article includes direct quotes from The Guardian’s reporting, such as Alex Preston’s statement: “I made a serious mistake in using an AI tool on a draft review I had written, and I failed to identify and remove overlapping language from another review that the AI dropped in.” While these quotes are attributed to The Guardian, the Media Copilot article does not provide direct links to the original sources, making independent verification challenging.

Source reliability

Score: 6

Notes:
The Media Copilot article is published on Substack, a self-publishing platform, which raises concerns about the editorial oversight and fact-checking processes behind it. The article references The Guardian and The Wrap, both reputable sources, but the lack of direct links to those sources diminishes its overall reliability.

Plausibility check

Score: 9

Notes:
The reported incident of Alex Preston using AI to draft a book review for The New York Times, which led to the discovery of similarities with a Guardian review, is plausible and aligns with known events. The involvement of AI in content creation and the subsequent issues of plagiarism are consistent with current discussions in the media industry.

Overall assessment

Verdict (FAIL, OPEN, PASS): FAIL

Confidence (LOW, MEDIUM, HIGH): MEDIUM

Summary:
The Media Copilot article presents a commentary on the incident involving Alex Preston’s use of AI in drafting a book review for The New York Times. While the event is plausible and aligns with known facts, the article’s reliance on secondary sources without direct links, its categorisation as a commentary piece, and concerns about the reliability and independence of the source diminish its overall credibility. Given these factors, the content does not meet the standards for factual reporting and is not covered under our indemnity.
