Amanda Hoover’s experiment in outsourcing her own reporting shows how well AI handles the surface-level tasks of journalism and how badly it struggles with the nuanced, human skills at the craft’s core, raising questions about the technology’s evolving role in newsrooms.
In an era when executives warn that artificial intelligence could reshape white-collar work, Amanda Hoover decided to test the claim in the most personal way possible: by trying to outsource her own reporting job to a machine. Goldman Sachs has estimated that AI could expose hundreds of millions of jobs to automation over the next decade, while also creating new work in areas that support the technology. In journalism, the pressure is already visible. A Muck Rack survey published in March found that 82% of journalists now use AI in some part of their workflow, even as concern about unchecked use has grown.
Hoover’s experiment, published by Business Insider, was designed to see how far consumer AI tools could go in reproducing the core tasks of reporting: interviewing sources, shaping a story and filing a readable draft. She used voice-generation software to build an AI agent in her own voice, then set it loose on pre-selected sources to ask questions about AI’s role in journalism. The idea was partly comic and partly unsettling, but it also reflected a wider shift in newsrooms, where AI is increasingly being treated not just as a novelty but as infrastructure for repetitive work.
What Hoover found was that the technology could imitate surface-level reporting far better than the deeper skills that make interviews work. The bot could ask questions and keep a conversation moving, but it struggled with silence, nuance and follow-up. Sources told Hoover afterwards that the agent felt overly flattering and oddly eager to move on, rather than pressing for detail. Ben Colman of Reality Defender described the overall experience as feeling more synthetic than a cloned voice alone would suggest, while Gab Ferree of the communications group Off the Record said the pauses and interruptions that shape human conversation are exactly where AI falls apart.
The limits became even clearer when Hoover fed the interview transcripts back into ChatGPT and asked it to draft an 800-word essay. The result, she wrote, could assemble quotations and produce a coherent structure, but it also relied on stock phrasing, exaggerated transitions and a treatment of source material that sometimes stripped away context. Her editor then reviewed the draft and pushed back. When Hoover’s bot joined a Slack exchange about revisions, it resisted suggestions and argued that the story should remain broad rather than becoming more personal. The moment underlined a central tension: AI could mimic the mechanics of editorial work, but not the human judgment behind it.
That conclusion sits uneasily alongside the optimism found in some industry discussions. Analysts and journalism scholars increasingly describe AI as a tool for handling routine processes, from transcription to summarisation, so that reporters can spend more time on verification, interviews and analysis. But the Atlantic has recently noted growing anxieties about AI-generated language creeping into respected media brands, reinforcing calls for clearer policies and stronger editorial oversight. Hoover’s experiment landed in the middle of that debate, showing both why newsrooms are adopting AI and why many journalists remain wary of letting it move beyond assistance.
For Hoover, the most useful parts of the experiment were not the voice clone or the draft itself, but the transcription and analysis tools that sped up the laborious side of reporting. Even so, she concluded that the hardest parts of journalism remain stubbornly human: earning trust, reading the room, knowing when to pause and pressing for what a source has not yet said. If AI is going to replace reporters, her story suggests, it will need to do more than sound convincing. It will need to think, wait and doubt like one.
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 10
Notes: The article was published on May 1, 2026, and no substantially similar content was found online prior to this date. The narrative appears original and fresh.
Quotes check
Score: 10
Notes: The article includes direct quotes from Amanda Hoover’s AI agent, ‘Amanda Bot’. No identical quotes were found in earlier material, indicating originality. However, the AI-generated nature of these quotes means they cannot be independently verified.
Source reliability
Score: 10
Notes: The article is authored by Amanda Hoover, a Senior Correspondent at Business Insider, a reputable news organisation. Hoover’s previous work has been featured in notable publications such as Morning Brew and WIRED. ([muckrack.com](https://muckrack.com/amanda-hoover?utm_source=openai))
Plausibility check
Score: 9
Notes: The experiment described is plausible, given the increasing integration of AI in journalism. However, the AI-generated quotes cannot be independently verified, which slightly diminishes the overall credibility.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary: The article presents an original and plausible account of an AI experiment conducted by the author. While the content is fresh and authored by a reputable journalist, the AI-generated quotes cannot be independently verified, and the lack of external verification sources slightly diminishes the overall reliability. Therefore, the content passes the fact-check with medium confidence.

