Anthropic’s conversational AI, Claude, has seen an unprecedented surge in user demand since the Pentagon moved to blacklist the company over concerns about military applications, fuelling a broader debate on AI ethics and national security.

Anthropic’s conversational AI, Claude, has seen a sudden spike in users after the Pentagon moved to blacklist the company last week amid a dispute over how its models might be used by the US military. The app shot to the top of Apple’s free app chart in the United States on Saturday, surpassing OpenAI’s ChatGPT, and climbed the Android rankings in both the US and the UK, reflecting a rapid shift in user attention. According to industry download data and reporting, ChatGPT remained strong in several markets even as Claude surged.

The company’s infrastructure struggled to keep pace with the influx. Early on Monday thousands of users reported interruptions to Anthropic’s services, with outage trackers logging a sharp spike in reports before the firm said the issues had been resolved by late morning. Anthropic described the surge in traffic as unprecedented demand for Claude and said the incident was short-lived.

Anthropic also reported that sign-ups and paid subscriptions have jumped since the dispute with the Pentagon began. “Every single day last week was an all time record for Claude sign-ups,” the company said in a statement, and it has promoted features designed to ease migration from rivals, including a memory tool available to paid users that can import prior conversations so new users pick up where they left off.

The confrontation with the defence establishment centres on the limits Anthropic places on military use. Chief executive Dario Amodei has refused to lift prohibitions on using Claude for mass domestic surveillance or for fully autonomous weapons, arguing that both raise unacceptable ethical and constitutional issues. The US Defence Department, while denying any intent to employ AI unlawfully, has said it must preserve the right to use contractor technology for “all lawful purposes,” a position that precipitated the government’s decision to cut ties.

The vacuum created by that break with Anthropic was quickly filled by OpenAI, which reached an agreement with the federal government after talks with Anthropic stalled. OpenAI’s chief executive, Sam Altman, said the company had negotiated explicit limits barring the use of its systems for lethal autonomous weapons or mass surveillance. The contract has since been revised to distinguish more clearly between private personal data and commercially acquired or public information, a change officials say strengthens civil-liberties protections.

The dispute has prompted high-level comment from across the technology and national-security communities. Nvidia’s chief executive characterised the clash as troubling but not catastrophic for the industry, while retired US intelligence and cyber commanders have warned that branding a single American AI firm a supply-chain risk could damage long-running efforts to sustain trust between the Pentagon and Silicon Valley.

As the political fight continues, the episode underscores a wider dilemma for policymakers and companies: whether national security needs or tech firms’ ethical lines should shape the terms under which powerful AI models are developed and deployed. Government directives ordering agencies to stop using Anthropic’s products, and the parallel acceleration of contracts with other vendors, have intensified debate about how to balance operational imperatives with public concern over surveillance and autonomous weapons.


Source: Noah Wire Services

Noah Fact Check Pro

The draft above was created using the information available at the time the story first
emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed
below. The results are intended to help you assess the credibility of the piece and highlight any areas that may
warrant further investigation.

Freshness check

Score:
8

Notes:
The article was published on 2 March 2026, which is within the past week, indicating high freshness. However, the events described have been reported by multiple sources, suggesting some information may be recycled. ([apnews.com](https://apnews.com/article/b72d1894bc842d9acf026df3867bee8a?utm_source=openai))

Quotes check

Score:
7

Notes:
The article includes direct quotes from Anthropic CEO Dario Amodei and other officials. While these quotes are consistent with statements from other sources, their exact origins are not specified, making independent verification challenging. ([apnews.com](https://apnews.com/article/9b28dda41bdb52b6a378fa9fc80b8fda?utm_source=openai))

Source reliability

Score:
9

Notes:
The Guardian is a reputable news organisation known for its investigative journalism. However, the article relies on information from other news outlets, which may affect its originality. ([apnews.com](https://apnews.com/article/b72d1894bc842d9acf026df3867bee8a?utm_source=openai))

Plausibility check

Score:
8

Notes:
The claims about Claude’s surge in popularity following the Pentagon’s actions are plausible and align with reports from other sources. However, the article does not provide specific data or independent verification of these claims. ([apnews.com](https://apnews.com/article/b72d1894bc842d9acf026df3867bee8a?utm_source=openai))

Overall assessment

Verdict (FAIL, OPEN, PASS): PASS

Confidence (LOW, MEDIUM, HIGH): MEDIUM

Summary:
The article is recent and from a reputable source, but it relies on information from other news outlets, which may affect its originality and the independence of its verification. Some claims lack specific data or independent verification, and the exact origins of quotes are not specified, making independent verification challenging. ([apnews.com](https://apnews.com/article/b72d1894bc842d9acf026df3867bee8a?utm_source=openai))
