Recent data from Cloudflare, a major cloud services provider specialising in enhancing website security, reveals a striking reality about the current state of internet traffic: approximately one-third of online activity is generated by bots. These automated programs perform a variety of functions behind the scenes—crawling websites to index content, executing specific tasks, or gathering data to train artificial intelligence (AI) models. While much of this bot traffic is invisible to everyday internet users and does not engage in direct interaction, its sheer volume has significant implications for the authenticity and economy of the web.

The surge in bot activity has sparked widespread concern about the integrity of online spaces. Among the voices expressing apprehension is Sam Altman, CEO of OpenAI, a leading AI company. Commenting on the state of platforms such as Reddit and X (formerly Twitter), Altman lamented the “fake” feeling permeating social media networks, attributing it to the proliferation of AI-driven accounts. Despite standing at the forefront of AI development, and despite the genuine success of OpenAI tools such as Codex, Altman acknowledged that much of the interaction on these platforms now feels artificial, echoing a notion known as the “Dead Internet Theory.” This theory holds that a significant proportion of online content is produced or managed by bots, eroding the presence of genuine human engagement.

Originating around 2021 and gaining traction before the widespread use of large language models (LLMs) such as ChatGPT, the “Dead Internet Theory” frames internet content as increasingly synthetic. Some proponents cast this as part of a broader conspiracy to control human behaviour online, while a more sober analysis points to economic incentives: websites and platforms profit from engagement regardless of quality, so a flood of bot-generated content, however superficial, can be financially advantageous. Data suggests that bot traffic may have surpassed human traffic even before the AI boom, raising questions about the accuracy of detection systems, which sometimes misclassify bot and human behaviour.

Adding complexity to this landscape, Cloudflare recently introduced a tool allowing website owners to block unauthorised AI bot crawlers or charge fees for access when AI firms extract content to train their models. This responds to a shift in which traditional web traffic is declining because AI systems often retrieve content without sending users back to the original sites, harming advertising revenue and depriving content creators of recognition. Notable publishers including Condé Nast and the Associated Press have endorsed the initiative, and some parties, such as The New York Times and Reddit, are pursuing copyright infringement litigation against AI companies that use web content without licensing agreements. The scale of the issue is underscored by stark ratios: Google’s crawl-to-referral ratio has worsened from 6:1 to 18:1 in six months, while OpenAI’s stands at an extraordinary 1,500:1, meaning roughly 1,500 pages crawled for every visitor referred back, extensive data extraction with little reciprocity.
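To make the mechanics of such a gate concrete, here is a minimal Python sketch of the block-or-charge idea: requests whose User-Agent matches a denylist of AI crawlers receive HTTP 402 (Payment Required) unless they carry a licence token. The crawler names and the `has_paid_token` flag are illustrative assumptions; Cloudflare’s actual service enforces this at the network edge with far stronger signals than the user-agent string alone.

```python
# Hypothetical block-or-charge gate for AI crawlers (illustrative only).
# The denylist entries below are example user-agent tokens, not an
# authoritative registry, and real gating cannot rely on user agents alone.
AI_CRAWLERS = {"GPTBot", "CCBot", "anthropic-ai", "PerplexityBot"}

def handle_crawl_request(user_agent: str, has_paid_token: bool) -> tuple[int, str]:
    """Return an (HTTP status, message) pair for an incoming request."""
    ua = user_agent.lower()
    if any(bot.lower() in ua for bot in AI_CRAWLERS):
        if has_paid_token:
            return 200, "OK: licensed crawl"
        return 402, "Payment Required: AI crawling of this site is gated"
    return 200, "OK"

print(handle_crawl_request("Mozilla/5.0 (compatible; GPTBot/1.0)", False))
# -> (402, 'Payment Required: AI crawling of this site is gated')
```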

Tensions around AI crawling practices have also surfaced publicly. Cloudflare accused AI firm Perplexity of deliberately circumventing web restrictions by altering user agents and rotating IP addresses to scrape content from sites that had explicitly banned such activity. Perplexity denied wrongdoing, attributing the flagged scraping to third-party services and arguing that its technology works by answering queries in real time rather than by traditionally scraping and storing web content. The dispute highlights the blurred lines and ethical dilemmas facing internet infrastructure providers and AI developers as they negotiate control and transparency in digital content use.
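User-agent strings are trivial to forge, which is why such disputes turn on stronger evidence like IP behaviour. One standard defence, which Google documents for confirming genuine Googlebot traffic, is a reverse-DNS round trip: the requesting IP must resolve to a hostname under the crawler operator’s domain, and that hostname must resolve back to the same IP. The sketch below illustrates the check; the trusted suffixes are example values, and production systems typically combine this with published IP ranges and behavioural signals.

```python
import socket

# Domain suffixes an operator publishes for its genuine crawlers.
# These values are illustrative, taken from Google's documented check.
TRUSTED_SUFFIXES = (".googlebot.com", ".google.com")

def verify_crawler_ip(ip: str) -> bool:
    """Accept a self-declared crawler only if its IP round-trips through DNS."""
    try:
        hostname = socket.gethostbyaddr(ip)[0]               # reverse (PTR) lookup
        if not hostname.endswith(TRUSTED_SUFFIXES):
            return False
        forward_ips = socket.gethostbyname_ex(hostname)[2]   # forward lookup
        return ip in forward_ips                             # must map back to the IP
    except (socket.herror, socket.gaierror):
        return False  # unresolvable: treat as unverified
```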

The growth of human-like bots is increasingly blurring the boundaries of authenticity online. Companies like OpenAI are now deploying AI agents capable of sophisticated web interactions, further complicating the landscape. Altman has warned that AI can now almost entirely bypass human verification measures, and he predicts a surge in AI-enabled scams. Yet, despite recognising these risks, he has shown no intention of halting AI development, reflecting confidence in the technology’s potential alongside its challenges.

Intriguingly, some speculate that Altman’s concerns about the authenticity crisis may also be connected to his promotion of another project, Worldcoin. This initiative combines biometric iris scanning with cryptocurrency to verify human identity, potentially serving as a countermeasure to the rising tide of AI-generated content and bot activity online. The concept aligns with proposals Altman has made to platforms like Reddit, advocating user authentication to combat the flood of fake accounts. Should the “death” of authentic internet interaction accelerate, solutions such as Worldcoin might be positioned as a new digital safeguard, heralding what some describe as the start of a “new internet order.”

Meanwhile, the internet remains under strain not only from bot proliferation but also from increasingly frequent and intense distributed denial-of-service (DDoS) attacks. Cloudflare recently mitigated a record-breaking 11.5 terabits-per-second attack, predominantly sourced from Internet of Things (IoT) devices and cloud services, illustrating the scale of challenges facing internet infrastructure in maintaining reliability amid growing cyber threats.
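Volumetric floods at this scale are absorbed inside the network itself, through anycast routing and upstream filtering rather than application code, but the underlying principle of per-source throttling can be sketched simply. The token bucket below is a generic illustration of one such layer, not a representation of Cloudflare’s mitigation stack; the rate and capacity values are arbitrary.

```python
import time

class TokenBucket:
    """Per-source throttle: allow `rate` requests/second with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate                  # tokens replenished per second
        self.capacity = capacity          # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                      # over budget: drop or challenge the request

bucket = TokenBucket(rate=10, capacity=20)   # ~10 req/s per client, bursts of 20
print(bucket.allow())                        # True for the first request
```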

In sum, the internet is at a crossroads. The ongoing flood of bot-driven traffic, amplified by AI’s rapid evolution, challenges the authenticity of online spaces and traditional digital economies. Efforts to reclaim control—through new monetisation tools, legal actions, authentication projects, and technological safeguards—reflect broad concern about preserving genuine human connection on the web. Yet as industry leaders like Altman show, balancing innovation with integrity will require navigating complex ethical and technical terrains that define the future of digital interaction.

Source: Noah Wire Services

Noah Fact Check Pro

The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.

Freshness check

Score: 8

Notes:
The narrative presents recent data from Cloudflare, indicating that approximately one-third of online activity is generated by bots. This aligns with Cloudflare’s 2024 Year in Review report, which found that 68.5% of observed bot traffic came from the top 10 countries, with the United States responsible for half of that total. ([blog.cloudflare.com](https://blog.cloudflare.com/radar-2024-year-in-review/?utm_source=openai)) The article also references Sam Altman’s concerns about AI-driven accounts on social media platforms, echoing discussions from September 2025. ([techcrunch.com](https://techcrunch.com/2025/09/08/sam-altman-says-that-bots-are-making-social-media-feel-fake/?utm_source=openai)) The inclusion of recent data and events suggests a high freshness score.

Quotes check

Score: 9

Notes:
The article includes direct quotes from Sam Altman, CEO of OpenAI, expressing concerns about the proliferation of AI-driven accounts on social media platforms. These quotes are consistent with statements he made in September 2025, as reported by TechCrunch. ([techcrunch.com](https://techcrunch.com/2025/09/08/sam-altman-says-that-bots-are-making-social-media-feel-fake/?utm_source=openai)) The consistency of the quotes with known statements and the absence of significant variations in wording support a high score.

Source reliability

Score: 7

Notes:
The narrative originates from Gizmodo Japan, a reputable technology news outlet. However, the article is published in Japanese, which may limit accessibility for some readers. The reliance on a single source for the majority of the information introduces a degree of uncertainty.

Plausibility check

Score: 8

Notes:
The claims about the prevalence of bot-generated content and AI-driven accounts on social media platforms are plausible and supported by recent reports from organisations like Cloudflare and statements from industry leaders like Sam Altman. The narrative provides specific data points and references to support its claims, enhancing its credibility.

Overall assessment

Verdict (FAIL, OPEN, PASS): PASS

Confidence (LOW, MEDIUM, HIGH): HIGH

Summary:
The narrative presents recent and plausible information about the prevalence of bot-generated content and AI-driven accounts on social media platforms. The inclusion of direct quotes from Sam Altman and references to reputable sources like Cloudflare’s 2024 Year in Review report support the credibility of the information. While the reliance on a single source introduces some uncertainty, the overall assessment is positive.
