
MR Online has adopted Anubis, a proof-of-work layer inspired by Hashcash, to deter large-scale AI scraping, opening a new front in web content protection at the risk of added friction for genuine users.

The page now appearing on MR Online is less a conventional article than a warning: access to the site is being filtered through Anubis, a proof-of-work layer designed to frustrate automated scraping. The message says the system is there because large-scale AI crawlers have become heavy enough to strain services, and that the aim is to make bulk extraction of content costly while leaving ordinary readers largely unaffected.

Anubis draws on the older Hashcash concept, which Adam Back proposed in 1997 as a way to make spam less economical by forcing senders to do computational work before delivery. In the same spirit, Anubis asks a visitor’s browser to complete a challenge that is easy to check but expensive to mass-produce, turning scale itself into the obstacle for bots. Documentation and related discussions describe it as open-source software built for precisely this kind of front-line defence, especially on sites that want to keep material available without giving free rein to scraping tools.
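The asymmetry at the heart of this approach — a puzzle that is cheap to verify but expensive to solve at scale — can be illustrated with a minimal Hashcash-style sketch. This is not Anubis's actual protocol (its challenge format and parameters differ); the payload string and difficulty value below are illustrative assumptions.

```python
import hashlib
import itertools

def solve_challenge(payload: str, difficulty: int) -> int:
    """Find a nonce such that sha256(payload + nonce) begins with
    `difficulty` hex zeros. Expected work grows by a factor of 16
    per unit of difficulty, so mass-producing solutions is costly."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{payload}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify(payload: str, nonce: int, difficulty: int) -> bool:
    """Checking a solution is a single hash: cheap for the server,
    regardless of how much work the solver had to do."""
    digest = hashlib.sha256(f"{payload}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

# A single visitor pays a small one-time cost...
nonce = solve_challenge("example.org/page", 4)
# ...while the server verifies in constant time.
assert verify("example.org/page", nonce, 4)
```

A scraper fetching millions of pages must repeat the solving step for each challenge, which is where the economics turn against bulk extraction.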

The notice also underlines a practical trade-off that has become common across the web: stronger defences can mean friction for legitimate users, particularly if their browsers lack modern JavaScript support or rely on privacy tools that interfere with the challenge. That tension helps explain why systems such as Anubis are increasingly being used as a temporary barrier rather than a perfect solution, buying time for site operators while they decide how to respond to aggressive automated access.


Source: Noah Wire Services

Noah Fact Check Pro

The draft above was created using the information available at the time the story first
emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed
below. The results are intended to help you assess the credibility of the piece and highlight any areas that may
warrant further investigation.

Freshness check

Score: 8

Notes:
The article discusses Anubis, a proof-of-work system designed to prevent AI web scraping. Anubis was released on January 19, 2025, and has been adopted by various platforms since then. ([en.wikipedia.org](https://en.wikipedia.org/wiki/Anubis_%28software%29?utm_source=openai)) The article’s publication date is April 20, 2026, indicating that the content is over a year old. While the concept of Anubis is not new, the article provides a recent perspective on its implementation and effectiveness.

Quotes check

Score: 7

Notes:
The article includes direct quotes from various sources. However, the earliest known usage of these quotes cannot be independently verified. Without confirmation of the original sources, the authenticity of these quotes remains uncertain.

Source reliability

Score: 6

Notes:
The article originates from MR Online, a platform that aggregates content from various sources. While it cites reputable sources such as Wikipedia and GitHub, the platform itself is not a major news organisation. The reliance on aggregated content raises concerns about the independence and originality of the information presented.

Plausibility check

Score: 8

Notes:
The claims about Anubis’s effectiveness in blocking AI web scraping are plausible and align with known information about the software. Anubis has been reported to successfully mitigate bot traffic and prevent outages for various platforms. ([dukespace.lib.duke.edu](https://dukespace.lib.duke.edu/server/api/core/bitstreams/816ef134-55cf-49f6-9a8b-1e8a2324b1ff/content?utm_source=openai)) However, the article does not provide specific examples or recent data to substantiate these claims.

Overall assessment

Verdict (FAIL, OPEN, PASS): FAIL

Confidence (LOW, MEDIUM, HIGH): MEDIUM

Summary:
The article provides an overview of Anubis and its role in combating AI web scraping. However, the reliance on aggregated content from a non-major news organisation, the inability to independently verify quotes, and the lack of original reporting raise significant concerns about the article’s credibility. Given these issues, the content does not meet the necessary standards for publication.
