As AI-generated works fuel more than 90 lawsuits, courts' insistence on human authorship may determine who can profit from generative AI in the creative sectors, leaving major industry players to navigate an unsettled legal landscape.
The fiercest legal fight around artificial intelligence is not only about whether companies scraped books, songs and images to train models. According to The Atlantic, the deeper issue is whether works made by AI can themselves be protected by copyright, because that question will shape who can actually make money from generative tools in the years ahead. The article notes that more than 90 lawsuits have already been filed by creators and publishers against AI firms, but says those cases may matter less to the creative economy than the simpler question of whether machine-made output can be owned at all.
That question was sharpened by Thaler v. Perlmutter, in which the US Court of Appeals for the District of Columbia Circuit ruled that copyright law requires a human author. Legal summaries of the case say Stephen Thaler’s autonomous AI system, the Creativity Machine, produced the image at issue, and the Copyright Office refused registration because the work lacked human authorship. The Supreme Court declined to take the case in March 2026, leaving the lower court ruling in place. The practical result is that, for now, fully machine-generated works remain outside copyright protection in the United States.
For the media and entertainment industries, that matters because their business models depend on owning and licensing intellectual property. Film studios, labels and publishers earn money through distribution, adaptation, merchandising and sublicensing, which all rely on enforceable rights. The Atlantic argues that this creates a powerful commercial reason for large companies to keep people in the creative chain. Netflix has already warned against using AI to create central story elements without approval, while Hachette’s withdrawal of Shy Girl after allegations that parts were AI-written showed how wary publishers have become.
The same logic also helps explain why some AI content ventures have struggled to find a durable place in the market. The Atlantic pointed to the collapse of OpenAI’s video tool, Sora, as evidence that it is risky to spend heavily on systems that generate material without clear copyright value. That said, OpenAI has since said it struck a three-year licensing agreement with Disney in December 2025, including access to characters from Disney, Marvel, Pixar and Star Wars, alongside a major equity investment by Disney. The contrast suggests the sector still sees value in AI, but only when it is tied to licensed material and commercially usable rights.
The broader point, according to The Atlantic, is that copyright law may do more to slow mass replacement of human creators than the headline-grabbing lawsuits over training data. If courts continue to insist on human authorship, studios, labels and publishers will have to keep employing writers, performers, artists and musicians to preserve the value of the works they exploit. At the same time, the next legal battle will be over how much human involvement is enough for an AI-assisted work to qualify, a line that the Copyright Office has indicated should not be crossed by prompting alone. How that question is answered will help determine whether AI remains a tool for creators or becomes a substitute for them.
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We've since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 7
Notes:
The article from The Atlantic was published on April 30, 2026, which is recent. However, it references events from December 2025 and March 2026, so the core information is not entirely fresh, and it draws on press releases and legal updates that may have been reported elsewhere first. Given these factors, the freshness score is moderate.
Quotes check
Score: 6
Notes:
The article includes direct quotes from various sources. However, some of these quotes appear to be reused from earlier publications, raising concerns about originality. Additionally, the article does not provide direct links to the original sources of these quotes, making independent verification challenging. The lack of verifiable sources for some quotes reduces the credibility of the information presented.
Source reliability
Score: 8
Notes:
The Atlantic is a reputable publication known for its in-depth reporting. However, the article relies on multiple external sources, including press releases and legal updates, which may have their own biases or limitations. The absence of direct links to some of these sources makes it difficult to assess their reliability fully. While The Atlantic is generally trustworthy, the reliance on external sources without direct verification raises some concerns.
Plausibility check
Score: 7
Notes:
The article discusses the implications of AI-generated content for copyright law, referencing recent legal cases and industry developments. While these events are plausible and consistent with current industry trends, the article does not independently verify some of its claims, and its reliance on external sources without direct links reduces the overall plausibility of the information presented.
Overall assessment
Verdict (FAIL, OPEN, PASS): CONDITIONAL
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary:
The article from The Atlantic provides a timely analysis of AI-generated content and its implications for copyright law. While the publication date is recent, the content references events from December 2025 and March 2026, indicating that the core information may not be entirely fresh. The article relies on multiple external sources, including press releases and legal updates, some of which are not directly linked, making independent verification challenging. Given these factors, the overall assessment is CONDITIONAL, with a MEDIUM confidence level.
