Who Profits When AI Creates a Blockbuster?

The legal mess when everyone claims credit and cash, and no one's clearly wrong

December 5, 2025 · 11 min read

Once upon a time, an AI-generated song hits number one on Spotify.

It's everywhere.

TikTok dances. Radio plays. Millions of streams pouring in.
The royalty cheque is massive.

Then everyone shows up to claim it.

The person who typed the prompt wants payment because, hey, they created it.

The AI company wants their cut because their tool made it possible.

The musicians whose voices trained the algorithm want compensation because that's basically their sound.

The music publishers whose catalogues fed the system want damages because their songs were used without permission.

It's like five people showing up to the same restaurant table, all holding reservations for the same seat, all absolutely certain they booked first. But who actually gets paid? Nobody knows.

Welcome to the legal grey zone, where judges are literally making up copyright rules for AI as they go, and nobody's quite sure who owns what anymore.

The Contract that AI Never Signed

Photo by Money Knack on Unsplash

Traditional copyright is ridiculously simple. If you create something original with your own effort, you own it. That creation gets legal protection. Others can't copy it without permission. You can sell it, license it, or sue people who steal it.

It's basically a social contract. Society says "You made this, so you get to control it and profit from it." In exchange, after a set period (currently life plus 70 years), it enters the public domain for everyone to use.

This worked reasonably well for centuries. Writers owned their books. Musicians owned their songs. Photographers owned their images. Even when technology evolved, with things like music sampling and art remixing, the law adapted by extending existing principles.

But AI doesn't fit into this framework. At all.

Yes, we all know that AI creates things. But AI learned to create by studying everyone else's things—often without asking, paying, or even saying thanks.

It's like someone read every cookbook ever written, memorised the lot, and now sells their own recipes.
Technically, new recipes.
But also suspiciously familiar recipes.

The Big Three: Credit, Cash, and Culpability

I’ve read A LOT of news about AI disputes lately, and let me tell you, it’s like watching a bunch of toddlers fighting over a toy (but the toddlers here are tech giants, and the toy is our precious copyright law).

So I decided to play detective and find a pattern in this epic showdown. Let's break down the three battlegrounds where AI copyright is actually being fought.

Credit: Who Actually Made This?

Photo by Umberto on Unsplash

You craft a brilliant prompt. You iterate for hours. You guide the AI like a director guides actors. The result is genuinely impressive.

Who's the author?

Plot twist: probably nobody, legally speaking.

Copyright offices worldwide generally require human authorship. The US Copyright Office has rejected multiple registrations for AI-generated artwork. So your masterpiece might be legally ownerless, which means anyone can copy it, sell it, remix it.

Here's what's actually happening: creators are watermarking their AI work like paranoid photographers, documenting their prompts like court evidence, and adding "human-directed" disclaimers everywhere.

Meanwhile, their work gets screenshotted and reposted without credit approximately eleven seconds after it goes up.

The hilarious part? You could spend forty-five minutes perfecting a prompt, and someone screenshots your creation, posts it, and they get the viral credit. The internet remains undefeated.

Cash: Who Gets Paid?

Photo by Olga DeLawrence on Unsplash

This is where billions of dollars hang in the balance.

If content is fully AI-generated with no human creative input, U.S. Copyright Office guidance says it can't be copyrighted. That means it's essentially public domain.

Anyone can use it, profit from it, remix it, sell it. Nobody has exclusive rights. Which sounds very democratic and open until you realise it also means the person who created it can't stop others from profiting either.

But what if it's not fully AI-generated? What if a human spent hours crafting prompts, curating outputs, making creative decisions?

Courts are wrestling with this right now, and they're coming to different conclusions.

In September 2025, Anthropic settled a class-action lawsuit brought by authors for $1.5 billion. The authors alleged Anthropic used millions of copyrighted books to train its Claude chatbot without permission.

Anthropic will pay roughly $3,000 per book for an estimated 500,000 books. That's not payment for AI-generated content. That's payment for using copyrighted training data.

Meanwhile, record labels including Universal Music are suing AI companies like Udio and Suno for training on copyrighted recordings. In October 2025, Universal actually settled with Udio, an AI song generation platform, and they're now partnering on a music creation platform.

The terms weren't disclosed, but the message is clear: AI companies are starting to pay for training data.

Here's a scenario we'll probably see play out in the future:

A screenwriter uses AI to generate a film script. The film becomes a blockbuster worth hundreds of millions of dollars.

And the things that might happen after that are as follows:

  • The AI company wants royalties for providing the tool.
  • The original authors whose works trained the model want compensation for their "stolen" intellectual property.
  • The screenwriter wants credit as co-author.

James Cameron, who created Avatar, recently called the idea of AI making up actors and performances "horrifying." He told CBS Sunday Morning (30 November): "They can make up an actor. They can make up a performance from scratch with a text prompt. That's horrifying to me."

Cameron banned AI use in Avatar: Fire and Ash entirely, even announcing the film will open with a title card stating "no generative AI was used in the making of this movie." His stance? "We honour and celebrate actors. We don't replace actors." Yet even Cameron, who sits on the board of Stability AI (yes, really), acknowledges the existential threat AI poses to creative industries.


If a filmmaker uses AI to script the next billion-dollar franchise, who deserves credit and compensation? The prompter who guided the story? The AI company whose tool generated it? The thousands of screenwriters whose scripts trained the model without permission? What if that script closely resembles existing copyrighted works because that's what the AI learnt from?

Who's right? Nobody knows yet.

Culpability: When AI Misbehaves, Who’s to Blame?


Here's where it gets properly spicy.

If an AI generates defamatory content, a harmful deepfake, or accidentally plagiarises someone's work, who's responsible?

Current trends point to "blame the prompter."
Well, you asked for it, you own the consequences.

But platforms and AI developers aren't escaping scrutiny either. The EU AI Act, which will kick in properly in August 2026, now requires AI-generated content like deepfakes to be clearly labelled. Companies must document what training data they used.

It's like being handed car keys with a note saying "if you crash, it's on you."
Except the car occasionally decides to take its own route and nobody's quite sure why.

Real Cases Reshaping the Rules

Let's look at specific examples where these battles are playing out:

New York Times v. OpenAI & Microsoft (Ongoing): The Times alleges OpenAI used millions of articles without consent, creating economic harm by pulling users away from paywalled content. They're seeking billions in damages. The case consolidated with other news organisations in 2024. This could set precedent for whether AI companies that train on journalism owe media companies compensation.

Meta Wins Fair Use (June 2025): Meta successfully defended against 13 authors who sued over LLaMA training on their novels. The judge ruled for Meta because the authors failed to prove market impact. However, the judge noted the ruling only applies to these specific works, and future cases could go differently with stronger evidence.

Thomson Reuters v. ROSS Intelligence (February 2025): A Delaware court ruled that ROSS using Thomson Reuters' Westlaw headnotes to train a competing AI legal research tool wasn't fair use. This was the first major decision rejecting fair use as a blanket defence for AI training. The ruling: using copyrighted materials for AI training can constitute infringement, especially when creating market competition.

OpenAI in Germany (November 2025): GEMA sued OpenAI for scraping copyrighted lyrics. OpenAI lost and was ordered to pay damages. GEMA called it "the first landmark AI ruling in Europe," establishing that AI companies must comply with copyright law even for training data.

Andersen v. Stability AI (October 2023): Visual artists sued Stability AI, Midjourney, and DeviantArt for using billions of scraped images to train Stable Diffusion without permission. In August 2024, a judge upheld copyright and trademark claims, allowing the case to proceed. The outcome could establish whether image generators owe compensation to artists whose work trained the models.

See the pattern? AI companies are winning some fair use arguments, but losing when they use pirated materials or create obvious market substitutes.

Settlements are happening, often for substantial sums. The law is crystallising, but slowly.

What's Coming Next

Photo by Tingey Injury Law Firm on Unsplash

Laws are evolving fast, and here's what I expect:

More hybrid rights frameworks.
We'll see legal structures that split rights between prompters, AI companies, and training data sources. Anthropic's $1.5 billion settlement hints at this future.

Mandatory disclosure.
The Generative AI Copyright Disclosure Act or something similar will likely pass. AI companies will have to reveal training datasets. This gives copyright holders information needed to sue if their work was used without permission.

Licensing becoming standard.
Rather than fight endless lawsuits, AI companies will increasingly license training data. Universal Music's partnership with Udio shows this shift. Expect more deals where content creators get paid upfront for training rights.

Platform accountability.
Just as social media platforms face pressure over harmful content, AI companies will face stricter requirements to prevent misuse. The suicide lawsuits against OpenAI signal this trend.

EU leads, others follow.
The EU AI Act already pushes transparency requirements. Other jurisdictions will likely adopt similar frameworks. Expect global standards to emerge from regulatory pressure, not voluntary compliance.

Advice for Anyone Creating With AI

If you're using AI to create content, especially commercially or anything that might go viral, here's what you should actually do:

  1. Document your human input extensively.
    Save your prompts, iterations, curation decisions, and creative choices. If copyright disputes arise, you'll need evidence of substantial human authorship. The more documentation, the stronger your claim.
  2. Use licensed tools and datasets.
    Some AI platforms now offer models trained only on licensed data. They cost more, but they're legally safer. If you're creating commercially, the extra cost is worth avoiding lawsuits.
  3. Disclose AI use clearly.
    Whether legally required or not, transparency builds trust and reduces liability. If something goes viral and people later discover it was AI-generated without disclosure, backlash is severe.
  4. Understand your liability.
    You're responsible for what you create and share, even if AI generated it. If you prompt an AI to create something harmful, defamatory, or that infringes copyrights, you can be held liable.
  5. Don't assume fair use protects you.
    Fair use is a defence, not a right. It's decided case-by-case. If you're using AI to create commercial content that competes with or replaces copyrighted works, fair use probably won't protect you.
  6. Watch for original data owners.
    If your viral AI content used training data from identifiable creators (musicians, artists, writers), they might sue. This is especially true if your content is commercial or directly competes with their work.

The Uncomfortable Truth

Traditional copyright assumed human creators, original works, and clear ownership. AI breaks all three assumptions simultaneously.

We're in a transition period where old laws don't quite fit, and new laws haven't been written yet.

Courts are making decisions that sometimes contradict each other. AI companies are settling some cases and fighting others.

Content creators are both excited about new tools and terrified of having their work stolen.

The resolution won't come quickly. Copyright law evolves slowly, often taking years for precedent to solidify.

In the meantime, billions of dollars, thousands of creative careers, and fundamental questions about authorship and ownership hang in limbo.

So, stay informed. Credit your sources—human and machine.
And maybe, just maybe, push for rules that protect creators while letting innovation flourish.

Because right now, we're all writing the rulebook together. Might as well make it a good one.
