This AI Should Never Have Been Built
We crossed a line when faces became interchangeable.

Open X (Twitter) right now and watch people become other people.
Not through makeup or costume. But through AI that maps their movements onto someone else's face with unsettling precision.
The technology tracks every micro-expression, every head tilt, every blink. The result moves like a human because it is a human, just wearing digital skin that doesn't belong to them.

Leonardo DiCaprio, Sydney Sweeney, your favourite actor, your least favourite politician, your ex. Anyone whose face exists in a training dataset.
The demos are technically impressive. The implications are horrifying.
And I say this as someone who builds AI for a living.
The Question We Stopped Asking

I run a company that develops AI-powered meeting tools. Real-time transcription. Live translation across languages. Intelligent summarisation. Every feature we ship starts with the same question: what problem does this solve?
It's not a complicated question. But somewhere in the race to build the most impressive thing, a lot of AI developers stopped asking it.
Now, let me ask you a question.
What problem does face-mimicking AI solve? I dare you to name one legitimate application.
Entertainment? Perhaps. But we already have professional CGI, deepfakes with consent, and actors who can, you know, act.
Marketing? Sure, until Leonardo DiCaprio's lawyers come knocking because you used his face to sell protein powder without permission.
Education? I'm struggling to imagine the scenario where transforming yourself into Scarlett Johansson helps students learn algebra.
Creative expression? There's an argument here, but it falls apart the moment you realise most of these tools are being used to impersonate real, living people who never consented.
I spent fifteen minutes on this exercise. Then an hour. Then I asked my team. We came up empty.
The charitable interpretation is that this technology is a solution searching for a problem.
The uncharitable, and I suspect more accurate, interpretation is that the creators know exactly what it's for. They're just not saying it out loud.
What It's Actually Being Used For
Let me tell you what I'm seeing on my timeline.
People turning themselves into celebrities for clout.

People creating "erotic" content using the faces and bodies of figures who never agreed to this.

People testing just how convincing they can make a fake video of a real person.
This isn't hypothetical harm. It's happening right now, in public, with thousands of likes and shares.
The erotic applications alone should give us pause. We've spent years grappling with non-consensual intimate imagery.
We've passed laws. We've built detection tools. We've had painful public conversations about consent and dignity.
And now we're handing people a tool that makes all of that exponentially easier to produce and harder to detect.
If you're building technology and you can't articulate a beneficial use case, but you can immediately see the harmful ones, that's not a feature gap. That's a warning sign.
My Three Core Concerns
With this face-swapping technology, I see three glaring issues:
1. Blatant Copyright Violations
Using AI to recreate a celebrity's likeness without permission is theft of intellectual property, plain and simple.
Imagine if someone used a digital version of Sydney Sweeney to advertise their dodgy cryptocurrency scheme. Sweeney didn't consent. She isn't being paid. Her reputation is being exploited without her knowledge or permission.
A similar case actually happened. A woman in France lost $850,000 after scammers used AI-generated images of Brad Pitt to convince her she was helping the actor.
Actor Bryan Cranston raised concerns after Sora users created deepfakes featuring his likeness without consent or compensation, prompting families of Robin Williams, George Carlin, and Martin Luther King Jr. to also complain to OpenAI.
These aren't just famous people being precious about their image. They're professionals whose likeness has commercial value, and that value is being stolen by anyone with access to deepfake technology.
2. The Primary Use Is Fraud
Strip away the marketing speak and ask yourself: what is this technology actually being used for?
The answer is scams and manipulation.
Since 2017, fraud has accounted for 31% of all deepfake incidents. Deepfake fraud attempts spiked by 3,000% in 2023, and in North America, losses exceeded $200 million in the first quarter of 2025 alone.
The ability to create convincing videos of famous people saying or doing things they never said or did isn't a feature. It's a weapon. And overwhelmingly, that weapon is being used to harm, deceive, and exploit.
3. The Pornography Problem

Let's address another elephant in the room: the overwhelming majority of deepfake content is pornographic.
The number of deepfake pornography videos produced in 2023 was 464% higher than in 2022, with almost 4,000 female celebrities found across the top deepfake porn websites.
Analysis shows that 94% of those featured in deepfake pornography videos are affiliated with the entertainment sector, including singers, actresses, social media influencers, models, and athletes.
When I see people on X casually sharing AI-generated videos of celebrities in suggestive poses or erotic situations, I'm witnessing the normalisation of something deeply unethical: the creation of sexual content featuring real people without their consent.
This isn't parody. It isn't satire. It's digital sexual violence.
If the primary application of your technology is creating non-consensual pornography, you've built something that shouldn't exist.
The "Just Don't Look" Fallacy

Some will argue that if you don't like deepfake content, just don't engage with it. Scroll past. Don't create it yourself.
But that ignores the victims. The celebrities whose likenesses are stolen. The ordinary people whose faces end up in pornography they never consented to. The elderly people scammed out of their life savings by deepfake videos of trusted figures.
Your choice not to create or view deepfakes doesn't protect anyone. It just means you're not personally contributing to the harm.
What Responsible AI Development Looks Like
I'm not calling for bans. I'm not suggesting we halt all AI progress. That would be hypocritical given my work, and counterproductive given the genuine benefits AI can deliver.
But I am calling for something that seems to be in short supply: honesty.
If you're building AI tools, be honest about the likely use cases. Not the best-case scenarios in your pitch deck, but the actual ways people will use your technology in the wild.
If you're deploying AI tools, be honest about whether you've thought through the second and third-order effects. Not just "this is cool," but "this is cool and here's why the benefits outweigh the predictable harms."
And if you're a user experimenting with this technology, be honest about what you're actually doing. Turning yourself into a celebrity for a laugh might seem harmless.
But you're training yourself and the algorithm that this is normal. That this is acceptable. That consent doesn't matter as long as the technology is fun.
The Line I'm Drawing

I work in AI. I'll continue working in AI. I believe it's one of the most important technologies of our generation.
But not all AI is equal. Not all applications are beneficial. And not every capability that can be built should be built.
Face-mimicking motion AI, as currently deployed, fails every test I can construct for beneficial technology.
It lacks clear positive use cases. It enables obvious harms. It undermines consent, intellectual property, and our shared ability to trust what we see.
I'm not against AI. I'm against this AI. And I think more people in my industry should have the honesty to say the same.
