We Don't Have an AI Crisis. We Have an Accountability Crisis.
A call to stop blaming AI for human decisions

I was scrolling through Twitter before heading to bed when a tweet caught my eye, one I still can’t shake off:

[Tweet embed] Source: @RaminNasibov on X
My gut answered before my brain caught up.
Yes. It’s the people. It’s always been the people.
We’ve spent years building a boogeyman out of algorithms and neural networks. Headlines scream about AI taking jobs, AI spreading lies, AI ending humanity.
But strip away the fear and look at what’s actually happening. In every case, a human made a choice. The AI just followed instructions.
We’re prosecuting the weapon while the criminal walks free.
A Tale of Two Wizards

I always think of AI as a magic wand, a tool of immense power that amplifies whatever the wielder intends.
Now imagine there are two wizards standing before identical wands, each crackling with the same raw power.
The first wizard raises her wand and heals. She mends broken bones, restores sight to the blind, conjures shields that protect villages from storms. When plague sweeps through a distant kingdom, she creates potions that cure thousands. Her magic flows outward, touching lives she’ll never meet, solving problems she’ll never see firsthand. The wand amplifies every generous impulse in her heart.
The second wizard raises her wand and destroys. She curses strangers, sows discord between neighbours, conjures illusions that deceive the vulnerable into ruin. When she sees power, she wants more. When she sees trust, she exploits it. The same wand that healed in one set of hands now corrupts in another.
Same wand. Same magic. Different hearts.

Now replace “wand” with “artificial intelligence.” Replace “wizard” with “builder.”
The metaphor collapses into reality. Same underlying technology. Same machine learning models. Same artificial intelligence.
The AI didn’t decide which path to take. The humans did.
The Good Wizards Among Us

That grammar checker suggesting clearer phrasing for your important email? AI making your communication better.
The recommendation that led you to your favourite song last year? AI understanding something about you that even you couldn’t articulate.
The fraud detection that blocked a suspicious transaction before someone emptied your bank account? AI protecting you while you slept.
Medical imaging analysis now catches tumours that human eyes miss. Not sometimes. Regularly. Thousands of people alive today who would have received their diagnosis too late.
Agricultural AI helps farmers in developing nations optimise water usage during droughts. Children eat because machine learning analysed satellite imagery and soil samples.
Accessibility tools powered by AI let blind people “see” photos their friends share, describe scenes in videos, navigate streets independently.
This is the technology being cast as humanity’s great threat.
The Dark Wizards Are Real Too

I won’t pretend the dark side doesn’t exist. It absolutely does.
Voice cloning has become terrifyingly accurate. Scammers need mere seconds of audio to replicate someone’s voice convincingly.
Parents have received ransom calls from their “kidnapped” children. The children were fine, sitting in school. But those minutes of terror were real.
Misinformation now generates itself. AI creates fake news articles, fake expert quotes, fake evidence for claims that would crumble under any scrutiny.
People believe it because it looks legitimate. Democracy suffers when truth becomes indistinguishable from fabrication.
Job displacement isn’t hypothetical anymore. Companies eliminate positions that AI can do cheaper. Workers with decades of experience find themselves competing against systems that don’t need sleep, benefits, or dignity.
And yes, weapons. Autonomous systems that could theoretically select and engage targets without human decision. The technology exists. The ethical framework for deploying it does not.
But here’s what I need you to notice: every single one of these harms required a human decision.
Someone chose to use voice cloning for scams instead of helping stroke survivors speak again.
Someone chose to generate misinformation instead of educational content.
Someone chose to eliminate jobs without transition support.
Someone chose to develop lethal autonomy instead of search-and-rescue drones.
Remove the malicious choice, and the technology becomes neutral or even beneficial.
The Question We Should Actually Ask

This brings us to the uncomfortable part.
If AI is just a tool, and tools depend on who wields them, then the problem we face isn’t technological. It’s human. It always has been.
We don’t have an AI crisis. We have an accountability crisis.
As someone who builds AI tools, I ask myself some questions before every feature or capability:
What’s the worst this could enable?
Who might misuse this?
Is the potential good worth the potential harm?
Sometimes the answer is no. And that’s the point.
The real question isn’t whether AI is inherently good or evil. It’s neutral. The question is whether we, as builders and users, have the wisdom to wield it responsibly.
We don’t ban cars because drunk drivers kill people. We create laws, enforce consequences, and trust that most humans will drive responsibly.
We don’t prohibit kitchen knives because they can become weapons. We hold the attacker accountable, not the cutlery.
Why do we treat AI differently?
Stop Blaming the Mirror

Here’s my take.
“AI scares us because it shows us ourselves.”
When we see AI generating misinformation, we’re really seeing human appetite for lies that confirm our biases.
When we see AI enabling surveillance, we’re witnessing human hunger for control over others.
When we see AI automating jobs away, we’re confronting human systems that value profit over people.
AI didn’t create these impulses. It amplified what already existed within us.
That’s uncomfortable. It’s much easier to point at the technology and say, “That’s the villain.” But comfort doesn’t solve problems. Honesty does.
The fear isn’t actually about artificial intelligence. It’s about human nature armed with increasingly powerful tools.
Where Do We Go From Here?

So what do we do with this realisation?
First, we stop the lazy narrative. “AI is dangerous” is incomplete. “AI in malicious hands is dangerous” tells the full story. Language matters. How we frame threats shapes how we address them.
Second, we regulate the wizards, not just the wands. Laws that punish deepfake creators more harshly than the platforms hosting them. Policies that hold scammers accountable rather than just blocking their tools. Frameworks that ensure human oversight in high-stakes AI applications.
Third, we build responsibly. Those of us creating AI tools must ask the hard questions before shipping features. We must design with misuse in mind. We must sometimes choose not to build something, even when we can.
Fourth, we educate. When people understand how AI actually works, they fear it less irrationally and guard against its misuse more effectively. Demystification is protection.
And finally, we remember that this isn’t new. Every powerful technology in human history has been used for both miracles and atrocities. Fire warmed our ancestors and burned their enemies. The printing press spread enlightenment and propaganda. The internet connected humanity and enabled its worst corners.
AI is simply the latest chapter in humanity’s oldest story: powerful tools amplifying whoever holds them.
The wand has never been evil. The wand has never been good.
The wand simply is.
What matters, what has always mattered, is the heart of the wizard who picks it up.
So the next time someone asks whether we should fear AI, I’ll give them the same answer:
Fear the human who wants to harm you with it. Celebrate the human who wants to help you with it. And if you ever hold that wand yourself, choose carefully what you conjure.
The spell is yours to cast.
