They Hyped AI Apocalypse for Funding. Got Firebombed Instead.

The predictable consequence of doomsday hype

April 14, 2026 · 6 min read

Around 3:40 in the morning on 10 April 2026, a 20-year-old man named Daniel Alejandro Moreno-Gama walked up to the gate of Sam Altman’s Russian Hill home in San Francisco and threw a bottle containing a burning rag at it. Security guards extinguished the fire. No one was hurt, thankfully.

Moreno-Gama fled on foot, then showed up at OpenAI’s Mission Bay headquarters an hour later, allegedly threatening to burn the building down. Police arrested him. He was charged with attempted murder, arson, and manufacture of an incendiary device.

Forty-eight hours later, Altman’s home was targeted again. This time, a car stopped outside in the early hours of Sunday morning and someone in the passenger seat fired a round toward the property. Two people, Amanda Tom and Muhamad Tarik Hussein, were arrested on charges of negligent discharge of a firearm. OpenAI said it believed the second incident was unrelated to the first. Motives remain publicly unclear.

Two attacks in two days on the home of the most prominent figure in artificial intelligence. The first with an explicit, documented ideological motive. The second still unexplained, but landing at the same address, on the same weekend, in the same city where AI backlash has been quietly building for months.

A Familiar Playbook from a Different Era

A documented ELF arson attack (Image courtesy of PBS)

If you want a historical frame for what happened on Russian Hill, look back to the 1990s and early 2000s, when the Earth Liberation Front (ELF) and the Animal Liberation Front (ALF) carried out a wave of arson attacks, equipment sabotage, and property destruction across the United States.

Their targets were corporations, research laboratories, and scientists they framed as enemies of nature. Their logic was simple:

Progress itself was the threat, and the people driving it were legitimate targets.

The FBI eventually classified ELF and ALF actions as domestic terrorism. The groups argued they were defending the planet. The courts argued otherwise.

What we are witnessing now rhymes closely with that pattern. Call it techno-Luddism, or eco-terrorism updated for the AI age. A fringe has become convinced that powerful AI will erase jobs, strip away human autonomy, or accelerate humanity toward extinction. Rather than engaging that fear through debate or policy, they have identified a symbol and moved toward it with fire.

The symbol, in this case, is OpenAI’s CEO, Sam Altman.

OpenAI CEO, Sam Altman (Photo from CNBC)

The Loop that Nobody Wants to Talk About

Photo by Nathan Kuczmarski on Unsplash

Here is where uncomfortable honesty is required.

Altman himself addressed the attacks in a blog post shortly afterwards, sharing a family photo and writing: “The fear and anxiety about AI is justified… While we have that debate, we should de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes, figuratively and literally.” You can read his full statement on his blog.

It is a measured, decent response. It is also spectacularly ironic.

Because the doomsday framing that radicalised Moreno-Gama did not originate in a vacuum. It was amplified, consistently and loudly, by AI leaders themselves.

OpenAI’s own safety communications, the existential risk letters signed by prominent figures in the field, the constant public narrative of “we may be building something that ends humanity, but we must press on regardless” — these messages were broadcast at an enormous scale over many years.

The same executives who signed letters warning of civilisational catastrophe then used that anxiety to drive adoption.

Fear of job displacement drove workers to learn the tools. Fear of falling behind drove enterprises into subscription contracts. Fear of regulatory irrelevance drove governments into consultations that OpenAI and its peers were perfectly placed to shape, including the recent “New Deal for the AI Age” proposals featuring robot taxes and universal basic income-adjacent wealth funds.

The loop is not subtle:

  1. Leaders hype “AI will wipe out jobs and society.”

  2. They sell the solution (subscribe to the tool creating the disruption).

  3. A subset of the public concludes the leaders are deliberately accelerating doom.

  4. That subset radicalises.

When you flood a public information environment with doomsday framing, even if the intent is regulatory positioning or market creation, you do not get to curate who internalises it. Some people hear “this technology could end humanity” and write a Substack post. Others hear it and reach for a petrol-soaked bottle.

That is not a justification for violence. It is a causal explanation. The distinction matters.

What This Actually Costs

PauseAI official statement (Source: X)

PauseAI, the organisation whose Discord server Moreno-Gama frequented, condemned the violence in a public statement. That is the correct and necessary response. Advocacy groups that want to be taken seriously must categorically reject property destruction and threats, full stop.

But condemnation alone does not close the loop.

Violence against AI leaders, even when it fails to cause harm, has a chilling effect on the entire ecosystem of discourse around the technology. Researchers who might otherwise publish critical findings become cautious. Journalists who cover AI sceptics now have to distinguish legitimate concern from potential radicalisation. Policy advocates who want slower AI development are forced to spend political capital distancing themselves from a firebomber, rather than making their case on the merits.

This is bad for everyone, most acutely for the people with genuine, evidence-based concerns about AI’s trajectory.

Labour markets are being disrupted faster than policy can adapt. Power over transformative technology is concentrated in a handful of privately held laboratories. Some of the concerns raised by AI doomers, about misalignment, about the pace of capability jumps, about the adequacy of current safety frameworks, are not baseless. They are overhyped in some quarters, but they are not invented. These are conversations civilised societies need to have, and violence makes them significantly harder to have.

The Real Target of the Firebomb

An image from the Department of Justice shows the person suspected of throwing a firebomb at OpenAI CEO Sam Altman's home in San Francisco on April 10.

Here is the uncomfortable conclusion: the fear AI leaders sold to create urgency, drive adoption, and shape policy has now produced real firebombs at their doors.

That is not a coincidence. It is a consequence.

Altman’s post about wanting “fewer explosions in fewer homes, figuratively and literally” is precisely right. The problem is that the figurative explosions, the doomsday narratives, the civilisational anxiety deployed as a growth and influence strategy, came first. The literal one came second.

The solution is not silence about AI risk. The risks are real enough to warrant serious, grounded conversation. The solution is precision: distinguishing between what is genuinely concerning, what is speculative, and what is being amplified for commercial or political leverage. That distinction requires honesty that the industry has largely avoided.

Because when you teach people that the world is on fire, you cannot be surprised when someone eventually reaches for a match.


This piece connects directly to my earlier article, AI Leaders Sold You Fear, Then Sold You Subscriptions, which examines how existential risk narratives became a revenue and influence strategy. The Russian Hill attacks are the most literal version yet of that argument.