AI Leaders Sold You Fear, Then Sold You Subscriptions

Explaining what's behind AI fearmongering

March 18, 2026 · 8 min read

A recent NBC News poll surveyed 1,000 registered American voters and asked them to rate a list of institutions, policies, and technologies.

AI came in near the bottom: 26% positive, 46% negative. Only Democrats and Iran ranked lower.

ICE, the immigration enforcement agency that has dominated headlines for its controversial deportation raids, polled better than artificial intelligence.

Let that sink in.

Here’s the kicker: 56% of those same respondents had used an AI platform like ChatGPT or Copilot in the previous month. These are not people who have never touched the technology. They use it. They just don’t trust it, or the people behind it.

That is not a public education problem. That is a communications disaster, and the architects of that disaster are the very leaders who built the technology.

How voters feel about political figures and topics (Source: NBC News)

The Fear Factory

Cast your mind back to 2022 and 2023. The dominant narrative from AI’s most prominent voices was existential dread.

Sam Altman warned Congress that AI might destroy civilisation. Geoffrey Hinton, the so-called “Godfather of AI,” resigned from Google and spent months telling journalists that the technology he helped build might end humanity. Elon Musk, who co-founded OpenAI before departing, called AI “more dangerous than nukes.”

OpenAI CEO Sam Altman speaks before a Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing on artificial intelligence, on May 16, on Capitol Hill in Washington (Photo by Patrick Semansky/AP)

Nvidia’s Jensen Huang has since publicly acknowledged that AI doomerism accounts for “90% of the messaging” around the industry. He called it “extremely hurtful” and said it has “done a lot of damage.” In a January 2026 interview, he expressed frustration that fear narratives were deterring people from embracing the technology.

But here is the question Huang’s frustration conveniently sidesteps: 

Where did that doomerism originate?

It did not spring up from a fearful public. It was seeded, cultivated, and amplified by the industry itself.

Fearmongering is a Business Model

There is a well-worn pattern in how high-stakes technologies get brought to market.

First, establish urgency. Make the stakes feel enormous. Create the sense that society stands at a precipice, and that only the people in this room, with this technology, understand the magnitude of the moment.

That framing works exceptionally well when you need to raise capital.

Venture capitalist Chamath Palihapitiya has been candid about how AI hype dynamics operate. The doom narrative served a purpose during the fundraising cycle: it communicated that this technology was powerful, consequential, and world-altering. Investors needed to take it seriously. Missing the AI wave was framed as an existential risk in itself.

Then the money arrived. Billions of it.

And the messaging quietly shifted.

Today, the dominant pitch from AI companies is not “we are building something that might destroy civilisation.” It is “AI is a utility, like electricity. It will make your business more efficient. Here is the pricing plan.”

Sam Altman now talks about AI integration into daily life as naturally as running water. Jensen Huang positions Nvidia’s chips as critical infrastructure. Microsoft bundles Copilot into Office 365 subscriptions as though it is no more remarkable than spell-check.

The whiplash is dizzying. And the public noticed.

When the people who spent two years warning you about AI’s civilisation-ending potential turn around and ask you to pay a monthly subscription fee for it, the reasonable response is suspicion. You do not buy electricity from someone who just told you power lines might kill everyone.

Fortune magazine put it plainly in a March 2026 commentary: “Sam Altman, Jensen Huang and the other AI kingpins only have themselves to blame for the scare rippling through the economy right now.”

The Most Popular Fearmongering Topic about AI

The job displacement angle deserves particular attention because it is the one that most directly affects how young people relate to AI right now.

The dominant story is one of replacement. AI takes jobs. Specific roles become obsolete. The future is one of mass unemployment or, at best, a painful transition period where large categories of workers are left behind.

This story is not entirely fabricated. There are real disruptions happening in specific sectors, and it would be dishonest to wave them away. But the framing is wrong in a way that matters enormously for how people respond to it.

The history of transformative technology is not primarily a story of jobs disappearing. It is a story of work changing, and of the people who adapted to those changes gaining enormous advantages over those who did not.

The introduction of spreadsheet software did not eliminate accountants. It changed what accounting looked like, made some tasks irrelevant, created new ones, and shifted the value of the profession toward judgment and analysis rather than manual calculation. The people who resisted learning spreadsheets did not save their jobs. They just fell behind.

The question for AI is not whether your current job will look the same in ten years. It probably will not, just as the accountant’s job in 1995 did not look like the accountant’s job in 1985. The question is whether you will be the person who shapes what the new version of your field looks like, or the person who discovers that shift from outside.

AI is a tool. A powerful, genuinely remarkable tool unlike most that have come before it.

But the frame of “tool” matters. Tools do not make decisions about your career. People do.

The Impact on AI Literacy

Photo by Hugh Han on Unsplash

Here is what the fear narrative is actually costing us, and it is not the companies or the investors. They will be fine regardless.

The real casualty is the ordinary person who has decided, based on everything they have absorbed from the news cycle, that AI is either a threat to their job, a tool for misinformation, a surveillance mechanism, or some combination of all three.

That person is not developing AI literacy. They are not experimenting. They are not building the habits and fluency that are increasingly a foundational part of modern professional life.

Think about what it meant to be able to type with all ten fingers in the 1990s. It was not so much a competitive advantage as a prerequisite. If you couldn’t type with reasonable speed and accuracy, you were simply slower at almost every white-collar task than the people who could.

The skill was not glamorous, and it was not discussed in terms of transformation or disruption. It was just something you needed to know how to do.

AI literacy is shaping up to be the same kind of thing. Not in the sci-fi sense of understanding large language model architecture or knowing how to fine-tune a model.

In the practical sense of knowing what AI tools are available, what they are genuinely good at, how to prompt them effectively, when to trust their outputs and when to verify them, and how to build habits that make you meaningfully more capable in your work.

That skill is not optional. It is becoming a baseline. And every month that a person delays developing it because they have internalised a fearmongering narrative is a month of compounding disadvantage.

Start From Where You Are

Photo by DISRUPTIVO on Unsplash

If you are reading this and you have been putting off engaging with AI because the whole thing feels overwhelming, or ominous, or like something designed for people more technical than you, here is a more useful frame.

You do not need to understand how it works. You need to understand how to use it. Those are very different things. Nobody who uses a GPS needs to understand satellite trilateration. Nobody who uses a word processor needs to understand how spell-check algorithms are built. The baseline skill is not technical comprehension. It is practical fluency.

Start somewhere small and concrete. Use an AI tool for a task you already do. See what it does well and what it does badly. Build from there. The gap between where you are now and where you need to be is almost certainly smaller than the fearmongering suggests, and the compounding benefits of starting sooner rather than later are almost certainly larger.

The noise around AI is not going to get quieter. The incentives that produce it are too strong and too entrenched. But you do not have to let the noise make your decisions for you.

The tool is here. It is genuinely useful. Learning to use it well is, increasingly, just part of what it means to work in the modern world.

That is a much less exciting headline than “AI will end humanity as we know it.” But it is closer to the truth.