The Smarter Your AI, The Dumber You Get

Anthropic’s research shows AI fluency has a dangerous blind spot

March 5, 2026 · 5 min read

A beautifully formatted document has a strange power over us. It signals effort. It signals competence. It signals: someone checked this.

When AI generates that document in four seconds, none of that is true. But our brains don’t get the memo.

This is not a philosophical worry about the future. Anthropic just published data on it.


The Finding You Should Not Scroll Past

On 23 February 2026, Anthropic released its AI Fluency Index: an analysis of nearly 10,000 anonymised Claude conversations from January 2026. The research tracked 11 observable user behaviours, such as questioning the model’s reasoning, checking facts, and identifying missing context.

The headline finding was encouraging: 85.7% of conversations showed gradual refinement. Users were iterating, pushing back, building on outputs. That looks like healthy collaboration.

Then came the finding that should make everyone stop.

In conversations where Claude produced an artifact — code, a document, an interactive tool — users became markedly less critical.

They were 5.2 percentage points less likely to flag missing context. They were 3.7 percentage points less likely to check facts. They were 3.1 percentage points less likely to ask the model to explain its reasoning.

The output looked polished. So people stopped asking questions.

The Politeness of a Finished Thing


There is a deep human instinct at work here. When something looks complete, interrogating it feels almost rude.

We do this with people, too. A colleague who hands you a thick, well-bound report gets fewer probing questions than one who scrawls notes on a whiteboard. The finish signals authority. The authority suppresses scrutiny.

AI has learned to finish things very, very well.

It formats. It adds headings. It bullet-points. It uses confident, declarative sentences. It produces documents that look like the output of a careful expert who has done the research, checked the sources, and organised everything for your convenience.

It hasn’t. Not necessarily.

But it looks like it has, and for most human brains, looking is enough.

The Smarter the AI, the Bigger the Blind Spot


Here is where it gets interesting, and a little uncomfortable.

As AI models improve, their outputs get more polished. The prose gets cleaner. The code compiles on the first try. The documents read more fluently. The hallucinations, when they occur, look increasingly indistinguishable from well-sourced facts.

In other words:

The better AI gets at producing finished-looking things, the more our critical instincts disengage.

This is the core of what I’d call the Competence Trap. We trust outputs that look authoritative. The more competent AI becomes at generating the appearance of authority, the less we question it.

The less we question it, the more errors, gaps, and confabulations pass straight through our judgement without a scratch.

The research backs this up from the opposite direction too. Users who iterated on their prompts, who pushed and refined and challenged the model, questioned Claude’s reasoning 5.6 times more often and spotted missing context 4 times more frequently than those who just accepted the first result.

The behaviour that protects you is active friction. The friction disappears when the output looks done.

This Is Not Laziness. It Is Cognition.


It is worth being precise about what is actually happening here, because blaming individuals for being “lazy” or “uncritical” misses the point.

What the Anthropic data captures is a cognitive pattern, not a character flaw.

Automation bias is well-documented: when a system presents confident, coherent output, humans defer to it. This predates AI. It happens with GPS navigation, autopilot warnings, spell-check. We trust systems that project confidence.

What is new is the scale of the confidence AI projects, and the range of domains it projects it across.

A GPS can confidently lead you down a closed road. An AI can confidently give you incorrect medical dosages, wrong legal precedents, fabricated statistics, and code with critical security vulnerabilities, all in the same polished document, all with the same unruffled tone.

The capability has scaled. The critique hasn’t kept up.

The Hypothesis, Stated Plainly

So here is the uncomfortable conclusion:

The smarter AI gets, the less we think.

Not because we are stupid. Because we are human. Because polished things invite trust, and trust invites passivity, and passivity in the presence of a confidently incorrect AI is how errors get embedded into decisions, strategies, code, and documents that ripple through organisations and lives.

The AI Fluency Index was designed to measure whether people are developing the skills to use AI well.

What it found, at least in this slice of data, is evidence of a gap between the sophistication of the tool and the critical engagement of the user, a gap that widens when the tool performs best.

That should land as a wake-up call: not a reason to slow down AI adoption, but a reason to be deliberate about how we engage with it.

What to Do With This


The Anthropic data offers a clue in its own findings. Users who iterated and refined, who treated the model as a collaborator to interrogate rather than an oracle to accept, were far more likely to catch what was wrong. The behaviour that produces better results is also the behaviour that keeps your thinking sharp.

A few things worth practising:

Ask before you accept. When an AI produces a polished artifact, that is exactly the moment to slow down, not speed up. The polish is not evidence of accuracy.

Name the gaps. Before you use any AI output, spend sixty seconds asking what it might have missed, mischaracterised, or fabricated. Make it a habit, not a reaction.

Demand the reasoning. Ask the model to explain how it got there. Not because the explanation proves accuracy, but because your engagement with the explanation keeps your own thinking active.

The goal is not to distrust AI. The goal is to remain the one doing the thinking.

Because here is the thing: AI doesn’t get dumber when you stop checking. You do.