Everyone Feels Behind in AI. Even Andrej Karpathy.

Here’s the skill set that actually matters right now

March 31, 2026 · 11 min read

There’s a specific feeling that hits when you open the internet on a Monday morning and realise the tool you spent last week learning has already been overtaken by something new.

You close the tab. You open it again. You read the announcement, feel a quiet panic, and wonder:

am I already behind?

You’re not alone in that feeling. Not even close.

In late 2025, Andrej Karpathy posted something on X that stopped a lot of people mid-scroll (including me). If you’re not familiar with the name, here’s the short version: Karpathy isn’t a casual observer of the tech world. He’s a founding member of OpenAI, the former AI lead at Tesla, and one of the most respected voices in artificial intelligence. If anyone should feel on top of things, it’s him.

But here’s what he wrote:

Source: X

That post got more than 32,000 bookmarks. Sixteen million people saw it.

Those numbers matter. They tell you how many people read those words and thought: yes, that’s exactly it.

Why This Moment Feels Different

Photo by Guillaume Jaillet on Unsplash

Technology has always moved fast. People have always had to adapt. So why does this particular moment feel so destabilising?

The answer has to do with how the rules changed, not just the speed.

Traditional software and knowledge work operated on a fairly reliable assumption: more effort in, more output out. If you put in three hours, you could roughly predict what three hours of work would produce.

The relationship between input and output was mostly linear, and your mental model of how to do the work remained stable for long stretches of time. You learned a skill. The skill worked. You used the skill.

But AI inverts this in ways that take a while to fully internalise.

You now start from a high-level intent, something like “build me a tool that does X” or “draft a strategy for Y,” and an AI model jumps straight to a generated result. Your job shifts from doing the work to directing the work, and then verifying whether the result is actually good.

This is a fundamentally different cognitive loop. It is faster in some ways and more disorienting in others, because the quality of what comes out depends less on your technical execution and more on how well you can articulate what you actually want, how well you can spot errors in output you did not produce yourself, and how robust your system is for catching mistakes before they matter.

On top of that, the tools themselves keep changing. A model that was state-of-the-art six weeks ago may now be outperformed by something new. A workflow that worked beautifully last month may need to be rebuilt because the underlying model behaves differently. Your mental model of what is possible, and what is reliable, starts decaying the moment you form it.

Nate B Jones, who runs the YouTube channel AI News & Strategy Daily, made a video directly responding to Karpathy’s post. In it, he put the situation plainly:

“If you haven’t played with [the latest model]… your world model is already outdated.”

That’s not meant to be scary. It’s meant to be clarifying. The anxiety you feel isn’t weakness or incompetence. It’s a rational response to a genuine phase shift.

Not a Problem of Laziness or Intelligence

Photo by Nubelson Fernandes on Unsplash

Before getting to the skills that actually help, it is worth dwelling on why this anxiety tends to come dressed as a personal failing.

Karpathy called his own gap a “skill issue.” Boris Cherny, the creator of Claude Code, replied to Karpathy’s post saying he feels this way most weeks.

Source: X

When people whose entire careers are built around AI feel perpetually behind, the idea that ordinary professionals in law, marketing, education, or healthcare should feel comfortably on top of it is simply not realistic.

The anxiety is not a sign that you are failing. It is a sign that you are paying attention. The transition happening right now is genuinely discontinuous. It is not like learning a new version of software you already know. It is more like being handed a powerful new tool without an instruction manual, and watching the tool itself change shape every few weeks while you are still reading the first chapter.

The goal is not to eliminate the feeling of being behind. That feeling is probably going to be with us for a while. The goal is to build the right kind of capabilities, the ones that remain useful even as the specific tools keep changing.

So What Do You Actually Do about It?

In that video, Nate lays out what he calls a skill tree for working effectively in this new environment. It’s worth giving him full credit here: the framework is his, and it’s one of the more practical maps I’ve come across for navigating the current moment. The internet runs on people sharing good ideas, and this is a good one.

The skill tree has four levels.

Level 1: Conditioning (Intent, Context, Constraints)

This is the foundation. Before you can use AI tools well, you need to get good at communicating. Not in a vague, hopeful way, but with precision. What do you actually want? What context does the model need to produce something useful? What constraints should limit its output?

Think of it like giving instructions to a very capable new colleague who knows a lot but doesn’t know your situation, your standards, or your preferences. The more clearly you can articulate those things, the more reliable the output becomes.

This isn’t a technical skill. It’s a thinking skill. It applies whether you’re a lawyer drafting a brief, a marketer writing copy, or an engineer building a feature.
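
To make that concrete, here is a minimal sketch of what conditioning can look like when the work happens in code. Nothing here is a real library; the function name and fields are just one illustrative way of forcing yourself to state intent, context, and constraints explicitly before a model ever sees the request.

```python
# A small sketch of Level 1: make intent, context, and constraints
# explicit instead of hoping the model infers them.

def build_prompt(intent: str, context: str, constraints: list[str]) -> str:
    """Compose a prompt with the three conditioning ingredients spelled out."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Task: {intent}\n\n"
        f"Context you should rely on:\n{context}\n\n"
        f"Constraints on the output:\n{constraint_lines}"
    )

prompt = build_prompt(
    intent="Draft a one-page project update for a non-technical sponsor.",
    context="The migration slipped two weeks because of a vendor API change.",
    constraints=[
        "Plain language, no jargon.",
        "Lead with the schedule impact.",
        "Under 300 words.",
    ],
)
print(prompt)
```

The structure matters more than the wording. Writing the three parts down separately is what surfaces the things your capable new colleague couldn’t have known.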

Level 2: Authority (Verification, Provenance, Permissions)

Once a model generates something, someone still has to decide whether it’s right.

This level is about keeping that decision-making responsibility clearly in human hands. Generation is what the AI does. Decisioning is what you do. Keeping those two things separate is how you stay accountable for outcomes, not just inputs.

Practically, this means building habits of verification: checking sources, testing outputs, creating a trail of evidence so you can explain why something was approved. It also means thinking about access and permissions. What should the AI be able to touch? What should require a human sign-off?

The less visible this level is in your workflow, the more exposed you are when something goes wrong.
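
If part of your work runs through code, the verification habit can start this simply: run whatever automatic checks you can, write an audit record, and keep the final approval as an explicit human step. Everything in this sketch (the file name, the particular checks, the function names) is an illustrative assumption, not a standard.

```python
import hashlib
import json
import time

def record_provenance(prompt: str, output: str, checks: dict,
                      log_path: str = "provenance.jsonl") -> None:
    """Append an audit record so you can later explain why an output was approved."""
    entry = {
        "timestamp": time.time(),
        "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_hash": hashlib.sha256(output.encode()).hexdigest(),
        "checks": checks,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

def review(prompt: str, output: str) -> bool:
    """Generation is the model's job; the approve/reject decision stays human."""
    checks = {
        "non_empty": bool(output.strip()),
        "within_length": len(output) < 5000,
    }
    record_provenance(prompt, output, checks)
    if not all(checks.values()):
        return False
    answer = input("Approve this output? [y/N] ")  # explicit human sign-off
    return answer.strip().lower() == "y"
```

The point is not these particular checks. The point is that every approval leaves a trail, and nothing ships without a human saying yes.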

Level 3: Workflows (Pipelines, Failure Modes, Observability)

Individual prompts are the beginning. At some point, the work gets complex enough that you need systems, not just conversations.

This level is about designing those systems deliberately. Breaking work into stages. Defining what success looks like at each checkpoint. Anticipating what can go wrong (not if, but when) and building in the ability to detect and recover from those failures.

Observability is the technical word for this, but the concept is simple: can you see what’s happening inside your process? If something breaks, can you tell where it broke and why? The bigger the system, the more important this becomes.
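
As a toy sketch of that shape, assuming your workflow is scripted in Python: explicit stages, a success check at each checkpoint, and logging so a failure tells you exactly where it happened. The stage functions here are stand-ins for whatever your real pipeline does.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("pipeline")

def run_pipeline(task: str, stages: list) -> str:
    """Run stages in order; log each checkpoint so failures are locatable."""
    result = task
    for stage_fn, check_fn in stages:
        name = stage_fn.__name__
        log.info("starting stage: %s", name)
        result = stage_fn(result)
        if not check_fn(result):
            # Fail loudly at the checkpoint instead of letting a bad
            # intermediate result flow silently into the next stage.
            raise RuntimeError(f"checkpoint failed after stage: {name}")
        log.info("checkpoint passed: %s", name)
    return result

# Illustrative stages: each transforms the result, each has a check.
def draft(task):
    return f"DRAFT for: {task}"

def tighten(text):
    return text.replace("DRAFT", "FINAL")

stages = [
    (draft, lambda r: r.startswith("DRAFT")),
    (tighten, lambda r: "FINAL" in r),
]
print(run_pipeline("quarterly summary", stages))
```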

Level 4: Compounding (Evals, Feedback Loops, Governance)

The fourth level is where people start to separate. Most people use AI tools in a way that resets every time. They get a result, move on, and start again from scratch next time.

Compounding means building in ways for your process to improve over time. Evaluation frameworks that tell you whether outcomes are getting better or worse. Feedback loops that route that information back into your workflow. Version control and governance so you can understand what changed when results shift.

This is how leverage grows instead of decays. It’s also the level that most people skip because it requires up-front investment with delayed payoff.
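
A compounding loop does not need heavy infrastructure to start. Here is a minimal sketch, assuming a fixed set of test cases and a simple substring check as the scoring rule; real evals would be richer, but the shape (fixed cases, a score, a history file) is what makes improvement measurable.

```python
import json
import statistics
import time

def run_evals(generate, cases, history_path="eval_history.jsonl") -> float:
    """Score a workflow against a fixed eval set and append the result,
    so you can see whether changes make things better or worse."""
    scores = []
    for case in cases:
        output = generate(case["input"])
        scores.append(1.0 if case["must_contain"] in output else 0.0)
    mean = statistics.mean(scores)
    with open(history_path, "a") as f:
        f.write(json.dumps({"timestamp": time.time(), "score": mean}) + "\n")
    return mean

# Illustrative eval set and a stand-in for the generation step.
cases = [
    {"input": "refund policy question", "must_contain": "refund"},
    {"input": "shipping delay question", "must_contain": "shipping"},
]
score = run_evals(lambda x: f"Answer about {x}", cases)
print(f"pass rate: {score:.0%}")
```

Run the same cases after every change to your prompts or workflow, and the history file tells you whether you are actually improving or just moving.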

Who Does This Apply To?

Photo by Studio Republic on Unsplash

The skill tree is universal in principle, but what each level looks like in practice varies enormously by domain. It is worth thinking through some of those differences, because the transition is genuinely smoother in some fields than others.

Software engineering has the most direct feedback loops. Code either runs or it does not. Tests pass or they fail. This makes Level 2, the verification and authority level, relatively tractable: you can automate checks that catch AI errors before they ship. The transition has still been disorienting, because the nature of the programmer’s contribution has changed, but the field has built-in error detection that other domains lack.

Marketing and creative fields have a complex relationship with Level 1. The conditioning skill, learning to articulate what you actually want, maps directly onto skills that strong creatives already have. Good creative directors have always needed to brief people precisely. The challenge is that AI output can look polished enough to feel finished when it is not, and the markers of quality are more subjective, which makes Level 2 harder. You cannot run a unit test on a brand voice.

Law faces a specific version of the authority problem. AI can produce legally plausible text at impressive speed. But legal work depends on precision, jurisdiction-specific accuracy, and professional accountability that cannot be delegated to a system that sometimes confidently produces errors. Level 2 and Level 3 are not optional in this domain. They are existential. The lawyers who will thrive are the ones who use AI to accelerate research and drafting while maintaining rigorous verification and clear accountability trails. The ones who use AI as a shortcut without the verification layer are accumulating risk that will eventually surface.

Medicine is similar but the stakes are even higher and the feedback loops are slower. AI can summarise research, support diagnosis, draft patient communications, and flag patterns in data. The error consequences in clinical settings mean that Level 2 is not just professionally important but ethically non-negotiable. The interesting opportunity is at Level 4: medicine has excellent traditions around evaluation and evidence-based practice that translate well into building the kinds of AI governance frameworks that compound over time.

Education may see one of the more interesting transitions. Teaching has always been about meeting people where they are, adapting explanations to context, and building on what someone already understands. These are deeply conditioning-level skills. Educators who embrace AI as a way to personalise learning materials, generate examples tailored to specific students, and create feedback loops around what is working may find the transition more natural than in fields that are more execution-heavy.

The domains where the transition is most disruptive are generally those where the value was historically created by effort and volume, things like producing large quantities of copy, conducting routine data analysis, or generating standard documentation, rather than by judgement, accountability, or the kind of trust that comes from professional relationships. Those domains are seeing the most direct compression of what humans need to do, which is uncomfortable but also clarifying.

What to Do With the Anxiety

Generated by Canva AI

Here is the most useful reframe I have found for the feeling of being perpetually behind.

Being behind on specific tools is fine. Being behind on the underlying skills is not. The difference matters because tools change weekly and skills compound over years.

If you are spending most of your anxiety energy trying to keep up with every new model release and every new feature announcement, you are running on a treadmill that will never stop.

If instead you are putting that energy into getting better at conditioning, at building verification habits, at designing workflows and evaluation frameworks, you are building something that will serve you regardless of which specific tool is current next month.

In his reply to Boris Cherny, Karpathy struck an optimistic note. He described the feeling of occasionally holding the tool at just the right angle and watching a laser beam solve the problem instantly. That experience, he said, makes the effort worth it.

Source: X

The people who are going to have that experience most reliably are not the ones who have memorised the most settings. They are the ones who have built the deepest understanding of how to work with AI systems effectively, at a level that transfers as the systems change.

The anxiety is real. The pace is real. But the skills that matter are learnable, and the people who start building them now will compound their advantage over everyone who waits until things feel more settled.

They will not feel more settled. The time to start is now.


Nate B Jones’ video “Why Andrej Karpathy Feels ‘Behind’ (And What It Means for Your Career)” is published on his YouTube channel AI News & Strategy Daily. The skill tree framework and the quote used in this article are his work, shared here because good ideas deserve wider reach.