Should You Still Learn to Code in 2026?
Here’s what OpenAI, NVIDIA, and Anthropic say you need to survive the age of AI

The most powerful artificial intelligence companies on Earth are hiring for soft skills. They’re advocating for humanities degrees. They’re telling people that creativity matters more than coding.
If this sounds backwards, you’re not alone. But when the architects of AI unanimously point toward human capabilities as the skills of the future, it’s worth paying attention.
In a recent OpenAI town hall, Sam Altman was asked what skills people should develop as AI transforms every industry. His answer caught many off guard.
“Become high agency,” he said. “Get good at generating ideas. Be very resilient. Be very adaptable to a rapidly changing world. These are going to matter more than any specific skill.”
Not coding. Not prompt engineering. Not technical expertise.
Agency. Ideas. Resilience. Adaptability.
This advice becomes even more striking when you realise it’s coming from multiple leaders across the AI industry. From OpenAI to NVIDIA to Anthropic, the founders shaping our technological future are pointing toward the same unexpected conclusion.
The Death of Raw Intelligence as Currency
For decades, we’ve organised education and careers around a simple premise: accumulate knowledge, demonstrate expertise, climb the ladder. But Altman argues this model is becoming obsolete.
“There will be a kind of ability we still really value, but it will not be raw, intellectual horsepower,” he explained in a podcast with Adam Grant.
The new premium ability? Learning how to ask great questions.
Think about that shift. In a world where AI can access and synthesise virtually all human knowledge in seconds, memorising facts or knowing where to find them loses its value.
What remains valuable is the uniquely human capacity to frame problems, to ask questions nobody else is asking, to see connections others miss.
Altman put it bluntly when reflecting on future generations: “My kid is never gonna grow up being smarter than AI. And that’ll be natural.”
The question isn’t whether AI will surpass human intelligence in specific domains. It already has. The question is what humans will do that AI cannot.
Domain Expertise Meets AI Collaboration

At the Cisco AI Summit held earlier this month, NVIDIA CEO Jensen Huang offered a complementary perspective.
The man whose company powers most of the world’s AI infrastructure argues that success in the AI era no longer hinges on coding, which, he believes, has become a commodity.
“All of you have something of great value, which is domain expertise, to understand the customer, understand the problem. And that is the ultimate value,” Huang emphasised.
This reframes the entire conversation. Rather than viewing AI as something to master technically, Huang sees it as something to collaborate with.
Your unique domain knowledge, whether in medicine, law, education, agriculture or any other field, becomes more valuable when amplified by AI.
Huang also gives us a specific recommendation: treat AI like your smartest colleague.
Learn to work alongside it, to prompt it effectively, to blend your expertise with its capabilities. “It is vital that we upskill everyone, and the upskilling process, I believe, will be delightful. Surprising,” he said at last year’s World Government Summit.
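What does “blending your expertise with its capabilities” look like in practice? Here is a minimal sketch of my own (not something Huang prescribes): instead of asking an AI assistant a generic question, a domain expert folds their context into the prompt up front. The function name, the agronomy scenario, and the wording are all hypothetical illustrations.

```python
def expert_prompt(domain_context: str, question: str) -> str:
    """Blend a specialist's context with their question, so the AI
    responds as a colleague rather than a generic search engine."""
    return (
        "You are collaborating with a domain expert. Their context:\n"
        f"{domain_context}\n\n"
        f"Their question: {question}\n"
        "Flag any assumptions in the context before you answer."
    )

# A hypothetical agronomist supplies what only they know about the problem.
prompt = expert_prompt(
    "Smallholder maize farms, rain-fed, clay soils prone to waterlogging.",
    "Would switching to raised-bed planting improve yields?",
)
print(prompt)
```

The point is the division of labour: the human contributes the ground truth about the problem, and the AI contributes breadth; neither alone produces the useful answer.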
The smartest people, according to Huang, aren’t necessarily the most technical; they’re the ones who can see around corners.
Foresight, intuition, strategic thinking. These human qualities become differentiators precisely because AI excels at processing what already exists, not imagining what could be.
The Humanities Revival

Perhaps the most counterintuitive advice comes from Daniela Amodei, co-founder and president of Anthropic, the company behind Claude AI.
In a world racing to build artificial general intelligence, Amodei argues that studying the humanities will be “more important than ever.”
When Anthropic hires, they actively seek soft skills: communication, ethical reasoning, the ability to understand human context and nuance.
Why? Because AI excels at processing information but struggles with human context, ethical reasoning and nuanced communication.
Technical skills can be automated or augmented. But understanding human motivation, navigating complex social dynamics, exercising moral judgment in ambiguous situations: these capabilities remain stubbornly human.
The humanities teach exactly these skills. Literature develops empathy. Philosophy sharpens ethical reasoning. History provides pattern recognition across human behaviour.
In an age of artificial intelligence, these “soft” skills may prove to be the hardest to replace.
Three Adaptation Strategies

When I synthesise the advice from AI’s architects, three strategies emerge:
Strategy 1: Develop meta-skills over specific skills
Specific skills become outdated. Meta-skills (the ability to learn, adapt and transfer knowledge) remain valuable regardless of what changes.
Altman’s emphasis on agency and resilience reflects this. These aren’t skills you deploy; they’re capacities that enable you to develop whatever skills become necessary.
Strategy 2: Go deep, then collaborate
Huang’s advice suggests a two-step process. First, develop genuine expertise in a domain you care about. Then, learn to collaborate with AI to amplify that expertise.
Shallow knowledge of many things may matter less than deep understanding of something specific, combined with the ability to leverage AI for everything else.
Strategy 3: Invest in irreplaceable human qualities
Communication, empathy, ethical reasoning, creativity, judgment: these capabilities remain stubbornly human. As technical tasks become automated, they become comparatively more valuable.
Amodei’s advocacy for humanities education reflects this. The “soft” skills may be the hardest to replicate, making them the safest long-term investment.
A Different Kind of Race

There’s a race underway, but it may not be the race we imagined.
It’s not humans versus machines. The founders building AI don’t describe competition; they describe collaboration.
It’s not technical skills versus soft skills. Both matter, but soft skills may have more lasting value.
The real race is against stagnation. Against the temptation to assume current skills will remain sufficient. Against the comfort of expertise that resists change.
Moving Forward
The path forward isn’t about predicting exactly which skills will matter in 2030 or 2040. Predictions that specific are impossible.
It’s about developing the capacity to adapt regardless of what emerges. Agency. Resilience. Continuous learning. Deep expertise combined with collaborative flexibility. Human capabilities that machines cannot replicate.
These aren’t things you learn from a textbook. They’re capabilities you develop by doing hard things, failing, learning, and trying again.
Practice asking great questions. Not just good questions, but the questions that reveal hidden assumptions, clarify confusion, and open new possibilities.
The founders building AI are offering a warning and an invitation. The warning: change is coming faster than most people expect. The invitation: the future favours those who develop distinctly human strengths.
The question isn’t whether you can keep up with AI.
It’s whether you can keep growing alongside it.
