You Sound Like ChatGPT Now
AI isn't just changing how we write. It's rewriting how we think, speak, and connect with each other.

Academics started using the word "delve" markedly more often after ChatGPT launched. Not in their writing, but in their speech. On camera. In lectures. In presentations they'd given the same way for years.
"Meticulous." "Adept." "Nuanced." "Realm."
These ChatGPT-favoured words surged across nearly 280,000 YouTube videos from academic institutions. Meanwhile, words the AI uses less, like "bolster," "unearth," and "nuance" as a verb, quietly faded from our vocabulary.
The Max Planck Institute for Human Development documented this. Researchers called it the "seep-in effect."
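The measurement behind a finding like this is simple to sketch: count how often target words appear in transcripts before and after a cutoff date, and compare their relative frequencies. Here's a hedged illustration; the toy transcripts, word list, and numbers below are invented for demonstration, not the study's actual data or method:

```python
from collections import Counter
import re

def word_frequencies(texts):
    """Relative frequency of each word across a list of transcripts."""
    counts = Counter()
    total = 0
    for text in texts:
        words = re.findall(r"[a-z']+", text.lower())
        counts.update(words)
        total += len(words)
    return {w: c / total for w, c in counts.items()}

def frequency_shift(before, after, targets):
    """Percentage change in relative frequency for each target word."""
    freq_before = word_frequencies(before)
    freq_after = word_frequencies(after)
    shifts = {}
    for word in targets:
        b = freq_before.get(word, 0.0)
        a = freq_after.get(word, 0.0)
        if b > 0:
            shifts[word] = (a - b) / b * 100
    return shifts

# Toy transcripts standing in for pre- and post-ChatGPT lecture corpora.
pre = ["today we delve into the data and discuss the results",
       "we examine the method and discuss its limits"]
post = ["today we delve into the data and delve into the results",
        "let us delve into the method and examine its nuances"]

print(frequency_shift(pre, post, ["delve", "examine"]))
# "delve" surges; "examine" drifts slightly down
```

The real study worked at a far larger scale, across hundreds of thousands of videos, but the core comparison is this kind of before-and-after frequency shift.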
But let's call it what it really is: reverse training.
We Are Optimising Ourselves
We like to think we're in control of our tools. We pick them up, use them, put them down. They serve us. We're the ones giving commands.
But here's the uncomfortable truth:
Every time you rephrase a thought so your AI transcriber catches it cleanly, you've been trained.
Every time you announce "I have three points" before speaking, you've been trained.
Every time you kill a joke because the meeting summariser won't get it, you've been trained.
We've spent years asking how to make AI align with human values. The researchers at Max Planck suggest we've had it backwards: "While intensive research focuses on machines' alignment with human behaviour, our study suggests that the reverse may also be occurring." (Lopez et al., 2024).
We are aligning ourselves with the machine.
And we're doing it voluntarily, unconsciously, one optimised sentence at a time.
How Language Flattening Works
This isn't just about vocabulary. It's about the systematic compression of human expression into machine-readable formats.
UNESCO warns that language flattening is a real phenomenon: AI-generated content strips away linguistic richness, regional accents, and cultural variation.
When we optimise our speech for machines, we don't just change our words. We change what we're able to express.
Let me break down how this happens:
1. The Search Engine Effect

Remember when you learned to search Google? You didn't type "What's the weather like in Sydney today?" You typed "weather Sydney." Two words. No articles. No pleasantries. Pure information retrieval.
You trained yourself to speak Google's language.
Now the same thing is happening with AI tools. You've learned to speak in clear declarations. To announce your intent before expressing it. To avoid the tangential thinking that makes meetings human.
"I want to flag a risk" has replaced "I'm a bit worried about something, and I'm not sure if it's a real issue, but..."
The second version is how people actually think. The first is what an AI transcription tool captures cleanly.
2. The Loss of Nuance

When you speak for the machine's benefit, you pay a tax in texture.
Sarcasm becomes risky. The AI might log your joke as a serious statement. ("Meeting notes: Tathagat suggested we abandon the project and go home.")
Cultural idioms become friction. Your "she'll be right" doesn't translate for a globally-trained model.
Emotional subtext becomes invisible. The sentiment analysis sees words, not the pause before them.
Karelia Vázquez (2025) put it starkly: "Robotic verbiage erases vulnerability, humour, and everything that makes us human."
What we're left with is communication that's efficient, clear, and utterly sanitised. It's language optimised for machine understanding, not human connection.
3. The Standardisation Trap

A Cornell study found that Indian English speakers are shifting toward American English patterns when using AI tools. Regional dialects, the linguistic fingerprints of culture and community, are being smoothed away.
This isn't accidental. It's architectural. AI models are trained predominantly on Standard American English. They don't just prefer it. They actively struggle with alternatives.
The Verge documented cases of ChatGPT repeating non-standard English prompts back to users with exaggerated, almost mocking versions of their dialect. One Singaporean user described the AI's response as "super exaggerated Singlish" that was "slightly cringeworthy."
The message is clear: speak Standard American English, or be misunderstood.
So people adapt. And the diversity of human expression contracts.
4. The Prompt Engineering of Daily Life

Nobody warned us about this, but we're all becoming prompt engineers now, and not just for ChatGPT or other LLMs. We're prompt-engineering our own thoughts.
You're in a meeting. You have a complex idea with multiple layers, some uncertainty, and a few contradictory elements. In a purely human conversation, you'd think out loud. You'd explore the idea verbally, circle back, and refine as you go.
But now there's an AI notetaker in the meeting. And you know from experience that if you think out loud, the summary will be a mess. The AI will pick up your half-formed thoughts as if they were conclusions. Your verbal processing will look like indecision in the notes.
So you don't think out loud anymore. You formulate the idea internally first. Then you deliver it clearly, as if reading from a script you just wrote in your head.
You've optimised yourself. But you've also killed the collaborative thinking process that makes meetings valuable.
5. The Empathy Erosion

Perhaps most troubling is what happens to emotional intelligence in this new paradigm.
Research from Cornell University shows that communications suspected of using AI assistance are judged as less cooperative and less affiliative by recipients.
But here's the paradox: to make AI tools work well, we communicate in ways that trigger exactly this perception.
When you speak in clear, structured sentences with proper grammar and no emotional variance, you sound professional. You also sound like you might be using AI. And increasingly, that makes you sound less trustworthy, less collaborative, less human.
We're trapped in a bizarre loop: optimising our communication for machines makes us seem more like machines to other humans.
The Cultural Implications
This isn't just about individual behaviour. It's about culture-wide linguistic shift.
The proliferation of AI-generated text is staggering: algorithms produce vast amounts of content every day, and much of it is fed back into new systems as training data.
This creates a recursive loop where AI learns from AI-influenced human speech, which then influences more humans, who produce more AI-influenced speech.
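The dynamics of that loop are easy to see in a toy simulation. Assume, purely for illustration, that each "generation" human word choice drifts a small fraction toward the model's preferred distribution (the words, starting shares, and drift rate below are all invented; real retraining would amplify the loop further, but even a fixed model pulls human usage toward its own preferences):

```python
def drift(human, model, alpha=0.2, generations=10):
    """Each generation, human usage shifts fraction `alpha` toward the model's
    preferred distribution. Returns the history of human distributions."""
    history = [dict(human)]
    for _ in range(generations):
        human = {w: (1 - alpha) * human[w] + alpha * model[w] for w in human}
        history.append(human)
    return history

# Share of usage between two near-synonyms (invented numbers):
# humans start by strongly preferring "explore"; the model prefers "delve".
human = {"delve": 0.1, "explore": 0.9}
model = {"delve": 0.6, "explore": 0.4}

history = drift(human, model)
print(history[-1])  # human usage has drifted most of the way to the model's
```

Ten generations in, the human distribution sits close to the machine's, without anyone ever deciding to change how they speak.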
The impact of language models on language flattening may be inversely related to the number of native speakers, meaning smaller languages and cultural groups face even greater pressure to conform to AI-optimised communication patterns.
Think about what we're losing:
- Regional varieties that carry centuries of cultural identity
- Informal registers that build social bonds
- Creative language play that drives linguistic innovation
- Ambiguity and poetry that make communication interesting
- The mess that makes us human
In exchange, we get efficiency. Clarity. Machine-readable communication.
Is that really a fair trade?
A Better Way Forward

I'm not suggesting we abandon AI tools. I build them myself. Our meeting agent transcribes, translates, and summarises conversations in real-time, and it genuinely makes global collaboration easier.
But there's a difference between using AI as a tool and letting AI dictate how we communicate.
Here's what I've started doing in my own meetings:
I speak naturally first, optimise never. If the AI misses something, that's feedback for improving the tool, not a signal that I should change how I talk.
I flag machine-friendly moments. When I need something captured perfectly, I'll say "for the record" and then give a clean statement. The rest of the time, I'm human.
I preserve the mess. Those tangents, half-formed thoughts, and verbal exploration? They're features, not bugs. If the AI can't handle them, we need better AI, not more robotic humans.
I check the transcript critically. When the AI summary misses nuance or context, I add it manually. Yes, it takes time. That time is worth preserving the richness of what was actually said.
The goal isn't perfect transcripts. It's human communication that happens to be transcribed.

The Script We're Writing
AI is rewriting the human script. Not through force, but through subtle pressure to optimise, clarify, and standardise.
Every time we simplify our language for a voice assistant, we train ourselves to think in simpler patterns.
Every time we avoid idioms in a transcribed meeting, we lose a bit of cultural expression.
Every time we speak in "prompts" instead of naturally, we become a little more like the machines we're trying to control.
As AI models become embedded in the architecture of modern life, the very nature of human interaction is being reprogrammed, with our conversations, political debates, and emotional lives being subtly but profoundly reshaped.
The costs of this transformation aren't measured in productivity metrics. They're measured in empathy, human connection, and the shared understanding that underpins society.
Can you see it now?
Listen to yourself in your next meeting. Notice when you're speaking for the humans in the room versus speaking for the AI in the corner. Notice when you catch yourself simplifying, clarifying, standardising.
And then ask yourself: is this the script you want to follow?
Because we're writing it, line by line, conversation by conversation. And once it's written, it'll be very hard to unwrite.
The machines are learning from us. But we're learning from them too. And that's a far more profound change than any algorithm.
Want to use AI meeting tools without letting them flatten your communication? VideoTranslatorAI transcribes, translates, and summarises in real-time whilst preserving the nuance of natural conversation. It's built on the principle that AI should adapt to humans, not the other way around. Because the best tools amplify what makes us human rather than training us to sound like machines.
