Elon Musk Says to Trust Grok. It Mistook a Breast for a Brain
Why his medical advice is dangerously irresponsible

On 18 February 2026, Elon Musk tweeted something that should alarm anyone who understands both AI’s capabilities and its limitations.
“You can just take a picture of your medical data or upload the file to get a second opinion from Grok.”

Within an hour, the post drew nearly 500,000 views. What shocked me was the comments: full of people agreeing, sharing their own experiences of using Grok for medical advice, some even claiming that today’s doctors do worse than an AI chatbot.
This isn’t the first time Musk has made this recommendation. He’s been encouraging people to upload X-rays, MRIs, and blood tests to Grok since October 2024.
In a June video that resurfaced in January 2026, he claimed: “I have seen cases where it’s actually better than what doctors tell you.”
Here’s the problem: Elon Musk has over 200 million followers. He owns the platform. He owns the AI. And he has cultivated a following that treats his statements not as opinions but as signals of where the world is heading.
When he tells people an AI chatbot can replace medical judgment, people listen. And that’s dangerous.
The Track Record Nobody’s Talking About
Let’s examine what happens when people actually use Grok or any AI chatbot for medical advice.
Doctors who tested Grok following Musk’s 2024 invitation reported troubling failures. The AI failed to identify a “textbook case” of tuberculosis. It misidentified a broken clavicle as a dislocated shoulder. In one widely reported incident, Grok mistook a breast MRI for a brain MRI.
Read that again. A breast scan. Mistaken for a brain.
These aren’t minor errors. They’re fundamental failures in pattern recognition that any first-year medical student would catch. Yet Musk continues promoting Grok as capable of providing medical diagnoses that are “actually better than what doctors tell you.”
A May 2025 peer-reviewed study in the journal Diagnostics assessed ChatGPT-4o, Grok, and Google’s Gemini against 35,711 brain MRI slices. Whilst Grok was the strongest of the three at identifying pathologies, the researchers noted that all models showed significant limitations.
Strongest of the three doesn’t mean reliable enough for medical decisions.
It means less terrible than the other options.
The Privacy Nightmare
Beyond accuracy issues, there’s a privacy catastrophe waiting to happen.
When you upload medical data to Grok, where does it go? X’s privacy policy states they don’t “sell your personal information,” but, depending on your settings, they may provide “certain third parties with information to help us offer or operate our products and services.”
That’s deliberately vague language. What third parties? For what purposes? Your medical data is some of the most sensitive information you possess. Uploading it to a social media platform’s AI tool means trusting that platform with information protected by medical privacy laws in clinical settings.
As Matthew McCoy, assistant professor of medical ethics and health policy at the University of Pennsylvania, told the New York Times: “As an individual user, would I feel comfortable contributing health data? Absolutely not.”
Bradley Malin, professor of biomedical informatics at Vanderbilt University, added: “This is very personal information, and you don’t exactly know what Grok is going to do with it.”
In 2024, X users were automatically opted in to having their data used to train Grok. You had to manually opt out. Now Musk is asking people to voluntarily upload medical records to that same system.
Ireland’s Data Protection Commission launched a formal inquiry into X on 17 February 2026, examining whether X complied with GDPR obligations following reports that users had prompted Grok to generate non-consensual sexualised images, including of children. This is the same platform Musk is encouraging people to trust with medical data.
Why People Are Falling for This
The comments on Musk’s posts reveal why this message resonates. Some celebrate not needing specialists anymore. Others claim Grok helped them with post-surgery care or infected wounds.
There’s truth in that sentiment. Healthcare access is a genuine problem. Many people can’t afford specialists. A primary care consultation in the US costs $100 to $200. Grok access costs $16 monthly as part of X Premium.
The cost difference is real. The accessibility gap is real.
But the solution isn’t replacing medical expertise with an AI that mistakes a breast for a brain.
Where AI Actually Helps Healthcare
I’m not anti-AI in healthcare. I build AI tools. I understand the technology’s genuine value.
AI excels at specific, well-defined tasks in clinical settings:
Analysing thousands of medical images to identify patterns humans might miss
Processing vast amounts of research literature to suggest treatment options
Monitoring patient vital signs for early warning indicators
Assisting radiologists by flagging areas that need closer examination
Helping researchers discover drug candidates faster
Notice what all these have in common? They’re tools used by medical professionals, not replacements for medical judgment.
When Google’s DeepMind developed AI for detecting diabetic retinopathy, it was deployed in clinical settings with doctor oversight. When AI helps identify potential cancers in mammograms, radiologists review the results. When AI suggests drug candidates, researchers test them rigorously.
These AI systems are trained on millions of verified medical images with expert validation. They’re tested extensively before clinical deployment. They’re used alongside human expertise, not instead of it.
Grok isn’t that. Grok is a general-purpose chatbot being crowd-tested on X users who upload their medical data for analysis.
The quality control is “let us know where Grok gets it right or needs work,” as Musk tweeted in October 2024.

You’re the experiment. Your health is the test case.
What Actually Happens With Bad Diagnoses
When Grok misses tuberculosis or misidentifies a fracture, real consequences follow:
A person with tuberculosis goes untreated, potentially spreading a serious infectious disease.
Someone with a broken clavicle receives the wrong treatment recommendations, leading to improper healing or complications.
Someone told they have a dislocated shoulder when they have a fracture might attempt movements that worsen the injury.
Medical errors cost lives. According to Johns Hopkins research, medical errors are the third leading cause of death in the United States, contributing to over 250,000 deaths annually. That’s with trained professionals making decisions.
Now imagine adding untrained AI making recommendations based on uploaded photos, interpreted by patients without medical training, who then make treatment decisions without professional guidance.
The scale of potential harm is staggering.
The Responsibility Musk Isn’t Taking
When someone with 200 million followers tells people to trust an AI for medical advice, they bear responsibility for the consequences.
Musk claims he’s seen cases where Grok is “actually better than what doctors tell you.” Even if that’s true in isolated cases, it’s irresponsible to generalise from anecdotes to public health advice.
Medicine is complex. Diagnosis requires understanding symptoms in context, patient history, environmental factors, and subtle clinical signs that don’t show up in a single scan or blood test. Doctors spend years learning to interpret these factors together.
An AI trained on images can identify patterns. It can’t understand context. It doesn’t know that your back pain is related to a car accident three months ago. It doesn’t know you’re taking medications that affect blood test results. It doesn’t know your family history of heart disease.
Context matters in medicine. AI doesn’t have it.
What People Should Actually Do
If you’re concerned about medical costs or want a second opinion, here are better options:
Seek second opinions from actual doctors. Many healthcare systems offer second opinion services. Telemedicine has made this more accessible and affordable.
Use patient advocacy services. Many hospitals and insurance companies provide patient advocates who can help you understand diagnoses and explore options.
Research your condition through reputable sources. Medical organisations like Mayo Clinic, Johns Hopkins, and NHS provide reliable patient information online.
Ask your doctor to explain. If you don’t understand a diagnosis, ask for clarification. Good doctors will take time to explain in terms you understand.
Use AI as a conversation starter, not a decision maker. If you want to understand medical terminology, fine. But take that information to a real doctor for interpretation and context.
What you shouldn’t do is upload medical data to a social media platform’s AI chatbot and base healthcare decisions on its output.
The Line We Shouldn’t Cross
**AI in healthcare is not the problem. AI as healthcare is the problem.**
Let this sentence sink in.
If you read Musk’s tweet and thought “that sounds reasonable,” I am not here to insult your intelligence. The idea of a second opinion is appealing. Healthcare systems are imperfect. Appointments are expensive. Wait times are brutal. The frustration is real.
But the answer to a flawed system is not to bypass it entirely with an unregulated, commercially motivated chatbot that hallucinates answers roughly one time in five.
The answer is to demand better access, better funding, and better systems, while using AI to support the professionals who dedicate their lives to keeping us alive.
Musk’s recommendation is not innovation. It is recklessness dressed up as progress. And when someone with his reach says it, the stakes are not theoretical. They are measured in missed diagnoses, delayed treatments, and lives that could have been saved by a doctor who was never consulted.
Your chatbot is not your doctor. Please do not let a billionaire convince you otherwise.

