ChatGPT Image 2 Palm Reading is Impressive. Don’t Do It.

The viral trend with irreversible consequences

April 29, 2026 · 7 min read

If you’re on X, you’ve probably seen it by now. AI optimist Linus Ekenstam (@LinusEkenstam) posted a photo of his palm to ChatGPT and got a full palm reading back.

Within hours, hundreds of people were doing the same thing. It’s fun, it’s shareable, and honestly, it’s a little addictive.

Source: X

Then another account posted a meme that stopped a lot of people mid-scroll.

“Sir, they’re uploading their 4K biometric data to ChatGPT for free. Surveillance agencies spent years and billions trying to get this.” — @shiri_shh via X

The meme uploaded by @shiri_shh, along with his satirical tweet.

The community note on Ekenstam’s original post was even more direct:

“Uploading palm photos shares extractable fingerprints and other biometric data that cannot be changed if compromised or misused.”

This is worth taking seriously. Not to kill the fun, but because this is exactly the kind of moment where a charming viral trend quietly creates a data problem people will not notice for years.

The Thing About Biometric Data

Photo by Onur Binay on Unsplash

Most people have a rough intuition about data privacy. They know not to share their password. They are cautious about their bank details. They have accepted, reluctantly, that their email address and phone number are already widely available.

Palm prints are different in kind, not just degree. Biometric data, which includes facial geometry, fingerprints, iris scans, voice patterns, and palm prints, is permanent.

If your password leaks, you change it. If your credit card is compromised, you get a new one. But if your palm print, your fingerprint, or your detailed facial geometry ends up in a database that gets breached? There is no replacement. That hand is the hand you have for the rest of your life.

The US Department of Justice made this precise point in a December 2024 final rule that explicitly restricted transactions involving bulk biometric data, including facial images, palm prints, and fingerprints. The rule cited “extraordinary” national security concerns and flagged the risk that this data could be used to develop and enhance AI capabilities, among other misuse scenarios.

When millions of people photograph their palms for a fun chatbot reading, each image becomes a high-resolution biometric sample sitting on a server somewhere. Most of them do so without knowing how long it is retained, who can access it, or what happens in the event of a breach.

The answer to all three questions, in most cases, is: you genuinely don’t know.

And that uncertainty is the problem.

The Free AI Photo Editor Problem I Raised Last Year

The viral AI winter photoshoot in 2025

I wrote about this dynamic last year in the context of free AI photo editors. A fun tool goes viral. Caricature style. Ghibli filter. Winter photoshoot for the Christmas season.

The friction is zero, the output is immediately shareable, and the upload feels trivial. Millions of people participate because why not?

What they are uploading, though, is detailed biometric or facial data, sent to a server they know nothing about, under terms of service that are rarely read. The data retention policy is not something people click through to understand. And the downstream use, whether for model improvement, sale to data brokers, or exposure in a future breach, is genuinely uncertain.

Next time you generate a crowd scene with an AI image tool, take a moment and look at the faces in the background. Some of those faces are procedurally generated from statistical distributions, but some of them feel oddly specific. They appear too detailed for random generation and too distinct to be purely synthetic.

This raises an important question:

Where do you think those faces come from?

I am not making a definitive claim about any particular platform’s practices. What I am saying is that the question is worth asking.

What’s happening now with palm readings is structurally identical. Even if a major platform like OpenAI handles the data responsibly, the broader ecosystem around it does not stay contained. Data brokers exist and operate at scale. Corporate acquisitions happen. APIs get licensed to third parties. Policies that protect you today can change tomorrow.

The entertainment value is immediate. The risk is diffuse and slow. That is exactly the dynamic that makes it easy to dismiss until it is not.

The Actual Risk Breakdown

Photo by kartik programmer on Unsplash

Let me be specific about what the concerns actually are, rather than gesturing vaguely at the general direction.

Irreversibility. A compromised palm print cannot be changed. Stolen biometric templates enable spoofing, fraud, and AI-assisted impersonation in ways that persist for the lifetime of the individual. This is not theoretical: the 2019 BioStar 2 leak, for example, exposed the fingerprint records of over a million people.

Data handling uncertainty. Uploaded content to a consumer AI platform may be stored, reviewed for safety, or used to improve model performance. Policies emphasise user controls and opt-out mechanisms, but the enforcement of those mechanisms in edge cases such as breaches, subpoenas, company acquisitions, or policy updates is not something users can verify independently.

Cultural normalisation. Each viral biometric sharing trend lowers the collective threshold for what feels acceptable. When palm prints become as casually shared as selfies, the protective instinct that would otherwise pause people before uploading degrades. That cultural shift is slow, hard to reverse, and genuinely costly.

Ecosystem effects. High-quality palm and facial data, even if the original platform handles it responsibly, contributes to the broader capability set for biometric modelling. That capability benefits legitimate applications and potential misuse with equal efficiency.

Practical Steps That Actually Help

Photo by Towfiqu barbhuiya on Unsplash

You don’t need to swear off AI to manage this well. Here’s a reasonable approach:

Opt out of training data use. Most major platforms have a setting for this. Find it and use it. It takes less time than the palm reading itself.

Reduce image quality for casual experiments. A blurrier or partially obscured image still allows you to play with the feature without handing over a precise biometric sample.
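As a rough illustration, the degrading step can be scripted before anything leaves your device. This is a minimal sketch using the Pillow imaging library; `degrade_for_upload` and its parameter values are hypothetical choices, not a vetted anonymisation standard, and a determined adversary may still recover some detail.

```python
# Hypothetical pre-upload scrubbing step: downscale and lightly blur a photo
# so it is still usable for a chatbot prompt but far less useful as a
# high-resolution biometric sample. Requires the Pillow library (pip install pillow).
from PIL import Image, ImageFilter


def degrade_for_upload(img: Image.Image,
                       max_side: int = 640,
                       blur_radius: float = 2.0) -> Image.Image:
    """Return a downscaled, blurred copy of `img`; the original is untouched."""
    out = img.copy()
    out.thumbnail((max_side, max_side))  # in-place resize, keeps aspect ratio
    return out.filter(ImageFilter.GaussianBlur(blur_radius))


# Example: simulate a 12 MP phone photo and shrink it before sharing.
original = Image.new("RGB", (3000, 4000), "tan")
safe = degrade_for_upload(original)
print(safe.size)  # longest side is now at most 640 px
```

The point of the sketch is the workflow, not the exact numbers: resizing destroys the fine ridge detail that makes a palm photo biometrically valuable, while leaving enough for the entertainment use case.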

Use enterprise or paid accounts where the terms around data use are more explicit, and your leverage as a paying customer is higher.

Explore local AI tools for anything genuinely sensitive. Local models process data on your own device and don’t transmit anything externally.

Apply a simple test before uploading. Would you post this image permanently and publicly in full detail? If not, that hesitation is telling you something worth listening to.

And please: whatever platforms you use for entertainment, do not rely solely on palm or fingerprint biometrics to unlock your phone or authorise banking if those features can be spoofed by someone with a detailed photograph of your hand.

The Bottom Line

Linus Ekenstam’s palm reading post is delightful. The engagement is understandable. ChatGPT producing a seemingly accurate personality reading from a hand photo is genuinely impressive and a little uncanny, which is exactly why it spreads.

But the fun and the risk are not mutually exclusive. Hundreds or even thousands of people uploading high-resolution palm photographs because a viral post told them it was cool is exactly the kind of moment that produces large biometric datasets without any meaningful informed consent process.

You are allowed to try the thing. Just do it with the same care you would apply to sharing a sensitive document. Ask yourself whether the entertainment value is worth a permanent, irrevocable data point. Most of the time, a quick crop and a lower resolution will let you enjoy the experience with a fraction of the exposure.

The technology is not the problem. The assumption that casual uploads are consequence-free is.


At Elephant Stripes, we prioritise security at every stage of development, ensuring your data is protected with the latest encryption technologies and best practices.

Ready to reinvent your product or workflow for the AI era? Reach out to ElephantStripes’ custom AI-app development studio.