A Tech Founder Used AI to Save His Dog From Cancer. It Worked.
What a sick dog taught us about AI

You’ve probably seen the headlines.
AI is stealing jobs. AI is making students lazy. AI is dangerous, biased, untrustworthy. Scroll through any news feed and the message is consistent:
Be afraid, be very afraid.
So when a Sydney entrepreneur named Paul Conyngham quietly used ChatGPT, AlphaFold, and Grok to help design the world’s first personalised cancer vaccine for his dying rescue dog, Rosie, you might have missed it.
It didn’t have the same dramatic energy as the fear-based stories. It was just a man, a dog, a problem, and a decision to try.
Rosie had advanced mast cell cancer. Traditional treatment slowed the disease but couldn’t stop it. Conyngham, a data scientist and machine learning engineer with 17 years of experience and no medical training, spent $3,000 sequencing her DNA at UNSW’s Ramaciotti Centre for Genomics.
Conyngham used AI tools to analyse the results, identify potential treatment pathways, and connect with Australian scientists who could help him go further.
Most of Rosie’s tumours have since shrunk. She’s back to chasing rabbits.
This is the story we should be telling young people about AI.
The Narrative Gap
Here’s the problem with how we’re raising the next generation around technology:
We hand them devices, warn them about dangers, and leave them to figure out the rest. The education system is scrambling to respond to AI. Parents and teachers are often working with outdated mental models. And the loudest public voices tend to be either uncritical boosters or catastrophists.
Young people are caught in the middle, often anxious about a technology they use every day but don’t feel equipped to understand. The dominant narrative is a loss narrative.
Jobs disappearing. Industries disrupted. Entire skill sets made obsolete overnight. The question most often asked is not “what will you build with this?” but “how do you protect yourself from it?”
That framing is not entirely wrong. Disruption is real, and dismissing people’s legitimate concerns about their livelihoods does not help anyone. But it is profoundly incomplete, and for young people who are still figuring out what kind of future they want to build, it is actively misleading.
Because here is what the fear narrative misses:
AI does not only take. It also gives.
It gives access. It gives capability.
It gives people without institutional resources the ability to do things that previously required a lab, a team, a budget, and years of formal training.
Paul Conyngham is not a medical researcher. He is an engineer who used AI to learn enough, fast enough, to ask the right questions, find the right people, and build something that had never existed before.
That is not a threat to anyone. That is exactly what the technology is supposed to do.
Curiosity as a Qualification
The most underappreciated part of the Rosie story is not the vaccine itself. It is the path that led to it.
Conyngham did not sit down with ChatGPT and have it design a cancer vaccine from scratch. What the AI gave him was a map.
The AI suggested immunotherapy as a direction, pointed him toward UNSW’s Ramaciotti Centre for Genomics, and helped him develop a plan specific enough to take to actual researchers.
When he showed up at the university, he was not a confused outsider asking vague questions. He was someone with a framework, a direction, and enough informed context to be taken seriously. That is a genuinely new kind of access.
For young people, this story should reframe the entire question of what it means to be qualified. The old model required years of formal training before you could contribute to a field.
That model hasn’t disappeared, but it now sits alongside something genuinely new: the ability and willingness to work with AI to compress learning, navigate expertise, and take on complex problems earlier.
This doesn’t mean expertise no longer matters. It means the ceiling of what a motivated young person can do has risen significantly.
Conyngham’s collaboration with UNSW researchers is the model here. He brought curiosity and AI fluency, they brought deep scientific expertise, and together they achieved something neither could have done alone.
What a Responsible Use of AI Looks Like
Using AI well is not just about knowing which prompts to use or which tools to stack. It is about understanding what AI can and cannot do, and knowing when to hand the work over to people who carry real accountability for the outcome.
Conyngham did not try to administer a vaccine he designed by himself. He brought his AI-guided research to qualified scientists and paid for their expertise to take it from hypothesis to treatment.
The vaccine itself was developed by Professor Thordarson and his team at UNSW. Rosie’s care was supervised by veterinary professionals throughout. The AI was the starting point, the bridge, and the accelerant. The humans bore responsibility for what actually happened.
That distinction is not a limitation but a design principle for using these tools responsibly and effectively. AI is extraordinarily good at helping you explore a problem space, identify connections you would not have found on your own, and generate enough structured understanding to have a meaningful conversation with someone who knows more than you do. It’s not a replacement for that person. It is a way to reach them.
Young people who internalise that distinction early will be better equipped to use AI in ways that are genuinely impactful. Not because they understand every line of how the models work, but because they understand the difference between AI as a starting point and AI as a final answer.
A Gentle Reminder from Rosie
Rosie is back to chasing rabbits.
That image is worth sitting with. A dog that was given months to live is now running in the yard because her owner refused to accept the boundaries of his own expertise and used AI to reach beyond them.
Professor Thordarson put it plainly on X after Rosie’s results came in: this technology can democratise the process of designing cancer treatments. Not just for dogs. For people.
That happened because someone decided AI was a tool for solving problems rather than a source of problems to be afraid of.
The fear narrative will keep generating clicks. It will keep filling feeds with warnings and predictions and anxious commentary about what is being lost. Some of that is legitimate and worth reading.
But every so often, a story comes along that points in a completely different direction. A dog chasing a rabbit at the park. A tumour that shrank. A vaccine that had never existed before, built in under two months, by a community of people that AI helped connect.
That is also the story. Probably the more important one.
