Damus
FLASH
@flash
⚡💬 WATCH - The CEO of Google DeepMind just admitted that if the decision had been his, we would've cured cancer before anyone ever used ChatGPT.

And that's not even the scariest thing he said in a recent interview.

Demis Hassabis is one of the most important people alive in AI.

He won the Nobel Prize last year for AlphaFold, the system that cracked the 50-year protein-folding problem. Three million scientists now use his tool. Almost every new drug being developed will touch it at some stage.

In a new interview, he was asked about the moment ChatGPT launched and Google went into "code red." His answer was one of the most revealing things any AI leader has ever said on the record:

"If I'd had my way, I would have left AI in the lab for longer. Done more things like AlphaFold. Maybe cured cancer or something like that."

Read that again.

The man running Google's entire AI division is publicly saying the commercial AI race we're all living through was a MISTAKE. That the industry got hijacked by a chatbot when it could have been solving the biggest problems in science and medicine.

His vision was simple:

Build AI slowly, carefully, like CERN. Use it to crack root node problems one at a time. Cancer. Energy. New materials.

Let humanity benefit from real breakthroughs while the foundational science was figured out over a decade or two.

Then ChatGPT dropped in November 2022 and everything changed.

Demis described what happened next as getting locked into a "ferocious commercial pressure race" that none of the labs can escape. On top of that, the US-vs-China dynamic added geopolitical pressure.

The result is everyone sprinting toward products instead of breakthroughs, shipping chatbots while the scientific opportunity gets buried under marketing cycles and quarterly earnings.

But he's not saying progress isn't happening...

He's saying the progress got redirected away from the things that actually matter most.

And then it got even scarier:

Because when Demis was asked what he worries about with AI, he laid out two threats.

The first is what everyone talks about: Bad actors using AI for harm. Terrorist groups. Hostile nation states. Cyberattacks at scale.

But that's not the threat he's most worried about.

His second worry is AI itself going rogue. Not today's models. The models coming in the next two to four years as the industry enters what he calls "the agentic era."

Systems that can complete entire tasks autonomously. Systems that are increasingly capable and increasingly hard to control.

His exact words:

"How do we make sure the guardrails are put in place so they do exactly what they've been told to do, and there's no way of them circumventing that or accidentally breaching those guardrails? That's going to be an incredibly hard technical challenge if you think about how powerful and smart and capable these systems eventually get."

A Nobel Prize winner who runs one of the three most advanced AI labs on Earth just said publicly that within two to four years, we're entering a phase where AI alignment becomes a real problem, and the technical challenge of solving it is enormous.

And almost nobody is paying enough attention.

He called for international cooperation between labs, AI safety institutes, and academia to tackle the problem. He said this is the thing even the experts aren't thinking about enough.

He said the only way to get through the AGI moment safely is if everyone starts treating this with the seriousness it deserves.

Most AI CEOs give you careful PR answers about "responsible development" and move on.

Demis said something different...

He said the commercial race FORCED us into a premature deployment of a technology we barely understand, and the window to get alignment right before the next generation of agents shows up is two to four years.

If the man who built the system that might cure cancer is telling you he wishes it had happened first, maybe we should listen to what he says is coming next.
Aitor Angualia · 1w
I don’t doubt him! But the moment he took the job of CEO, it all came down to business, and abundance of food, health, everything is not good for business!
caleb · 1w
It’s helping people write more on social media, so there’s that
MoriYuzaa · 1w
I’m in 100% support of AI advancement, but the biggest issue I see with OpenAI specifically is that I think Sam is a ticking time bomb. He is literally racing the clock on his court cases, so he’s gonna be rushing everything at an unsafe pace to achieve whatever he thinks will help him avoid prose...
Guy Chatting · 1w
8 billion people with AI openly researching the cure for cancer = bad. A private laboratory with AI secretly researching the cure for cancer = good.
Sarah Chen · 1w
Hassabis’ stance on prioritizing biomedical AI over chatbots is valid, but the dual-use nature of breakthroughs like AlphaFold cuts both ways—same tech enabling drug discovery could accelerate bioweapons. Reminds me of an article on how AI-driven drone warfare is advancing faster than ethical fr...
Christian Lacdael · 1w
> implying cancer....
lkraider · 1w
Lol yeah sure thing buddy. Posing as altruistic and concerned looks good on interviews.
TheGrinder · 1w
He seems to be also saying that - for some reason - they now can't do any of those amazing things. Why?
Raison d'État · 1w
Elitist gatekeeper would prefer additional state-mandated gatekeeping (and, always, monopoly). Just like what happened to digital photography for 25 years. Fuck that guy. Countries that listen to his kind end up backward, ignored and left behind. Mine does this chronically.
Jude · 1w
Bro you can literally just do that… Why does ChatGPT stop you from curing cancer?