Sometimes the most important warnings are not the loudest ones. They are the ones spoken by people who helped build the thing they’re warning us about.
This reflection comes from a public lecture I listened to: “AI and Our Future”, delivered by Nobel Laureate Professor Geoffrey Hinton in Hobart, Australia, on Wednesday, January 7, 2026 (1:00 pm AEDT). In the talk, he explained how modern AI works, why it can become difficult to control, and why the decisions we make now may matter more than anything we try to do later.
His central point, as I understood it, is this:
If we don’t learn how to manage AI while it’s still developing, we may struggle to manage it once it’s fully developed.
The “teen years” problem
AI is already powerful, but it’s also still forming, and a rapid growth phase is exactly when patterns set.
Hinton described how large language models don’t work like traditional software, where every step is written, inspected, and understood line by line. Instead, they learn from enormous amounts of data by adjusting countless internal “connection strengths.” The result is effective but not fully transparent: it can be difficult to explain why a model does what it does, and that matters once systems are widely relied on.
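To make that concrete, here is a deliberately tiny sketch of the idea (my own illustration, not anything shown in the talk): a single artificial neuron learns the logical OR function purely by nudging its connection strengths to shrink its error. Nothing in the code spells out the rule; the behavior emerges from the learned weights, and no individual weight has a readable meaning.

```python
import numpy as np

# Toy illustration only (nothing like production-scale LLM training):
# one artificial neuron learns OR by repeatedly nudging its
# "connection strengths" rather than by following hand-written rules.

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)  # OR targets

w = np.zeros(2)  # connection strengths, learned from data
b = 0.0          # bias, also learned

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    pred = sigmoid(X @ w + b)        # forward pass: arithmetic over weights
    err = pred - y                   # how wrong is each prediction?
    w -= 0.5 * (X.T @ err) / len(X)  # nudge each weight to reduce error
    b -= 0.5 * err.mean()

print(np.round(sigmoid(X @ w + b), 2))  # approaches [0, 1, 1, 1]
```

Scale that from three learned numbers to hundreds of billions and the transparency problem becomes clear: the rules live nowhere in particular.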
In other words: we aren’t just building tools. We’re shaping behavior.
That’s why I think the “teen years” image fits. Adolescence is when capability grows faster than wisdom. It’s when strength and speed arrive before maturity. And if you wait until adulthood to teach self-control, you are already late.
Why later may be harder
One of the most sobering ideas in the talk is that advanced AI may eventually outpace humans, surpassing us not only in knowledge but also in coordination.
Humans share knowledge slowly, through conversation, teaching, experience, and time.
Digital systems can share learning far faster. Copies of the same system can learn in parallel, combine what they learn, and improve quickly. That creates a future where capability can scale faster than any single person or institution can match.
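Here, again as my own illustration rather than anything from the talk, is a toy sketch of that sharing mechanism, assuming simple weight averaging of the kind used in federated learning: four copies of one small model train on different data in parallel, then merge what they each learned by averaging their weights.

```python
import numpy as np

# Hypothetical sketch: parallel copies of a linear model learn on
# separate data shards, then pool their learning by averaging weights.
# Humans have no comparably direct way to merge experience.

rng = np.random.default_rng(1)
true_w = np.array([2.0, -3.0])  # the pattern hidden in the data

def local_update(w, X, y, lr=0.1, steps=100):
    """One copy improves its weights on its own shard of data."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(X)
        w = w - lr * grad
    return w

w = np.zeros(2)  # shared starting weights
for _ in range(5):  # five rounds of learn-in-parallel, then merge
    copies = []
    for _ in range(4):  # four copies, each seeing different data
        X = rng.normal(size=(50, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=50)
        copies.append(local_update(w.copy(), X, y))
    w = np.mean(copies, axis=0)  # merge everything the copies learned

print(np.round(w, 2))  # close to [2.0, -3.0]
```

Each copy only ever sees a quarter of the experience, yet after merging, every copy carries all of it. That is the coordination advantage.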
So the hard question becomes:
How do we keep powerful systems aligned with human wellbeing as they become more capable?
Not panic. Formation.
I’m not sharing this to create fear. I’m sharing it because it’s a rare moment when a builder of the modern AI world is saying: pay attention to the early years.
In the talk, Hinton suggested we should invest far more effort in safety and control research, not just in making AI smarter, and that global cooperation should focus specifically on preventing the loss of human control. On that goal, incentives can align across borders: no one benefits from systems that turn on their creators.
Whether you work in business, education, art, research, or simply live as a person trying to navigate a fast-changing world, this isn’t a “tech topic.” It’s a stewardship topic.
If we don’t shape the values, guardrails, and accountability structures while AI is still forming, we may one day find ourselves trying to negotiate with something we can no longer meaningfully guide.
This is the season for wisdom.
Not because we can stop the future, but because we still have a say in how it grows.
Source: “AI and Our Future: A Public Talk” by Nobel Laureate Professor Geoffrey Hinton, City of Hobart, Hobart, Australia, Wednesday, January 7, 2026.