AI: Navigating the Future with Wisdom
Yuval Noah Harari, the historian and thinker who has often served as a modern-day oracle, warns us of the dangers inherent in wielding power we don’t fully understand.
Drawing on ancient myths like the fall of Phaethon and Goethe’s The Sorcerer’s Apprentice, Harari presents a stark reminder: history is littered with examples of what happens when we overestimate our control over the forces we unleash.
The Illusion of Control
AI isn’t just another IT tool. Unlike the inventions of the past, which remained firmly under human command, AI is designed to operate independently: to make decisions, to innovate in ways that even its creators cannot predict.
This is where the danger lies.
We are not just developing a new technology; we are creating a force that could outthink, outmaneuver, and outlast us. Imagine an apprentice in a workshop, given the power to enchant a broom to carry water.
It’s a clever trick, a time-saver. But when that broom refuses to stop, when it multiplies and floods the room, the apprentice’s cleverness turns into catastrophe.
Now scale that up to a global level, and replace that broom with AI. Harari’s point is clear: the powers we summon could easily slip beyond our control, with consequences we are not prepared to face.
AlphaGo’s “Move 37”
AI’s potential isn’t just in performing tasks faster or more efficiently than humans—it’s in the ability to think differently, to explore strategies and solutions that we might never consider. Take the now-famous example of AlphaGo’s “move 37.”
When the AI made this seemingly illogical move in a game of Go, it stunned the experts. What looked like a mistake turned out to be a stroke of genius, a strategy that no human had ever conceived in thousands of years of playing the game.
But here’s the unsettling part: even after AlphaGo’s victory, its creators couldn’t fully explain why the AI made that move. It was a glimpse into the alien nature of AI’s thinking—a kind of intelligence that operates on a level we can’t easily understand or predict.
This “black box” problem is more than a technical challenge; it’s a profound existential risk. If AI can make decisions that affect our lives—decisions that we don’t fully understand—then who’s really in control? What happens when these decisions move beyond games and into the realms of finance, security, or governance? The very foundation of democracy, which relies on transparency and accountability, could be at risk.
Our Collective Wisdom
AI isn’t an unstoppable force of nature—it’s a creation of humanity. The choices we make now will determine whether or not AI becomes a tool for progress. But to make the right choices, we need to cultivate a collective wisdom that goes beyond the excitement of technological innovation.
This isn’t just about building better AI; it’s about building better societies—societies that value transparency, accountability, and the ethical use of power. It’s about ensuring that our technological advancements are guided by the principles of justice, compassion, and a deep respect for the complexity of the world we live in.
The AI revolution is not just a technological shift. It challenges us to rethink what it means to be human, what kind of future we want to create.
In this moment of profound change, the most important question we can ask ourselves is not whether we can do this, but how we should.