This is not the take on artificial intelligence (AI) that you’re going to find in the popular media or on corporate websites. Most everyone is thrilled about what AI can and might do someday soon. Self-driving cars, remotely controlled delivery vehicles, homes that clean and restock themselves automatically, medical technologies that can predict (and prevent) our every illness—these are the tip of the iceberg, and for the companies that will manufacture and control them, they can’t get here soon enough.
But there is a dark side. My guest this episode is James Barrat, futurist, filmmaker and author of Our Final Invention: Artificial Intelligence and the End of the Human Era.
With a title like that, you know the news he brings is not good.
Among other things, James and I talked about the “intelligence explosion”—the point at which a computer is able to improve and replicate itself better than human beings can. That’s not so far in the future, James says, but for some reason it’s seen by very few as a clear and present danger.
How far away are we from the intelligence explosion? Ten years? Fifty years? Three quarters of a century? As with all things tech, there’s the prospect of acceleration. And in the case of AI, that means we humans may not be ready. Are we looking at a handover or a takeover? In these times of rampant corporate competition and geopolitical intrigue, what hope can we have for agreeing on the ground rules for AI development and deployment?
As always, thanks for listening.