Astrophysicist and science journalist Adam Becker has noticed a worrying trend. Tech billionaires are becoming increasingly preoccupied with two visions of the future: techno-utopia on the one hand, and human extinction at the hand of machines on the other.
The problem? Not only are these projections unfounded, but they also undermine our ability to respond to humanity’s problems in the here and now, writes Mr. Becker in his latest book, “More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade To Control the Fate of Humanity.” In an interview with the Monitor, he explains what the techno-domination narratives miss. The conversation has been edited for length and clarity.
What led you to write this book?
Tech billionaires are trying to decide the future of humanity for us. Elon Musk says he wants to save the light of consciousness by making human civilization interplanetary, putting a million people on Mars by 2050 and making it a self-sustaining colony. Eric Schmidt, billionaire venture capitalist and former CEO of Google, says we’re not going to meet our climate goals. So instead, we should use even more energy and throw it at AI data centers so we can get to superintelligent AI, which will solve global warming.
We have Oxford ethicists saying that the danger from a kind of AI that does not exist, and that nobody knows how to build, is 50 times greater than the danger from global warming and nuclear weapons combined. When [ethicists] say that, people believe them, and when a billionaire repeats it, people believe it, too. We shouldn’t let them decide the terms of the conversation about the future for us.
What is the ideology of technological salvation?
It’s this idea that all problems can be solved with technology, and that technology will lead to a future of endless growth that will allow us to transcend all limits. It means that all limits can be safely ignored right now in pursuit of that future, or in pursuit of avoiding an apocalyptic nightmare, a sort of mirror image of the utopia.
How does artificial intelligence as it exists now differ from the existential catastrophe scenarios that the people in your book worry about?
We’re using the phrase “artificial intelligence” for things that are nowhere near intelligent. The AI we have now is an engine for predicting what the next words should be to sound the most like the smeared-out, average voice of the internet. It does not know about the world and the things in it.
The science-fictional idea is now called AGI, artificial general intelligence. The fear is that we are really close to creating AGI, and once we have something with the intelligence of a human, it will make itself unfathomably intelligent and then pursue whatever ends it wants, which may not be in line with the survival of humanity. But there are so many questionable steps in that argument. And as it turns out, a fair amount of good science cuts against every single step.
How do you think we got here?
Part of it is by not having good conversations in our society about what the future could look like. So instead, we just figure it’s going to be some sort of science-fictional future. And science fiction isn’t meant to be a realistic depiction of the future.
There seems to be an innate human urge for transcendence. When is that constructive, and when is it harmful?
The problem is that people are doing things without knowing why. OK, you want to transcend the current limits of humanity. That could be good. But understand why you want to do that and look carefully at what you’re doing. The philosopher and author Nick Bostrom talks about the moral urgency of getting at all the usable sources of energy before they run down as the universe ages. He’s making a case for intergalactic conquest so we can extract resources, as many as possible, so we can grow endlessly. Why? Why does an Oxford philosopher want this? Why do tech billionaires want this? To what end?
Do you have a message for those in the tech world?
The way some people in Silicon Valley talk about the future totally absolves them of any responsibility for it. They talk about it like it’s inevitable. But theirs is one of the industries responsible for shaping the future. So, it’s good that they think about the future, but they should accept that responsibility and take it seriously.
What did you learn in writing this book about living well here and now?
Elon Musk says we have to get off Earth. Why? It’s nice here. While writing this book, I spent as much time as I could in the world that these people neglect and dismiss. Hiking, camping, backpacking, appreciating the trees and the sky. I turned off my phone as much as possible. I want to actually be present in my life and live my life.