“There is no God,” Stephen Hawking writes in his final book, Brief Answers to the Big Questions, which was posthumously published on Tuesday. “No one directs the universe.”
This assertion that we’re on our own – made by the famed scientist who did more to further our understanding of black holes than anyone before or since – could be especially unnerving to some people when you consider another assertion Hawking makes in his book: Humanity could be doomed by either climate change or artificial intelligence.
In an excerpt published by the Times of London (subscription required), Hawking, who died in March at the age of 76 from a degenerative neurological disease, “warns us that artificial intelligence is likely to outsmart us, that the wealthy are bound to develop into a superhuman species, and that the planet is hurtling toward total inhabitability,” according to Vox.
Hawking’s biggest warning is about the rise of artificial intelligence: It will either be the best thing that’s ever happened to us, or it will be the worst thing. If we’re not careful, it very well may be the last thing.
Artificial intelligence holds great opportunity for humanity, encompassing everything from Google’s algorithms to self-driving cars to facial recognition software. The AI we have today, however, is still in its primitive stages. Experts worry about what will happen when that intelligence outpaces us. Or, as Hawking puts it, “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
There is no doubt at least one famous person will take heed of Hawking’s warnings: Elon Musk, who has issued his own dire predictions about artificial intelligence. Musk, who wants to help humanity travel to Mars, has cited rogue A.I. as “one reason we needed to colonize Mars—so that we’ll have a bolt-hole if A.I. goes rogue and turns on humanity.”
Here’s more from Vox:
Compared to robots, we humans are pretty clunky. Limited by the slow pace of evolution, it takes us generations to iterate. Robots, on the other hand, can improve upon their own design a lot faster, and soon, they’ll probably be able to do so without our help. Hawking says this will create an “intelligence explosion” in which machines could exceed our intelligence “by more than ours exceeds that of snails.”
A lot of people think that the threat of AI centers on it becoming malevolent rather than benevolent. Hawking disabuses us of this concern, saying that the “real risk with AI isn’t malice, but competence.” Basically, AI will be very good at accomplishing its goals; if humans get in the way, we could be in trouble.
“You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants,” Hawking writes.
And while he warns about the grave threats posed to humans by robots and a potential climate apocalypse, Vox notes that Hawking remained optimistic about our chances to adapt and survive, writing that “our ingenious race will have found a way to slip the surly bonds of Earth and will therefore survive the disaster.”