Elon Musk, serial entrepreneur and innovator, can’t be accused of thinking small. His vision has been instrumental in rethinking the ways we travel to space, how we travel down the coast, and even how we get around town. So when he says Artificial Intelligence is probably “the greatest existential threat” humanity faces and suggests that creating it is tantamount to “summoning the demon,” it’s difficult to dismiss his concerns as coming from small-mindedness. Should we be worried? Does he know what he’s talking about?

The answer, at any rate, is that he doesn’t not know what he’s talking about: Musk was an early investor in one of the earliest and best-known AI development firms, DeepMind. If the company sounds familiar, it’s because DeepMind is the basket that Google put all its AI eggs in, having acquired it in 2014. Unlike Google, Musk’s interest in DeepMind was not born of a desire to capitalize on it. Rather, he invested so he could “keep an eye on what’s going on with Artificial Intelligence.” It’s not that he believes the people behind its development have bad intentions. It’s just that they might accidentally create something evil. Say, for example, “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”

Does Musk seem a little hysterical? Well, maybe. But maybe, in this situation, hysterical is a reasonable thing to be. Consider that Shane Legg, a machine learning researcher and one of the founders of DeepMind, doesn’t exactly disagree. “I think human extinction will probably occur,” he’s said, “and technology will likely play a part.” If Musk, practically a professional tech innovator and forecaster, and Legg, one of the founders of a top AI company, are basically agreeing that artificial intelligence poses an existential threat to human survival, why aren’t we putting a stop to this whole AI thing immediately?

Well, Musk and Legg aren’t the only voices weighing in. There’s Ray Kurzweil, director of engineering at Google and a famous technology prognosticator in his own right. Bill Gates has called him “the best person I know at predicting the future of artificial intelligence.” What does he think? He thinks that computers will be better than us at everything by 2029, and that we’re no more than 30 years from an event he calls the “singularity,” a point in time when artificial intelligence and human intelligence will merge to create a new hybrid super-intelligence.

One might find that a concerning set of predictions, but Kurzweil certainly doesn’t; he famously takes 90 pills a day, hoping to keep himself alive long enough to be part of it all. When asked by a Vanity Fair reporter what happens if AI turns nasty, he seemed unfazed. His solution? Take care of it with “an AI on your side that’s even smarter.” The conversation apparently did not extend to what happens if that AI stops playing nice.

Then there’s Steve Wozniak, co-founder of Apple, Inc., who previously went on the record to say that the future will be “scary and very bad for people.” More specifically, it seemed unclear to him what role humans will have in a future run by intelligent computers. “Will we be the gods? The family pets?” he asked. “Or will we be ants that get stepped on?”

But recently, Wozniak has had a change of heart. He no longer fears our future artificial overlords. They’ll be so smart, he thinks, that they’ll know better than us anyway: “They’ll be so smart by then that they’ll know how to keep nature, and humans are part of nature. They’re going to help us.” So, what will it be? Are we heading toward a future of super-intelligent technology that crushes us like ants? Or a future where our intelligences meld? Or will we be somewhere in the middle, treated as a loved but dim family dog?

Perhaps, in the end, we humans won’t have a say. But perhaps we might, if we begin to consider how to develop artificial morality in tandem with artificial intelligence. Researchers at MIT, for instance, have created the Moral Machine, an attempt to crowdsource morality for self-driving cars. People who visit the website are presented with moral dilemmas a driver might face and asked to judge the best course of action. The researchers then feed this aggregated data to an AI and ask it to predict how humans would want it to react in similar scenarios.
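The underlying recipe is simple enough to sketch: collect many human judgments per scenario, reduce them to a consensus, and generalize that consensus to scenarios nobody has voted on. The toy Python below is only an illustration of that idea; the scenario features, votes, and nearest-neighbour rule are invented for this sketch and are not the Moral Machine’s actual data or model.

```python
from collections import Counter, defaultdict

# Hypothetical crowdsourced judgments: (scenario_features, chosen_action).
# Features are deliberately crude stand-ins: (pedestrians at risk, passengers at risk).
votes = [
    ((3, 1), "swerve"),   # most voters spare the three pedestrians
    ((3, 1), "swerve"),
    ((3, 1), "stay"),
    ((1, 4), "stay"),     # most voters spare the four passengers
    ((1, 4), "stay"),
    ((1, 4), "swerve"),
]

# Step 1: aggregate the crowd's votes into a majority judgment per scenario.
by_scenario = defaultdict(Counter)
for scenario, action in votes:
    by_scenario[scenario][action] += 1
consensus = {s: c.most_common(1)[0][0] for s, c in by_scenario.items()}

# Step 2: "predict" the preferred action for an unseen scenario by borrowing
# the consensus of the most similar known scenario (nearest neighbour).
def predict(scenario):
    nearest = min(consensus, key=lambda s: sum(abs(a - b) for a, b in zip(s, scenario)))
    return consensus[nearest]

print(predict((2, 1)))  # closest to (3, 1) -> "swerve"
print(predict((1, 5)))  # closest to (1, 4) -> "stay"
```

A real system would learn from millions of responses and far richer scenario descriptions, but the shape of the problem is the same: turn crowd opinion into a policy a machine can apply.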

A possible flaw here is that your average internet user isn’t an expert on ethics. If the internet has taught us anything about human behavior, it’s that crowds of anonymous people might not be the place to look for moral guidance. With that in mind, maybe crowdsourcing isn’t the solution. Nevertheless, at least the moral dimension of artificial intelligence is becoming part of the conversation.

Germany recently published the first comprehensive set of ethical guidelines for self-driving car programming. DeepMind, the AI company that first stoked Musk’s fears, has put in place a board to “explore the key ethical challenges facing the field of AI.” We very well might not remain the masters of the intelligence we create. But if we play our cards just right, maybe the artificial intelligence we help launch can reflect the best that humanity has to offer, made even better.

Or maybe it’ll be a Terminator.

