Top AI researcher says AI will end humanity and we should stop developing it now — but don’t worry, Elon Musk … – TechRadar


Here’s something cheery to consider the next time you use an AI tool. Most people involved in artificial intelligence think it could end humanity. That’s the bad news. The good news is that the odds of it happening vary wildly depending on who you listen to.

p(doom) is the “probability of doom” or the chances that AI takes over the planet or does something to destroy us, such as create a biological weapon or start a nuclear war. At the cheeriest end of the p(doom) scale, Yann LeCun, one of the “three godfathers of AI”, who currently works at Meta, places the chances at <0.01%, or less likely than an asteroid wiping us out.

Sadly, no one else is even close to being so optimistic. Geoff Hinton, another of the three godfathers of AI, says there’s a 10% chance AI will wipe us out in the next 20 years, and Yoshua Bengio, the third godfather, raises the figure to 20%.

99.999999% chance

At the most pessimistic end of the scale is Roman Yampolskiy, an AI safety scientist and director of the Cyber Security Laboratory at the University of Louisville. He believes it’s pretty much guaranteed to happen. He places the odds of AI wiping out humanity at 99.999999%.

Elon Musk, speaking in a “Great AI Debate” seminar at the four-day Abundance Summit earlier this month, said, “I think there’s some chance that it will end humanity. I probably agree with Geoff Hinton that it’s about 10% or 20% or something like that,” before adding, “I think that the probable positive scenario outweighs the negative scenario.”

In response, Yampolskiy told Business Insider he thought Musk was “a bit too conservative” in his guesstimate and that we should abandon development of the technology now because it would be near impossible to control AI once it becomes more advanced.

“Not sure why he thinks it is a good idea to pursue this technology anyway,” Yampolskiy said. “If he [Musk] is concerned about competitors getting there first, it doesn’t matter, as uncontrolled superintelligence is equally bad no matter who makes it come into existence.”

At the Summit, Musk offered his own solution for preventing AI from wiping out humanity. “Don’t force it to lie, even if the truth is unpleasant,” Musk said. “It’s very important. Don’t make the AI lie.”

If you’re wondering where other AI researchers and forecasters are currently placed on the p(doom) scale, you can check out the list here.
