AI safety expert predicts a p(doom) of 99.999999%. What’s that mean? Well, it isn’t good


Could AI spell doom for humankind eventually? Depending on which expert you talk to, the chances vary considerably, but one researcher definitely has a gloomy (and doomy) opinion – one that Elon Musk doesn’t share.

Take Kid Bookie’s advice and ‘save yourself from AI’ (Image Credit: Pixabay)


Business Insider reported on revelations made at the recent Abundance Summit (held last month), which included a ‘great AI debate’ where Musk estimated the risk of AI ending humanity was “about 10% or 20% or something like that.”

Obviously that’s something akin to wild guesswork, but the general gist of the billionaire’s philosophy is that we should push ahead with AI development as the probable positive outcomes outweigh any negative scenario.

What Musk’s assessment doesn’t mention is that the negative scenario we’re running the risk of is the annihilation of all humanity. Which does rather tip the scales heavily against better chatbots, perhaps.

At any rate, Musk’s probability theorizing is definitely not shared by Roman Yampolskiy, director of the Cyber Security Laboratory at the University of Louisville, an expert in AI safety and author of books on the subject.

Yampolskiy expressed the opinion that Musk is being conservative, and that p(doom) or ‘probability of doom’ – the probability that an AI brings an end to humanity, or enslaves us, or realizes some similarly unthinkable scenario – is much higher than 20% or so.

So, what does the expert believe the p(doom) actually is? Well, you probably won’t be comforted to learn that Yampolskiy pins a figure of 99.999999% on that probability. That is, of course, saying it’s pretty much a certainty – it leaves odds of just one in 100 million that humanity escapes doom.

We’re all p(doomed)

We’re guessing here ourselves – like everyone when it comes to p(doom) – that Yampolskiy is seeking more to raise very serious concerns about AI development than to actually predict the certain doom of humanity. But those concerns must run pretty deep to air such an estimation.

Yampolskiy’s underlying philosophy is that because it’ll be impossible to control a sufficiently advanced AI once it exists, the best bet is to take action now and ensure we don’t create that kind of AI in the first place. We can certainly see where that view is coming from.

Predictions of AI spelling doom for us all generally range from a 5% chance to a 50% chance across tech bigwigs in Silicon Valley, the report tells us.

Even a 50-50 chance of an AI apocalypse isn’t great, let’s face it. If you were presented with a pill and told you have a 50% chance of becoming superhuman, or a 50% chance of instantly dying – would you take it? We think we’d pass, but there are others out there who probably wouldn’t.

