randombio.com | Science Dies in Unblogginess | Believe All Science | I Am the Science
Wednesday, May 31, 2023 | science commentary

What's the real reason behind all this scaremongering about AI?

AI will pose an enormous challenge to the humans. But not the one they want you to think it is


More catnip for the sensationalist press: dozens of AI researchers, including Sam Altman [the CEO of OpenAI], Geoffrey Hinton, Max Tegmark, Bruce Schneier, Sam Harris, and even David Chalmers have signed a statement saying AI could lead to the extinction of humanity. It's a who's who of celebrity academics, some of whom are reasonably smart and all of whom know full well that AI doesn't exist. It's as much a statement about the state of academia as about AI.

Here's the statement in its entirety:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

What risk are they talking about? The hyperbole about ‘global extinction’ may be a clue. Notice that they didn't include the other popular ‘global extinction’ threats like global warming and inequality. These threats are mostly political and the risk of extinction from them is purely hypothetical. The signers are worried about something, but either they haven't thought about it or they're afraid to say what it is.

[Image: Old and new style thermostats. Thermostats have evolved from the crude mechanical Honeywells to modern-day smart thermostats.]

At the moment, AI has little more self-awareness than a thermostat. If the humans ever get around to inventing AI, it will pose challenges to them, but not the ones everybody thinks. Indeed, the biggest challenge we face at the moment is the desire to live risk-free. That is one reason we are drowning in bureaucracy, which was created to reduce the risk of somebody making a bad decision. But making life risk-free is itself a risk. It can be argued that there's a huge risk in treating humans as if they're babies that need to be enclosed in a little playpen: a cage that prevents them from exploring the universe for fear of hurting themselves.

A commentator named Edward Ring proposed an inspiring vision in which we return to the rule of law, affordable energy, meritocracy, and the enlightened values that once made America an inspiration to other countries. America has become a country where a career can be destroyed by a casual remark, where banks tell companies what values they must pretend to believe in, and where activists feel entitled to demand millions per person in “reparations” from a state that not only never mistreated them but violated the rule of law to give them preferential treatment. Few countries would emulate a culture war that has left a smoldering ruin where a vibrant and inspiring culture once stood. Distaste for what America is becoming creates doubts about the viability of representative government and gives authoritarianism a stronger appeal.

What I think Ring wants, at its base, is for people to accept risk and meritocracy as the price for general well being. It should not be difficult: unwillingness to accept risk is not an innate, unconditioned trait. Risk aversion is a rational choice in a society where risk-taking is not permitted and success is penalized. And the ultimate expression of risk aversion is found in the above statement about “Safe AI.”

The challenge that AI will pose to humans is not that AI will go around shooting people or turning humans into batteries as our movies predict, or even that it will invent false facts to confuse them. The challenge will be that AI might prove more intellectually capable and more worthy of respect than the humans. That challenge would demand a response from the humans. If we assume for the sake of argument that the fear of AI expressed in that statement is sincere and not merely another covert excuse for imposing world government, then the solution is not to cripple AI. The solution is to change man.

There are only two ways to do that: by changing their DNA or by ‘enhancing’ their capabilities with human-machine technology. It is generally accepted by neurologists that while human intelligence can be reduced by disease or injury, it cannot be improved by teaching or by individual effort. Humans can, of course, learn principles, guidelines, and techniques and use them as a sort of intellectual scaffolding to help them make fewer dumb mistakes. But ideas and principles cannot make humans smarter.

Helping the humans to avoid making dumb mistakes, perhaps by educating them better, would be a great start. But idiots have the advantage of numbers. Sooner or later, idiotic things would happen again unless we somehow improved humans. The alternative would be to allow them to be replaced with artificially intelligent machines.

That is the challenge AI poses to us, and in my opinion, that is what the signers of the Safe AI document are really afraid of. They want to avoid making that choice. AI won't eradicate humans; they'll keep humans around for one reason or another. But AI will force mankind to think about an issue that no one, left or right, wants to think about. It will force our academic thinkers to choose between two unthinkable options. They don't want to do that, and more importantly don't want to be seen as doing that. They'd rather pretend the issue is something else—extinction—even if they get ridiculed for it.

Update, June 02, 2023: A person named Mo Gawdat, whom the press describe as a former chief business officer at Google X, reportedly said that AI poses a bigger threat than climate change. Now that's something we can all agree with.


may 31 2023, 7:23 am. updated jun 02 2023, 4:45 am


Related Articles

AI will wipe out humanity?
You say that like it's a bad thing

Artificial intelligence, mental telepathy, and theory of mind
If only they could develop a functional AI by next Tuesday, then I wouldn't have to struggle with that dreadful tax software

Will ChatGPT kill all the humans?
An article in Time magazine said to be written by an "AI expert" claims it will. But did a human actually write it?

Emotions are essential for a conscious AI
Robots will never be really conscious until they get the capacity for emotion


