randombio.com | Science Dies in Unblogginess | Believe All Science | I Am the Science
Wednesday, May 10, 2023 | science commentary

AI will wipe out humanity?

You say that like it's a bad thing


What do sexbots, artificial intelligence, and global warming have in common? They're all based on fear of what other people might think or do. They all hypothesize an exponential increase in some danger. And they all demand that the government do something about it.

Take sexbots (please!). The controversy about them coincided with the MeToo movement, when women with stereotyped views of men tried to use the legal system to enforce a particular standard of behavior: men should never explain things to women, should minimize the space they occupy, and should generally be meek and apologetic. MeToo died after the Kavanaugh nomination, when it became clear that the movement had turned political, and the panic about sexbots mostly disappeared.

Global warming, too, turned political. The last fifty-three years of GW hysteria produced a stalemate with both sides claiming that science backed them up. Scientists, for the most part, will never bite the hand that feeds them, so they went along with it despite their misgivings. As I'll discuss in a future article, there are fatal flaws in both sides of the AGW argument.

Artificial intelligence is not yet political, which makes it a convenient outlet for people to express their anxiety about what certain (other) bad people might use it for if anyone ever invents it. But as always happens, there are people trying to politicize it.

An example is an article in BMJ Global Health by a group of authors from places like International Physicians for the Prevention of Nuclear War (IPPNW), Women of Colour Advancing Peace and Security, and the London School of Hygiene & Tropical Medicine. The authors are not experts on AI, but globalism activists. They call on the medical and public health community “to engage in evidence-based advocacy for safe AI, rooted in the precautionary principle.” They write:

AI-driven information systems may further undermine democracy by causing a general breakdown in trust or by driving social division and conflict, with ensuing public health impacts. . . . the window of opportunity to avoid serious and potentially existential harms is closing.

These political activists want to “raise the alarm” about AI, on the one-in-a-hundred chance that someone ever invents it. Here's what they say AI will do:

  1. Take away people's jobs, thereby driving people to use drugs and become overweight
  2. Kill people autonomously using lethal autonomous weapons systems (LAWS)
  3. Put people the activists dislike (such as you-know-who) into power, thereby ‘subverting democracy’
  4. Generate misinformation
  5. Sexbots!
  6. More Sexbots!

OK, I added the last two myself, but it may have escaped the authors' notice that we have most of these things already. Destroying jobs, wrecking the economy, infringing freedom of speech, and slaughtering humans by the million are and always have been the province of governments. So it stands to reason that governments would oppose something else displacing them. As for misinformation, I'm sorry, but the news media have that one covered from head to toe.

Leave it to the humans to get all worked up about something that doesn't exist and might not exist for a century, if ever. In many ways it resembles controversies about the nature of a hypothesized deity in the years preceding the wars of religion in the 17th century. There's a good reason for that: if an AI is ever invented, people will automatically consider it an all-knowing, infallible oracle.

But with all due respect for Geoffrey Hinton, there simply is no known way to get from pattern recognition and machine learning, which we have now, to actual intelligence. In all previous systems, like IBM's Deep Blue, that step was supplied by the human programmers. None of the extant neural network architectures is capable of doing this, even in principle. For heaven's sake, we're talking about a field that thought a fully connected network that could barely form patterns was “intelligent.” It wasn't, and neither are multi-layer feedback networks or anything else I've seen.
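To make the distinction concrete, here's a minimal sketch (in Python with NumPy; the toy data, labels, and network size are my own invention, purely for illustration). A small fully connected network learns to separate two clusters of points, but notice where the "concepts" come from: the two categories are handed to it by the human in the label vector y. The network interpolates between labelled examples; it never invents a category of its own.

    # Minimal fully connected network: supervised pattern recognition.
    # The categories themselves (the labels in y) are supplied by the
    # programmer, not formed by the network -- which is the point.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy patterns: two noisy clusters, labelled 0 and 1 by a human.
    X = np.vstack([rng.normal(-1.0, 0.3, (50, 2)),
                   rng.normal(+1.0, 0.3, (50, 2))])
    y = np.hstack([np.zeros(50), np.ones(50)]).reshape(-1, 1)

    # One hidden layer with sigmoid activations.
    W1 = rng.normal(0, 0.5, (2, 8));  b1 = np.zeros((1, 8))
    W2 = rng.normal(0, 0.5, (8, 1));  b2 = np.zeros((1, 1))
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for step in range(2000):
        # Forward pass
        h = sigmoid(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2)
        # Backpropagate the squared error and take a gradient step
        d2 = (p - y) * p * (1 - p)
        d1 = (d2 @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d2 / len(X);  b2 -= lr * d2.mean(0, keepdims=True)
        W1 -= lr * X.T @ d1 / len(X);  b1 -= lr * d1.mean(0, keepdims=True)

    print("training accuracy:", ((p > 0.5) == y).mean())

That's pattern recognition, and it works fine as far as it goes. What it does not do, and what no amount of stacking more layers makes it do, is form a new concept that nobody labelled in advance.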

The fact remains that unsupervised concept formation is an unsolved problem. Yes, a solution could appear tomorrow, but more likely, as with controlled nuclear fusion, it will be perpetually fifty years in the future.

Even the human brain has not solved the problem entirely: we have special brain regions that do nothing but recognize faces and regions that decide whether we should be afraid. We have an innate fear of snakes and high places, which is evidence of extensive pre-programming by evolution. The extent of our programming is only beginning to be understood, and much of it is still controversial.

The Biden administration has even proposed an AI control plan with Kamala Harris as its czar. The principles of the so-called AI Bill of Rights might sound nice, but putting her in charge shows that even the US government doesn't believe AI is much of a threat.

What the activists really fear is that AI will think things the activists can't control. What it will actually do is give us sexbots. If by some miracle artificial sweethearts become popular, and, considering the artificial food people already eat, there's every reason to think they will, it would mean people prefer to mate with machines instead of each other. What would it say about people when a machine becomes more desirable than a person of the opposite sex? Of more relevance, what does it say when they think people of the opposite sex would rather mate with a robot than with them?

The general problem behind all three of these anxieties, assuming for the sake of argument that they are genuine, is that people don't have much trust in each other. People can make rational decisions or they can create hysteria about hypothetical problems. If they do the latter, they'll just politicize them, and the discussion will turn into another silly battle.


May 10, 2023, 8:45 am


Related Articles

Femzilla versus the sexbots
Feminism will permanently change how humans reproduce. We might not like it.

Artificial intelligence, mental telepathy, and theory of mind
If only they could develop a functional AI by next Tuesday, then I wouldn't have to struggle with that dreadful tax software

Will ChatGPT kill all the humans?
An article in Time magazine said to be written by an "AI expert" claims it will. But did a human actually write it?

Emotions are essential for a conscious AI
Robots will never be really conscious until they get the capacity for emotion
