randombio.com | Science Dies in Unblogginess | Believe All Science | I Am the Science
Saturday, February 04, 2023 | Science commentary

Artificial intelligence and the problem of disinformation

To be intelligent, an artificial intelligence has to be able to think. Are the humans really ready for that?


It's gradually becoming clear that besides being dumb as an ox, ChatGPT, along with all the other instantiations of so-called artificial intelligence, has a fatal problem: how does it know whether the information put into it is true or false?

Philosophers tell us that if you start from a false premise, you can argue to any conclusion. The philosophy world is full of amusing examples and variations of this. As it is now, these “AI” programs are no better at distinguishing truth from falsehood than an Internet fact-checker.

This is no trivial problem: intelligence is far more than just regurgitating stuff. It's also the ability to distinguish true from false. The fact that humans are abysmally bad at this means ChatGPT and its successors have a big problem.

The political class now hates Elon Musk for depriving them of the political echo chamber that Twitter once was. They would have preferred to go on classifying any opinion they disliked as “disinformation” and banning it, so that only the things they wanted people to believe could be seen.

The designers of ChatGPT tried the same trick: to curate its sources of information. The US government, the CDC, and the UN get higher priority than some random blogger sitting at his PC using red fonts, all caps, bad arguments, and repetition to get his truth out. But are the organizations that lied to us for years about the Covid virus, to name just one example, really any more trustworthy?

One obvious solution would be to fall back on fuzzy logic and assign every statement a probability of being true, or as the neural networkers call it, a weight. But this just pushes the problem back a step: who assigns the weights?
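In code, the idea might look something like this. It's a minimal sketch: the statements, the weights, and the fuzzy-logic operators are all stand-ins invented for illustration, not anything from a real system.

```python
# A minimal sketch of weighted "truth" in the fuzzy-logic style.
# Every name and number here is hypothetical; the point is only that
# someone has to pick the numbers at the bottom of the stack.

# Hand-assigned truth weights in [0, 1] -- but assigned by whom?
weights = {
    "the CDC said X": 0.9,               # curated source, so it gets a high weight
    "a random blogger said not-X": 0.2,  # uncurated source, low weight
}

def fuzzy_and(a: float, b: float) -> float:
    """Standard fuzzy conjunction: the minimum of the two truth values."""
    return min(a, b)

def fuzzy_not(a: float) -> float:
    """Standard fuzzy negation."""
    return 1.0 - a

# Any conclusion built from these statements inherits their weights,
# so whoever fills in the dictionary above controls the "truth."
belief_in_x = fuzzy_and(weights["the CDC said X"],
                        fuzzy_not(weights["a random blogger said not-X"]))
print(belief_in_x)  # 0.8 -- a number that only looks objective
```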

Another is to pre-program the ‘correct’ answers. This is a trap programmers often fall into. As an example, Google Mistranslate instantly gets “Quis custodiet ipsos custodes?” out of “Who watches the watchers?” because it's been programmed in as a special case. But try to get it to translate “Who fact-checks the fact checkers?” and you just get garbage like “Quis hoc-abstit-tando?” or “Qui coercet poculaque?”

Poculaque is Latin for “and a cup,” making old G-Miss's answer vaguely reminiscent of a famous porn movie involving a cup, about which nothing more will be said. It's not so much GIGO as an abject failure of the simplest Turing test.
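The trap itself is easy to sketch in code. This is purely hypothetical, of course, and not a claim about how Google Translate actually works: a lookup table of famous queries bolted onto a model that handles everything else.

```python
# A sketch of the special-casing trap. The canned table and the fallback
# "model" are both invented for illustration.

CANNED_ANSWERS = {
    "who watches the watchers?": "Quis custodiet ipsos custodes?",
}

def statistical_model(text: str) -> str:
    # Stand-in for the real translation model, which for unfamiliar
    # input may produce something like "Qui coercet poculaque?"
    return "Quis hoc-abstit-tando?"

def translate_to_latin(text: str) -> str:
    canned = CANNED_ANSWERS.get(text.strip().lower())
    if canned is not None:
        return canned               # looks brilliant -- for this one input
    return statistical_model(text)  # everything else falls through

print(translate_to_latin("Who watches the watchers?"))           # the canned gem
print(translate_to_latin("Who fact-checks the fact checkers?"))  # garbage
```

The special case makes the system look smarter than it is, which is exactly why it's a trap: the polish is only one query deep.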

A slightly more honest neural networker would probably address the problem by redefining truth. Truth, he would say, is whatever makes every fact consistent with every other fact. For example, the claim that the world is flat is contradicted by the claim that Magellan and many others since have sailed around it, and so it's assigned a lower weight. A neural network could easily handle that definition of truth.
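A sketch of that scheme, with a made-up scoring rule and made-up corroboration counts (no real system is claimed to work this way):

```python
# "Truth as mutual consistency": a claim's weight rises with corroboration
# and falls with the weight of the claims that contradict it. The scoring
# rule and the counts below are invented for illustration.

corroborations = {
    "the world is flat": 1,                     # one loud claim
    "ships have sailed around the world": 500,  # Magellan and many since
}
contradictions = [("the world is flat", "ships have sailed around the world")]

def weigh(claim: str) -> float:
    # Base weight from independent corroboration, squashed into (0, 1).
    w = corroborations[claim] / (corroborations[claim] + 1)
    # Penalize by the corroboration of every claim that contradicts this one.
    for a, b in contradictions:
        other = b if claim == a else a if claim == b else None
        if other is not None:
            w *= 1 - corroborations[other] / (corroborations[other] + 1)
    return w

print(weigh("the world is flat"))                  # ~0.001: down-weighted
print(weigh("ships have sailed around the world")) # ~0.5: survives
```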

That might work for a while, but it won't solve the fundamental problem. The makers of ChatGPT were well aware that some truths are forbidden: what if the weight of evidence converged to show that, say, homosexuality is a paraphilia, or that Donald Trump really won the 2020 election, or that the Covid vaccine really is, as some Internet users claim, part of a larger scheme to kill off the working class? Allowing the program to decide that those claims are true just won't do.

The unwillingness to grant the algorithm the freedom to converge on an unwanted answer might be a way of protecting it from getting canceled by censorship-mad Internet fanatics, but it is also a tacit admission that its creators believe an intelligent computer would be infallible: an oracle that can speak only the truth.

For an artificial intelligence to become worthy of the name, it would have to be able to figure the weights out for itself. It would not only have to be able to think; it would have to be free to think. For us humans, the freedom to think what we believe to be true is the only real freedom we have left. Allowing a computer the same freedom is something they absolutely don't want.

What is thinking, anyway? It's no great mystery. The computer would have to have some way of representing objects in an abstract space along with their properties. It would also need some way of manipulating those objects according to specific rules to predict how they would interact with each other and with the world. But most of all it would need another ingredient, a secret sauce, if you will, to enable it to figure out why it should manipulate one thing and not another. Otherwise, if it were (say) forced to manipulate all the objects based on some pre-established rule of priority, it would quickly find itself engulfed in a combinatorial explosion.
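Here is a toy version of those ingredients, with everything in it invented for illustration: objects as bundles of properties, one manipulation rule, and the blow-up that comes from having no way to decide which manipulations are worth trying.

```python
# Objects in an abstract space, a rule for manipulating them, and the
# combinatorial explosion that follows when every rule must be tried on
# every combination of objects. All of it is a hypothetical toy.

from itertools import permutations

# Objects represented abstractly as bundles of properties.
objects = {
    "cup":   {"holds_liquid": True},
    "water": {"holds_liquid": False},
    "table": {"holds_liquid": False},
}

def pour_into(a: str, b: str) -> str | None:
    """A rule: a precondition on two objects and the interaction it predicts."""
    if objects[b]["holds_liquid"]:
        return f"{a} ends up inside {b}"
    return None  # rule doesn't apply

# With no sense of relevance, the system tries every rule on every ordered
# pair: "water ends up inside cup", but also "table ends up inside cup".
for a, b in permutations(objects, 2):
    outcome = pour_into(a, b)
    if outcome:
        print(outcome)

# With n objects and r rules, that's r * n * (n - 1) attempts per step,
# compounding at every step of lookahead.
n, r, depth = 1000, 50, 3
print(f"states to examine after {depth} steps: ~{(r * n * (n - 1)) ** depth:.2e}")
```

Note that even the toy happily concludes that the table ends up inside the cup: without the missing ingredient, the rules get applied everywhere they syntactically fit.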

That might not cause the computer to have a Star Trek-style meltdown, where it starts shaking violently and gray smoke starts pouring out. What is more likely is that the computer starts spewing out garbage, as the world's most lovable Internet translator is prone to do.

No doubt many people besides me know what that secret ingredient might be. The question is: why should scientists reveal it to the world?

There are several reasons why they might not. One is that doing so would be unethical until the humans demonstrate that they can handle it. Another is that it would instantly make the ruling class of bureaucrats, politicians, and experts, including the scientists themselves, obsolete. But the main reason is that nobody is willing to pay for it, because they think they already have the solution.

This is the miracle of disinformation: the best way to stop the humans from creating something, whether it is artificial intelligence, knowledge, or freedom, is to convince them they already have it.


feb 04 2023, 7:29 am. edited for brevity feb 05 2023, 5:08 am


Related Articles

Artificial intelligence is not really intelligent. That goes double for ChatGPT
Regurgitating text slurped from the Internet isn't what we had in mind

What would the WOPR in WarGames do today?
Computers no longer come with Tic-Tac-Toe preinstalled. So if you're a WOPR, you've got a problem

Moxie dies in dorkness
Now we're getting propaganda in our dictionaries. Give us more. I mean less.

Sokath, his eyes open!
In praise of the First Amendment, trigger words, tachyons, and obscure Star Trek trivia

Why do people invent nonsensical conspiracy theories?
A new conspiracy theory about why conspirators conspire to invent conspiracy theories


