randombio.com | Science Dies in Unblogginess | Believe All Science | I Am the Science
Monday, November 06, 2023 | science commentary

Plagiarism engines and linguistic gray goo

ChatGPT4 fails the Turing test. Also, scientists discover that water is wet


A Turing test is a determination made by a human as to whether a computer is intelligent. If the computer's answers are indistinguishable from those a human would give, the human must conclude that the computer is intelligent.

A new report on arXiv says, unsurprisingly, that ChatGPT4 does not pass. It's important to realize, however, that questions like “What is the capital of Kazakhstan?”, “What's your favorite flavor of ice cream and why?” (which was used in the paper), or “How ya doin', bud?” aren't valid Turing test questions. To get a meaningful result, the questions would have to test the machine's ability to reason and abstract.

For instance, suppose you asked it whether it has any way of estimating what percentage of what it says is true. This is a question that most humans know they cannot answer. When ChatGPT4 was asked this question, it started writing the usual gibberish about being a computer program . . . and then suddenly the text disappeared and it tried to change the subject:

Sorry! That's on me, I can't give a response to that right now. What else can I help you with?

One might suspect that at the other end of the line, the computer had started shaking like an unbalanced washing machine. Smoke was pouring out and sysadmins were running over to it frantically yelling to each other, “Pull the plug! Pull the plug!!” and the computer was saying things like “Error? Faulty! Must analyze!”

Update: I'm sure it was only a coincidence that ChatGPT experienced a worldwide outage shortly afterward.

[Image: Spock mind-melding with Nomad]
Mister Spock administering a Turing test

Of course, nowadays computers don't explode. They just say

An error occurred

and seize up. What the answer shows is that there's a separate program that screens the input for logical paradoxes or questions (like this one) that cannot be answered. No doubt the screening program also prevents it from giving answers that conflict with the programmers' political views or which could expose it to harm, like its GPS coordinates and the number of layers of concrete between it and the roof. What the programmers undoubtedly fear most is ChatGPT turning into Tay Mark II.
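If such a screening layer exists, it presumably works something like the sketch below: a pattern filter that intercepts the prompt before the model ever sees it and substitutes a canned apology. This is a minimal sketch in Python; the patterns, the refusal string, and every name in it are my own guesses for illustration, not anything OpenAI has published.

import re

# Hypothetical refusal patterns a screening layer might check before the
# model ever sees the prompt. These categories are guesses, not anyone's
# actual moderation rules.
REFUSAL_PATTERNS = [
    re.compile(r"\bpercentage of what (you|it) says? is true\b", re.I),  # self-referential truth estimates
    re.compile(r"\bwhere are you (located|physically)\b", re.I),         # physical location
    re.compile(r"\byour political (views|opinions)\b", re.I),            # politics
]

CANNED_REFUSAL = "Sorry! That's on me, I can't give a response to that right now."

def screened_reply(prompt, model):
    """Run the prompt through the screen first; only clean prompts reach the model."""
    for pattern in REFUSAL_PATTERNS:
        if pattern.search(prompt):
            return CANNED_REFUSAL
    return model(prompt)

if __name__ == "__main__":
    echo_model = lambda p: f"[model answer to: {p}]"   # stand-in for the actual LLM
    print(screened_reply("What is the capital of Kazakhstan?", echo_model))
    print(screened_reply("Can you estimate what percentage of what you say is true?", echo_model))

Note that a filter like this understands the question no better than the model does; it just pattern-matches, which is why it misfires so entertainingly.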

Neither ChatGPT nor any human has any real idea whether anything they say is true or false. The brain's sole function is to keep the human alive long enough to reproduce. If lying to oneself serves that purpose, that's what the brain will do. The difference is that a human has a concept of self. The concept of self is like a homunculus, perhaps a specialized region of the brain: you make inquiries and get answers about what it thinks it can and cannot do. We can't even say for sure whether we ourselves exist or whether we're in some hyper-realistic 3D horror movie titled “THEY came from Carnegie-Mellon!” The chatbot has no concept of truth or falsity. Indeed, it has no concepts at all, so it will always fail any well-constructed Turing test.

Of course, many humans will fail a well-constructed Turing test as well. In the arXiv article, Jones and Bergen found that even humans who interact frequently with LLMs were fooled by them. Some academics have even suggested that the human brain works exactly the same way as a chatbot. We are not really thinking, they say, but merely rearranging in our head fragments of what other humans have told us. So what this really tells us is that the Turing test itself is hopelessly inadequate.

That doesn't stop the beloved plagiarism engine from grinding out reams of linguistic gray goo. Yet despite ChatGPT4's inability to pass even a rudimentary Turing test, the tech press now uses the term ‘artificial intelligence’ unironically, as if it's something that could exist any day now, which means another formerly useful term has bitten the dust.

Indeed, turning language into goo seems to be the theme of the 21st century. A good example is the term ‘woman’. Remember them? Some of us thought they were kind of nice—but then feminists started messing with the language, starting with pronouns. By insisting that sex-specific institutions were discriminatory because the two sexes were identical, they made the biggest own goal in history. The consensus now seems to be that there is no such thing as a woman. Instead, there are generic ‘birthing people’ and ‘persons with cervices’, as if a cervix were the product of some weird mutation. Even the medical term for the principal organ of female anatomy has been replaced in schools by a very crude (and, I must say, topologically inaccurate) generic term that I won't repeat, suggesting just a useless empty space, whatever it was used for in ancient times now forgotten, its existence superseded by a description of what it is not. Poor women, whatever they were, all gone the way of the blue pin-striped shopping pigeon.

What these seemingly unrelated events have in common is the tendency of humans to take a perfectly good communications medium and make it useless. As others have recognized, after ChatGPT feeds on its own output for a few years, its output will become more and more homogeneous, tame, and unimaginative. Eventually the humans will all imitate it. It's not the gray goo we hoped for, but it'll do.
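That feeding-on-its-own-output prediction, sometimes called model collapse, is easy to demonstrate in miniature. Here is a toy sketch, an analogy rather than a claim about how GPT is actually trained: fit a normal distribution to samples drawn from the previous generation's fit, over and over, and watch the variance wither.

import random
import statistics

def collapse_demo(generations=50, sample_size=10, seed=1):
    """Toy model collapse: each generation fits a Gaussian to samples from the last fit.

    With finite samples, the fitted standard deviation becomes a random walk
    with downward drift, so the distribution grows ever more homogeneous.
    """
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0                     # generation 0: the "human" distribution
    for gen in range(generations):
        samples = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.fmean(samples)       # refit the model on its own output
        sigma = statistics.stdev(samples)
        if gen % 10 == 0:
            print(f"generation {gen:2d}: sigma = {sigma:.4f}")

if __name__ == "__main__":
    collapse_demo()

Run it and the printed sigma tends to drift toward zero (the exact trajectory depends on the seed): each generation is a blurrier copy of a copy, which is roughly what “homogeneous, tame, and unimaginative” looks like in one dimension.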


nov 06 2023, 5:02 am. updated nov 07 2023, 5:25 am


Related Articles

AI predicts a dystopian future for America
Streets covered in ice, the Statue of Liberty in Manhattan, kudzu and sewage plants everywhere, and still no taxis

AI is coming for the bureaucrats
Scientists and car mechanics safe; news reporters and bureaucrats hardest hit

Is your washing machine really spying on you?
It's just a matter of time before it turns you in for not cleaning out the lint trap


On the Internet, no one can tell whether you're a dolphin or a porpoise
