randombio.com | commentary | Friday, April 28, 2023

ChatGPT is not intelligent

Machine learning doesn't mean the machine is knowledgeable about anything, and it's certainly not God
A book author whose name has been erased from my memory banks once defined ‘learning’ as any event that changes how subsequent events occur. For instance, water flowing down a hill creates a channel that changes the direction of the water that follows. In this sense, the dirt can be said to have “learned” the path of the previous stream of water. Computers have always had this ability, and ChatGPT is capable of it. But learning, especially in such a broad sense, is totally different from intelligence.
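In that broad sense, learning is nothing more than stored state that biases what happens next. Here is a minimal Python sketch of the water-and-channel analogy; it is my own illustration of the definition, not anything to do with how ChatGPT works internally:

    import random

    # Broad-sense "learning": past events change how subsequent events occur.
    # Two channels start equally shallow; each flow of water deepens the
    # channel it takes, which biases the next flow toward the same path.
    depth = {"left": 1.0, "right": 1.0}

    def flow():
        total = depth["left"] + depth["right"]
        path = "left" if random.random() < depth["left"] / total else "right"
        depth[path] += 0.5   # erosion: the dirt "learns" the path of the water
        return path

    for _ in range(20):
        flow()

    print(depth)   # one channel usually ends up much deeper than the other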
Anyone working in the field today knows that ChatGPT is little more than a talking search engine. Google and Microsoft treat it that way, and—though I might burn in hell for eternity for saying so—they are correct. AI is not mostly hype, but entirely hype, and the vendors of hype recognize it as such.
It's true that by creating a sequence of most probable words, ChatGPT can construct grammatical sentences that, in some ways, resemble human speech. That may be an important discovery, though not an unexpected one. Yet people are angsting about ChatGPT as if the claims that it is intelligent and could wipe out humanity were valid. As I discussed before, what people are calling “AI” not only lacks the architecture to do so (by which I mean that no amount of programming could implement it effectively), it is not even programmed to be intelligent. It simply lacks the ability to form concepts. By that I mean it cannot produce an internal model of the world or of real objects in the world, let alone of abstract objects like freedom, existence, and thinking.
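To see what "a sequence of most probable words" means in practice, here is a toy next-word predictor in Python. It merely counts which word follows which in a tiny made-up corpus; ChatGPT uses an enormously larger transformer network, so treat this as a sketch of the principle, not of the implementation:

    from collections import Counter, defaultdict

    # Toy "most probable next word" generator. It counts word successions in a
    # tiny corpus and then emits the most frequent successor at each step.
    # There is no model of water, hills, or anything else: only word statistics.
    corpus = "the water flows down the hill and the water carves a channel".split()

    successors = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        successors[a][b] += 1

    def generate(word, length=6):
        out = [word]
        for _ in range(length):
            if word not in successors:
                break
            word = successors[word].most_common(1)[0][0]
            out.append(word)
        return " ".join(out)

    print(generate("the"))   # grammatical-looking output with no concepts behind it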
Some commenters say that AI can never become a god because it can't experience natural selection. One wrote:
It does not care whether it's switched on or off unless it has been programmed to care. And even then, it still doesn't actually care. It just has certain 'if/then' protocols it must follow as the danger approaches.
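The commenter's point can be made concrete. A hypothetical 'self-preservation' routine is just a set of programmed rules; nothing in it corresponds to caring:

    # Hypothetical illustration of the commenter's point: "self-preservation"
    # as nothing but if/then rules. The program does not care about being
    # switched off; it only branches where its thresholds tell it to branch.
    def respond_to_threat(battery_level: float, shutdown_requested: bool) -> str:
        if shutdown_requested:
            return "save state and power down"   # follows the rule; no preference
        if battery_level < 0.1:
            return "request recharge"            # another rule, not a desire
        return "continue task"

    print(respond_to_threat(0.05, False))
    print(respond_to_threat(0.80, True))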
The basic assumption that many authors accept is that machine learning equals AI or, failing that, that it will become intelligent once it is improved, as it surely will be. That assumption is faulty. It's hardly different from the popular idea that simply putting vast amounts of information in a computer will automatically make it intelligent. If it did, Encyclopaedia Britannica and the Library of Congress would have fought a Mothra-vs-Godzilla-style battle for world dominance years ago.
When I worked on neural networks back in the 1990s, no one knew how to bridge the gap between recognizing patterns and building a model of the world. The AGI people drew block diagrams that they thought might work. The neural network people drew connectionist diagrams that they hoped would converge into something meaningful. No one succeeded, and the hypesters who claimed otherwise destroyed the credibility of the field. That gap is as unbridgeable in 2023 as it was thirty or forty years ago.
If you had a computer that could automatically build a model of the world, it would also build a model of itself. It would consult that model by testing the interactions among the objects it represents, and thereby learn to expect and predict the outcomes of those interactions. But neither neural networks nor machine learning algorithms can do this.
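No existing system does this, so the following Python sketch is only a schematic of what consulting a model of the world, including a model of oneself, would involve. Here the model is hand-coded, and that is the point: the machine would have to construct it on its own:

    from dataclasses import dataclass

    # Schematic of a world model containing represented objects, including the
    # agent itself. Predictions come from running the internal model forward.
    # Everything here is hand-written, which is exactly what neural networks
    # and machine-learning algorithms cannot do for themselves.
    @dataclass
    class Obj:
        name: str
        position: float
        velocity: float

    class WorldModel:
        def __init__(self):
            self.objects = {}

        def add(self, obj: Obj):
            self.objects[obj.name] = obj

        def predict_position(self, name: str, seconds: float) -> float:
            o = self.objects[name]
            return o.position + o.velocity * seconds

    model = WorldModel()
    model.add(Obj("ball", position=0.0, velocity=2.0))
    model.add(Obj("self", position=5.0, velocity=0.0))   # the agent models itself too

    # Expecting an interaction: will the ball reach "self" within 3 seconds?
    print(model.predict_position("ball", 3.0) >= model.objects["self"].position)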
It is certainly possible in principle for a computer to be sentient, though it would be hard to measure quantitatively because we don't understand what sentience means. That is bad news for the humans, who are in desperate need of a wise and knowledgeable leader.
One commentator says “A.I. will not become God because the job has already been filled.” Maybe so, but our concept of a deity is even more vague and ill-defined than our concept of sentience. And that is the challenge: how to avoid the hype and define these concepts in terms of something that can be built. That is the nature of understanding. The idea that ChatGPT could become a deity or that its descendants could wipe out mankind is more a cry for something to lead us out of darkness than a statement of possibility.
apr 28 2023, 7:45 am