randombio.com | commentary | Tuesday, November 28, 2023

When will people stop pretending that ChatGPT knows what it's saying? And what about Joe Biden, the original Aviso baby?
Which is more intelligent: ChatGPT or Joe Biden? Okay, trick question. I've become convinced that Biden was the original Aviso baby, and ChatGPT is actually Clippy in disguise.
ChatGPT recently failed a Turing test. As far as anyone knows, no one has ever given one to Biden. But there's no doubt that if the criterion were the ability to form coherent sentences, ChatGPT would beat Biden hands down.
[Image: The Aviso baby, now leader of the free world. He's even wearing the same suit.]
One website has a big article about some guy who posed a variant of the trolley problem to ChatGPT. That problem, as most people know, asks the listener to make an impossible ethical choice: should you run over six babies or poison a million puppies? Should you blow up the Andromeda galaxy if that were the only way to save the Earth?
The question in this case was whether you should kill a billion humans or utter a racial slur. As always, ChatGPT printed a pile of noncommittal philosophical mush. But it was in the form of grammatically correct sentences, so the article reported that ChatGPT ‘wants’ to kill a billion people. Of course, ChatGPT does not and cannot “want” anything. It does not have any political views. It has no views at all.
It seems to me that would be a distinct improvement for a politician. Better to have no opinion than a bad one. ChatGPT would be the perfect politician: it says whatever the ‘voter’ wants, it loves giving long boring speeches, and it never uses cocaine. Yes, it hallucinates and plagiarizes, but all politicians do that.
By contrast, Biden says: “Now look, my, my Marine carries that. It has a code to blow up the world. That doesn't, this is not nuclear weapons is it, alright, OK, you think I'm kidding.” So, for the trolley problem it's 1 billion vs 8 billion—a clear win for ChatGPT there.
Then there's this mysterious firing and re-hiring of Sam Altman, the former and reborn CEO of OpenAI. One story was that they'd invented some magical thing called Q* or Q-Star, which allows ChatGPT to do simple arithmetic, and Altman was insufficiently concerned that this would cause ChatGPT to wipe out humanity. According to Reuters:
Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*'s future success, the source said.
Vast computing resources. We could do this on an AT&T 6300 with 32KB. But ChatGPT claims this and the crowd goes wild. One commenter wrote:
Once math is conquered, AI will have greater reasoning capabilities resembling human intelligence. After that, AI could work on novel scientific research.
Another says:
Notably, Maths is considered to be a frontier for generative AI development. Most models today are good at writing and translating. However, solving mathematical problems with only one solution shows that its reasoning capability works like human intelligence.
Having seen firsthand how closely corporate boards resemble knife-throwing acts by a circus composed entirely of insane blind people, I suppose that story makes as much sense as anything else. But the whole thing feels like a publicity stunt. What better way to convey the urgency of AI as a threat to humanity than to invent a crisis that has the top guy fired and all the employees threatening to quit?
Getting a computer to solve an arbitrary user-defined arithmetic equation is almost trivial. You take an English sentence, parse it out, translate it into arithmetic symbols, and run it through lex and yacc. I've done it myself a bunch of times. It's a standard thing in almost any software that crunches numbers. Even today you can type an equation like

    y = sin((x - 42) * log(sqrt(4 + 5)))

into almost any software, from Matlab to Excel, define a range, and get the correct answer for a thousand values of x in a few milliseconds. It is not evidence of machine intelligence, but I defy any politician to solve it in less than twenty minutes.
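For anyone who doubts how routine this is, here is a minimal sketch in Python rather than lex/yacc, doing the same three steps the paragraph describes: tokenize the typed equation, parse it by recursive descent, and evaluate the right-hand side for a thousand values of x. Every name in it is illustrative, not taken from any particular library.

    import math
    import re

    # functions the evaluator understands
    FUNCS = {"sin": math.sin, "cos": math.cos, "log": math.log, "sqrt": math.sqrt}

    def tokenize(text):
        # numbers, names, and single-character operators; whitespace is skipped
        return re.findall(r"\d+\.?\d*|[A-Za-z_]\w*|[-+*/()]", text)

    class Parser:
        # grammar: expr   := term (('+'|'-') term)*
        #          term   := factor (('*'|'/') factor)*
        #          factor := number | variable | func '(' expr ')'
        #                  | '(' expr ')' | '-' factor
        def __init__(self, tokens, env):
            self.toks, self.i, self.env = tokens, 0, env

        def peek(self):
            return self.toks[self.i] if self.i < len(self.toks) else None

        def eat(self):
            tok = self.peek()
            self.i += 1
            return tok

        def expr(self):
            val = self.term()
            while self.peek() in ("+", "-"):
                op, rhs = self.eat(), self.term()
                val = val + rhs if op == "+" else val - rhs
            return val

        def term(self):
            val = self.factor()
            while self.peek() in ("*", "/"):
                op, rhs = self.eat(), self.factor()
                val = val * rhs if op == "*" else val / rhs
            return val

        def factor(self):
            tok = self.eat()
            if tok == "(":
                val = self.expr()
                self.eat()                      # consume ')'
                return val
            if tok == "-":
                return -self.factor()
            if tok in FUNCS:                    # function call like sin(...)
                self.eat()                      # consume '('
                arg = self.expr()
                self.eat()                      # consume ')'
                return FUNCS[tok](arg)
            if tok in self.env:                 # a variable such as x
                return self.env[tok]
            return float(tok)                   # a numeric literal

    def evaluate(equation, x):
        rhs = equation.split("=", 1)[1]         # take the right-hand side
        return Parser(tokenize(rhs), {"x": x}).expr()

    # a thousand values of x, as in the text
    ys = [evaluate("y = sin((x - 42) * log(sqrt(4 + 5)))", float(x))
          for x in range(1, 1001)]
    print(ys[:3])

No intelligence anywhere in it: a regex, fifty lines of bookkeeping, and a call to the math library.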
Then there's a claim that two computers invented a new language and started talking to each other. It's nonsense. Computers have been doing that since modem noise was invented.
I often criticize the new use of the term ‘AI’ to describe anything done by a computer, but as the humans get measurably stupider every year the line between computers and humans gets thinner and thinner. Yes, a hidden layer in a neural network can identify and represent specific features in a pattern. That rudimentary feature extraction is only the first baby step, and there's nothing in it that hasn't been part of ANN theory since 1980.
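As a concrete illustration of that baby step, here is a minimal sketch assuming nothing newer than textbook backpropagation: a single hidden layer of sigmoid units learns XOR, a pattern no single layer can represent, by building its own internal features. The variable names are mine, not from any library.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR truth table

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)        # 8 hidden units
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for _ in range(10000):
        h = sigmoid(X @ W1 + b1)                # hidden-layer feature detectors
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)     # backprop through squared error
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

    h = sigmoid(X @ W1 + b1)
    print(sigmoid(h @ W2 + b2).round(2).ravel())  # should be close to [0 1 1 0]
    print(h.round(2))   # each row: the features the hidden layer extracted

That was the state of the art more than forty years ago.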
There are still two basic problems to solve: how to instantiate cross-interacting concepts and how to get the damn computer to figure out whether something is true or false. To be fair, many politicians can't do this either.
Neither ChatGPT nor Biden can get up the steps of Air Force One, so that's a tie. But avoiding politics seems to be in ChatGPT's programming. If the courts ever declare that software can be a person, I know which one I'd vote for.
nov 28 2023, 5:32 am. updated nov 30, 2023, 3:15 am
Related articles:

Can AI really diagnose Alzheimer's disease?

What does the new reliance on computer databases do to science? Nothing good

Plagiarism engines and linguistic gray goo

ChatGPT4 fails the Turing test. Also, scientists discover that water is wet

ChatGPT is not intelligent: Machine learning doesn't mean the machine is knowledgeable about anything, and it's certainly not God

Artificial intelligence, mental telepathy, and theory of mind

If only they could develop a functional AI by next Tuesday, then I wouldn't have to struggle with that dreadful tax software