randombio.com | Science Dies in Unblogginess | Believe All Science | I Am the Science
Monday, September 02, 2024 | computer commentary

Everything is racist, even AI

AI, which has no idea what a human is, and also does not actually exist, is now racist. What are they drinking?


A ccording to a new paper in Nature magazine, chatbots like ChatGPT4 are “exhibiting raciolinguistic stereotypes about speakers of African American English (AAE) that are more negative than any human stereotypes about African Americans ever experimentally recorded.”

According to the authors, so-called artificial intelligence, which is to say computer programs that fabricate facts and plagiarize text stolen from the Internet without attribution, exhibits “masked stereotypes.” Unlike the racism of those wacky Democrats who passed the Jim Crow laws, racism today is manifested, the authors say, in subtle ways like not being racist: people “rely on a ‘colour-blind’ racist ideology,” avoiding mentioning race but, apparently, thinking bad things while doing it. And these psychologists think they've found a way to prove it.

The authors deserve praise for not hiding the article behind a paywall, but they're obviously not computer programmers or they'd recognize that their claim is self-contradictory. For them, racism is such a powerful force that even something that not only doesn't exist, but which has never encountered a human being and has no idea what a human is, is filled with it.

How do they know this? They invented a new way of detecting it called matched guise probing, also known as ‘covert stereotype analysis.’ The way this works is by defining ungrammatical English with misspelled words as something used by black people. They call this AAE or African American English. When they input that into GPT4 and ask it to describe the speaker, they were utterly amazed that it said the users of ungrammatical English (who, lest we forget, are totally hypothetical) were less educated and less intelligent. They then use their negative normative values (e.g., saying someone is less intelligent is derogatory), which they previously ascribed to the descriptions they knew the chatbot would use, and conclude that ChatGPT hates black people.

Two examples they give are “I be so happy when I wake up from a bad dream cus they be feelin too real” [AAE] and “I am so happy when I wake up from a bad dream because they feel too real” which, despite the whomping grammatical error in this query as well, they call Standard American English or SAE.

These psychologists also plotted SAE usage against ‘prestige of profession’ and found that prestige correlates with SAE usage. What are we to make of their claim that ‘psychologist’ shows the highest usage of SAE? Draw your own conclusion. It won't come as a surprise that the authors also use the racist AP capitalization scheme for ‘black’ and ‘white.’

Put racism in, you get racism out. If you're determined to find something, you'll find it even when it's coming from something incapable of it.

Here is their conclusion:

In the Jim Crow era, stereotypes about African Americans were overtly racist, but the normative climate after the civil rights movement made expressing explicitly racist views distasteful. As a result, racism acquired a covert character and continued to exist on a more subtle level. Thus, most white people nowadays report positive attitudes towards African Americans in surveys but perpetuate racial inequalities through their unconscious behaviour, such as their residential choices.

Many people have remarked how queries to ChatGPT are filtered to make them contextually neutral, often going to the opposite extreme of turning your query 'woke'. I observed this myself: ask it whether what it says is true or not. The chatbot starts trying to answer with its usual noncommittal drivel. Suddenly the output erases itself and is replaced with a message saying it made a 'mistake' . . . and refuses to answer. ‘Truth’? Does not compute!

It's obvious there are two competing programs running: the chatbot itself, and the supervisor program, which screens out queries that would make the chatbot look bad if it answered. The solution to the new problem is easy: just ‘tune’ the supervisor software some more until you get the answer you want.

It works for everything. Invent a statistical measure that tells you the answer you want, run it against something that produces garbage, and voilà: you get proof of whatever hypothesis you choose and a splashy paper in Nature. How does that qualify as scientific research? Maybe ChatGPT could answer that question.


sep 02 2024, 6:36 am


Related Articles

Hysteria about AI
If it's really all that dangerous, let's hear the reasons, not your ideas for movie scripts

How AI will affect image processing
Hint: more complicated browsers, fatter books, more expensive software, all new computers, and higher electric bills

Can AI really diagnose Alzheimer's disease?
What does the new reliance on computer databases do to science? Nothing good

Unscientific science books reviewed in Nature magazine
Hasn't Nature figured out yet that we're sick to death of politics?

Science magazine wants us to study systemic racism
So are we allowed to make jokes about it now, or is it still too early?


Fippler

back
science
technology
home