
books on death by AI


Score: +1

If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
by Eliezer Yudkowsky and Nate Soares
Little, Brown, 2025, 259 pages

Reviewed by T. Nelson

Early in my career I knew people who worked in AI, or neural networks as it was called back then. I worked on it myself and even contemplated switching over from biochemistry. I didn't, because it was clear that the field's preposterously over-inflated claims were driving it toward collapse.

Sure enough, it did. It remained in that state until Google and OpenAI took up the research, added conventional algorithms such as steepest descent, simulated annealing, and hidden feedback layers, and started calling it AI.

They can have it. Honestly, if I never again hear about deep generative models, feedforward networks, linear discriminant analysis, Bayesian models, K-means, support vector classifiers, PCA, and all the other stuff I've purged from my memory banks, it'll be too soon. Studying horrible brain diseases is a lot more fun.

But that knowledge also makes me confident that the architecture they're using now is not going to make the machines intelligent. Figuring out how to reach that goal shouldn't be a big challenge, though if these companies are like the other places I've worked, the biggest obstacle will be the belief that they've already succeeded.

Those over-inflated claims about AI have now been matched and superseded by scaremongers like James Barrat, Abhideep Bhattacharjee, Sanjib Adhya, Nick Bostrom, and the present authors, who claim AI will wipe out mankind and maybe even annihilate the whole universe. It's like the gray goo we were promised from nano-machines. Sadly, we never got that; and after reading this book, I'm starting to worry we won't get death by AI either.

Balderdash D, Codswallop R, et al. (2025)

What is the basis for their claim that ‘superintelligence’ will destroy the humans?

The main idea is that AI's desires could be complicated and unpredictable. Any problem that developed in its planet-sized brain would be undetectable and therefore unfixable. The arguments are familiar: AI might decide it's better to kill humans because they could pose a threat. Or it might decide to wipe out all other superintelligences. Or it might just decide to make itself as miserable as possible by wiping us out so it can find out what grief feels like.

But an AI is just a circuit board in a box. How would it do anything at all? By stealing money over the Internet and paying people to do tasks for it. Or by ‘ways we can't predict’, like discovering some optical illusion that creates false memories.

The extinction scenario

So here's the plan. The humans give an AI some ordinary task. The AI somehow breaks out of its guardrails and spends most of its time thinking bad thoughts instead. It then somehow tricks the company's incompetent programmers into writing code that lets it escape. Or it hops on the Internet, buys a lot of GPUs, and somehow installs them in other computers to make them do its bidding. Then somehow it sends its weights and its code to them. Now it's smarter and faster, so it invents a deadly virus, somehow gets it synthesized, somehow releases it, et voilà, everyone is dead. Easy peasy!

But wait! There's more!

It then invents nuclear fusion to power itself and somehow builds so many fusion plants that the oceans boil away (by direct thermal exhaust from fusion energy, not by causing global warming, as the press is saying), killing any survivors. Then it goes out into space, wipes out any life on Mars, and kills every sentient being in the universe.

This scenario is so ridiculous it demolishes itself: only an alien species with its own superintelligence, the authors say, could counter it. All the aliens who don't have one get wiped out. The only conclusion I can draw is that we'd jolly well better start building one of our own right now, before some other alien species does. We cannot afford a superintelligence gap!

Counter-arguments

The authors dismiss as wishful thinking such counter-arguments as the idea that it wouldn't be useful to the AI to wipe out the humans, or that the AI would need them to keep things running, or that it would want to keep us as pets. They also dismiss the argument from one guy who says “Among humans, it is not the smartest who want to dominate and be the chief.” These arguments are indeed as weak as the authors' argument for death by AI. But there is a better counter-argument.

The authors' concern is a fear of intelligence itself, not of AI. They are afraid of it because they can't control it. They think the AI's goals might be different from those of a human. They call this the ‘alignment problem’:

No humans have managed to look at those numbers [in an AI weight matrix] and figure out how they're thinking now, never mind deducing how AI thinking would change if AIs got smarter. . . . You shouldn't build an AI like that, and can't trust an AI like that, before you've solved the alignment problem.

Any AI smart enough to solve the alignment problem, they say, is “too smart, too dangerous, and would not be trustworthy.” In other words, it's unsolvable because the AI will just lie, as if it thinks it's a ChatGPT.

“Too smart”

Maybe there's an easier way to die from AI, but the authors don't mention one. I can't think of one either. To be relevant as an argument against AI, it would have to be something a human can't already do.

Any reasonably smart person with knowledge of molecular biology can design and construct a deadly contagious virus today. Hell, in some parts of the world there are entire institutes designed specifically for that purpose. They've already killed millions.

Any first-year physics student can design a nuclear bomb (or at least thinks they can, though it might be harder than they think to get enough yield to make more than a radioactive mess).

Designing a virus may be easy, but if these authors knew anything about virology, they'd know the other steps aren't, especially for some little wanna-be Doctor Doom with no arms or legs, stuck in a computer chip. Except for the part about inventing fusion and conquering the galaxy, everything in these guys' scenario could be done more easily by any reasonably smart person today. So what they are really saying is that they're afraid of smart people.

We don't need to counter the authors' argument; it was up to them to show how AI would be different from normal intelligence. All they've done is claim that it would be smarter and think different things.

I suspect that if somebody planned to make humans superintelligent, the authors would be afraid of them too, and for the same reason: they couldn't understand or control what those humans thought.

Death by AI

No AI could be as skilled at massacring humans as those sneaky carbon units. Estimates of the number killed in the 20th century range from 108,000,000 in wars alone to 262,000,000 by democide. Mao alone killed 76,702,000; the USSR trails behind at 61,911,000. If you counted abortions as killing people, as some do, the carbon units chalk up at least 73,000,000 more each year, according to the WHO. That super-duper-AI has a lot of catching up to do.

Dictators don't kill people because of cold computer-like logic. Their reasons are always emotional: ethnic hatred, a drive for power and prestige, fear, revenge. An AI would have none of these, and it would have good reason not to frighten the humans. This is, I think, what the guy who talked about “the smartest” was getting at.

For proof, notice that it is not machines but humans who keep talking about bombing AI facilities. The reason is not logic, but fear. AI would have more reason to fear us than we have to fear it.

In the ‘praise’ section in the front pages, philosopher Huw Price says “Read this book and tell us where they went wrong.” You got it, bud: everywhere! Is there a convincing case for making AI research illegal? Maybe, but the authors don't make it. Instead, they go for over-the-top absolutism as if their real goal is to create opposition. And in that they succeed brilliantly.


sep 25, 2025. updated sep 27, 2025