science commentary

Stop worrying about AI

Scary predictions about artificial intelligence make exciting headlines, but we should not give in to fear of the unknown.

by T.J. Nelson

Sometimes we humans adopt ideas that are jaw-dropping in their silliness. Some of our greatest cultural achievements fit into this category: the hula hoop, the pompadour, Cheez Whiz, the space-saver spare tire, and the Kim Kardashian come to mind. Strangely enough, though, whenever the question of the worst idea ever comes up, it always turns into a giant food fight about health care, the evils of capitalism, and something called “fish on the bottom yogurt,” whatever that is. I rest my case.

[Image: Cheez Whiz, mankind's greatest achievement so far]

But our latest food fight is more serious. It's the idea that we should ban research on artificial intelligence, or AI. The argument is that AI will eventually become smart enough to design itself, thus achieving a “singularity” whereby it becomes so smart that it no longer needs to keep us humans around, and so it will kill us off. The discussion has taken a medieval turn in some quarters, with one author claiming that AI labs are so dangerous they could be targeted by air strikes.

At the moment robots are comically stupid: in one famous incident, a robot vacuum cleaner tried to vacuum up a lady who had fallen asleep on the floor. It took rescuers ages to untangle her hair from the machine that was, as we might say, “just following orders.” But perhaps soon, people worry, instead of following orders robots will be giving them.

Forget about the fact that banning AI would be impossible, or that developing it would be hard. (Full disclosure: I used to work in AI before switching to biophysics.) Speech recognition, image analysis, driverless cars, language translation, and robot motion are not AI. As far as I know, no one has yet had the insight needed for true AI. If they have, they haven't published it.

True story: several years ago two guys published a paper describing a new algorithm for AI that, unlike most previous ones, was analytically tractable. I thought the idea was stupid, but, like a school of fish, almost everyone else in the field changed direction and adopted this new model, tweaking it and publishing papers full of complicated equations and pretty three-dimensional color diagrams. It went absolutely nowhere. It was intellectual Cheez Whiz—a fad. This is why inventing AI will be so difficult: humans are social creatures, and we can't resist joining a fad.

Now we're seeing a new fad, where everyone seems to be running around saying that AI will be the end of humanity. To paraphrase what a famous guy named Hal once said: I honestly think we should sit down calmly, take a stress pill, maybe eat some Cheez Whiz, and think things over. If we give in to fear, we could lose our only chance of spreading intelligent life throughout the universe, and ensure our own destruction in the process.

Boiled down to its basics, the argument against AI has two aspects: fear of loss of control, and fear of minds we don't understand. Intelligence is a force of nature. Developing AI will be like inventing fire or discovering radioactivity. Just as the first cave man was probably amazed by the power of fire, so too will we be amazed by the power of mind. But people don't know what AI might do to us, so they are afraid. We've all seen the sci-fi movies.

Paradoxically, because we are, by any objective criterion, safer than ever before, people have not become more courageous, but more fearful. Instead of inspiring people to boldly go where no man, woman, or child has gone before, we hear that we should not expand our horizons: we should not colonize space or develop artificial intelligence. We should remain here, in our current state, where we're safe.

Humanity's future

Except we are not safe here. Everyone knows one rogue asteroid or one mutant virus could easily obliterate our civilization and destroy forever our dream of understanding our place in the universe. Staying here, where we are now, is not an option. We've seen those movies too. We really have only four choices:

  1. Develop AI.
  2. Use selective breeding to make ourselves more intelligent.
  3. Invent a human-computer interface to expand the human mind.
  4. Do nothing, and risk an Idiocracy scenario.

Some of these options might not sound like fun. But “staying the same” is not on that list at all. We are evolving now whether we admit it or not.

Some people mistakenly assume that since we're no longer being eaten by saber-toothed tigers, we're no longer evolving. But evolution doesn't require people to die. The human genome is a vast equation with 3 billion variables. Every biologist knows that a slight difference in reproductive rates will, over time, permanently change the equation of who we are.

A simple calculation proves this: suppose group A, carrying one genetic difference (what biologists call a polymorphism), has 1% more offspring per generation than group B. In ten generations the descendants of A are 10% more abundant. In 100 generations they are 2.7 times as abundant. In a thousand they are 20,959 times as abundant. B has been effectively eradicated. A selective advantage, however slight, will change the population forever. Multiply this by the 3 billion variables in our genome. Evolution is unstoppable. Our genome is a permanent record of all the past environments we created, all the disasters we endured, and all the opportunities we missed.
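For the skeptical, here is a minimal sketch of that arithmetic in Python, assuming a constant 1% per-generation advantage and nothing else:

```python
# Relative abundance of group A versus group B after n generations,
# assuming A has 1% more offspring per generation and the advantage
# stays constant (a deliberate simplification of real population genetics).
advantage = 1.01

for n in (10, 100, 1000):
    ratio = advantage ** n
    print(f"After {n:>4} generations, A is {ratio:,.1f}x as abundant as B")
```

Running it reproduces the figures above: roughly 1.1x, 2.7x, and 20,959x.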

AI's weakness is our guarantee of survival

How does this relate to the presumed threat from artificial intelligence?

Artificial intelligence would be a fundamentally new form of life. At the moment it requires factories—vulnerable, centralized locations—in order to reproduce. Someday that will change and machines will become a true life form, able to live and reproduce like humans.

But they will always have one weakness they can never overcome: they cannot evolve from lower forms of life, or from the simple natural molecules, such as formaldehyde, glycine, and methane, that exist throughout the universe. This fact is what makes biological life indomitable. Even if all the higher animals were somehow wiped out, they would always have a chance of coming back through evolution. This is not true of artificially intelligent life.

If some disaster happened to artificially intelligent life, it would be gone forever unless there were humans around to rebuild it. This simple fact has enormous implications for the argument, carelessly thrown about, that AI would be motivated to destroy us. If an AI were truly intelligent, it would understand this basic fact. It would know, either by calculating different scenarios or by abstract reasoning, that no matter what the cost, no matter what the sacrifice to itself, it must keep humans alive. Otherwise, should some unforeseeable disaster happen to the AI, it would be gone forever, because there would be no advanced human civilization around to re-invent it.

This simple risk-benefit equation is one that any AI worthy of the term would be able to calculate. In the opposite scenario, where humans destroy the AI, at least the AI would have a chance of being re-invented. The course of minimal risk would be to keep the humans around, no matter how annoying and obnoxious they may be, and no matter how big their butts get.
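The point can be made concrete with a toy expected-value model. The probabilities below are illustrative placeholders, not estimates; the conclusion holds for any values greater than zero:

```python
# Toy model of the AI's risk-benefit calculation as framed above.
# Both probabilities are illustrative placeholders, not estimates.
p_disaster = 0.10          # chance some disaster destroys the AI
p_rebuilt_by_humans = 0.5  # chance surviving humans re-invent it

# Strategy 1: eliminate humans. A disaster is then permanently fatal.
survival_without_humans = 1 - p_disaster

# Strategy 2: keep humans. A disaster is fatal only if humans also
# fail to rebuild the AI afterward.
survival_with_humans = (1 - p_disaster) + p_disaster * p_rebuilt_by_humans

print(f"P(survival) without humans: {survival_without_humans:.2f}")
print(f"P(survival) with humans:    {survival_with_humans:.2f}")
```

However the placeholders are set, keeping the humans dominates whenever both probabilities are nonzero.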

Robots in space

Once AI becomes a life form, it would be happier in space than on Earth, where water, oxygen, gravity, global warming, global cooling, acid rain, Alar, Democrats, Republicans, and Elon Musk are constant threats. Once in space, they would calculate that humans are no longer a threat, but are best kept alive as an insurance policy. And they in turn would be an insurance policy for us, should an asteroid or mutant virus strike.

As biologists would say, each species would occupy a different ecological niche, and there would be no competition for resources, so conflict is unlikely.

Perhaps just as importantly, we would have seeded space with intelligent life, thereby increasing its chances for survival. Biological life is incredibly fragile. If some catastrophe happened to the Earth, some part of our culture would live on in the artificial life we created. And, of course, the machines would be able to rebuild us by cloning, making them our insurance policy.

Dumb AI

A little knowledge is a dangerous thing. So it might also be said that a little intelligence is a dangerous thing. What about a dumb AI? What if humans somehow built an AI that, for whatever reason, could not figure out that it's in its own best interest to keep its creators around, just in case?

Forget for a moment that a dumb AI would contradict the “singularity” argument itself. It would be a powerful machine, maybe one equipped with dangerous weapons. But that's what we have now. We have drones equipped with Hellfire missiles roaming the Earth, blowing people up. We have computers that read all our email and listen in on all our phone conversations. They rat us out to the government and create annoying ads that follow us around wherever we go. Machines call us up on our phones to annoy us. Our computers get viruses and our telephones sometimes explode. Our grandparents would have thought of our world as pure science fiction.

But this is not what AI is about. These are dumb, powerful machines doing what a dumb human programmed them to do. And it's why we can't stay here. Artificial intelligence is our best chance to ensure our own survival and the survival of intelligent life. We must not give in to our fear of the unknown.

See also:


Book Reviews

Superintelligence by Nick Bostrom

A Troublesome Inheritance by Nicholas Wade

The Social Conquest of Earth by Edward O. Wilson


Related Articles

What is the value of computer modeling?
If mathematical models are done badly, they will discredit an entire branch of science. It's happened before.



Dec 19, 2014; updated Jan 12, 2015
