randombio.com | commentary
Saturday, March 31, 2018

Self-driving pedestrians

Self-driving cars will create an arms race in artificial intelligence between cars and pedestrians.


You just know that the first time a self-driving car runs over a pedestrian, all the Luddites will come out of the woodwork to tell us why AI is dangerous. Last week one did, so here I am. Not too late, I hope. What's the time? Oh, damn.

A self-driving car would be great for those times when I get totally sloshed, or if, heaven forbid, my vision and hearing deteriorate with age to the point where I become unable to drive.

But intelligence—actual intelligence, not the corporate Silicon Valley ersatz version—is needed before a self-driving car can travel safely in the real world. It might not seem like it, but many human drivers actually possess a rudimentary intelligence. I've seen human drivers do some pretty clever tricks to escape from being crushed to death under the wheels of passing trucks. There's a very good reason for this.

[Image: pedestrians with radar]
A number of peds xinging and emitting microwave radiation

The thing that makes driving challenging is that 99.9% of the time, the rules work and little or no intelligence is needed. The purpose of forcing drivers to stay awake is so they can use their brain the other 0.1% of the time. Indeed, we used to talk about people only using 10% of their brain. The truth is, we use 100% of our brain, but only a small proportion of the time, maybe 0.1%—only in emergencies, and only when there is no alternative. This is why it's so hard to remember where you put your keys, whether you took your pill, or what you ate for dinner yesterday: these things are automatic and don't require learning, so the brain doesn't need to remember them.

The rule-based AI that we have now can easily handle the same 99.9%, but it is in no sense able to deal with those emergencies that we need intelligence for. The only way those self-driving cars can pass a driving test is if the environment is simplified to such an extent that those 0.1% situations where intelligence is needed don't happen.

Fake AI

What passes for AI today is nothing more than a rule-based system combined with pattern recognition. Calling this intelligence debases the term, and I think a lot of people would be comfortable with that: human society has been evolving to become more and more rule-bound. If that trend continues, instead of machines becoming more like us, we will become more like them, and we will come to believe that human intelligence is little more than a collection of heuristic rules. Some scientists believe this already.
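To see how little is under the hood, here is a toy sketch of "rules plus pattern recognition." Everything in it, from the rule table to the labels, is invented for illustration; no vendor's actual stack looks this simple:

```python
# Toy sketch of "rules plus pattern recognition" (all names and rules
# are invented for illustration, not any real vendor's system).

def classify(obstacle: str) -> str:
    """Crude stand-in for a perception module's pattern matcher."""
    known = {"pedestrian", "cyclist", "car", "truck"}
    return obstacle if obstacle in known else "unknown"

# The "intelligence": a lookup table of canned responses.
RULES = {
    "pedestrian": "brake",
    "cyclist": "slow down and give room",
    "car": "maintain distance",
    "truck": "maintain distance",
}

def decide(obstacle: str) -> str:
    label = classify(obstacle)
    # If no rule fires, there is no understanding to fall back on,
    # only a default.
    return RULES.get(label, "default: proceed with caution")

print(decide("pedestrian"))                     # brake
print(decide("toy baby lying in the freeway"))  # default: proceed with caution
```

Inside the 99.9%, a rule fires and the system looks smart. In the other 0.1%, nothing fires, and all that's underneath is the default.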

Following rules means that we feel less of a need to use our minds to solve problems. This is the real danger of AI. It's not intelligent drones flying around and shooting people. It is that we forget what intelligence really is.

Humans are geniuses at inventing fake problems in order to avoid facing real ones. It's a repeated pattern: scientists discover some phenomenon, the government starts funding it, people see dollar signs, and suddenly there's hype everywhere. So we get hysterical fear-mongering from the very companies that are developing AI. It's all part of the hype, and sooner or later it collapses and everyone feels embarrassed until the next fad comes along.

Humans are becoming more like machines, redefining intelligence to convince ourselves that a rule-based system is all we are. That might satisfy those who like inventing rules for others to follow, but it would start a race to the bottom for humanity, where we simplify our world to allow ourselves to be governed by static rules. Lots of people would think it's great not to have to figure things out—to live in a world where it's already been done for us. We're seeing this already: big Internet companies filter out information that conflicts with the narrative they want us to believe. The urge to simplify the world and stop thinking is apparent on college campuses, where students demand safe spaces and tiny free-speech zones. It's also why so many people refuse to listen to different points of view.

Corporate unintelligence

What's most disturbing about the Uber accident is that, according to Ars Technica, the street was actually brightly lit, not dark at all as the video released by the company makes it appear. The video falsely made it look like the pedestrian could not possibly have been seen until it was too late. Is it so surprising that corporations would hide uncomfortable facts to increase profits and protect themselves from liability? It shouldn't be: we've all seen this dynamic in action. I've been subjected to it for much of my professional life.

Accidents will happen

We can all think of scenarios that would trip up any of today's neural-network-based programs: a toy baby lying in the freeway, a dog or other animal running out, a clogged freeway entrance.

Once upon a time in Texas, I was driving to the lab late at night to finish some work. Unlike most other cities, this city had many streets with few streetlights. That night it was perfectly clear and warm, so I had my window open. No pedestrians or other cars were visible. Just as I was about to make my left turn, my intuition told me to stop. A second later, a group of black people wearing dark clothes, silently riding bikes with no lights or reflectors, rode past me. I could detect their presence only because I had my window open. I never did see them—all I could see was blinking shadows as they passed in front of some lights in the background. If intuition hadn't stopped me from making that left turn, I would have smashed into them.

Now, you might say, self-driving cars will have radar and sonar, and maybe even thermal infrared night vision. But even these can't see around corners. Can a self-driving car locate human voices or intuit when a danger exists, say a pedestrian pointing a firearm at your car, a dangerous pothole, or an object falling from the sky?

Will your self-driving car know when the driver in front is inebriated or texting, is carrying an unsecured load, or has a tailpipe about to fly off? Will it know when a bullet smashes through your side window and you need to choose a different route or make an emergency maneuver? How about when the front wheel of another car seizes up just as it passes you on the freeway at 85 mph, creating a vast shower of sparks as its front end plows into the ground in front of you? I've encountered all of those driving situations, and they can be tricky.

We also need to understand how societies form. If intelligent cars communicate with each other (which they'll eventually have to do), what's to prevent social pathologies, such as lemming-like herd behavior, from forming? When a string of self-driving cars meets a bridge that has gone out some rainy night, what will happen? I dread to think.

Complexity increases exponentially

An intelligence, whether real or artificial, can never be more intelligent than the environment it was trained in. Those small self-driving passenger trains that ferry our students from one part of the university to another can be fully autonomous because they have the track to themselves. While their universe may be physically large, conceptually it's very small.

Complexity increases exponentially, not linearly, as environmental constraints are removed. An automated car on a fixed track lives in a world that's orders of magnitude less complex than an open road. Calculating the complexity of such a world is beyond our ability at the moment, even when the entire network of possible events is fully mapped out. Calculating the ability of a rule-based system to cope with a complex environment is even harder.
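To make the exponential claim concrete, here is a back-of-envelope sketch. The branching factors and the number of decision points are invented; only the shape of the growth matters:

```python
# Back-of-envelope model: a trip is a sequence of decision points.
# If b distinct things can happen at each of d points, the number of
# distinct trajectories is b**d -- exponential in d, not linear.

def trajectories(branching: int, depth: int) -> int:
    return branching ** depth

depth = 20  # decision points in one short trip (arbitrary)

# Fixed track: almost nothing can vary at each step.
print(trajectories(2, depth))   # 1048576

# Open road: pedestrians, weather, other drivers, debris...
print(trajectories(10, depth))  # 100000000000000000000 (10**20)
```

Removing a single constraint doesn't add a few cases; it multiplies the whole space.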

Those neural networks don't make self-driving cars intelligent—they're still a set of rules, just unknowable ones. When disasters occur, it's not just because some sleazebag company cut corners. The basic concept is flawed. Until we know how to calculate the complexity of the real world, and can prove that the cars have equal or greater computational power, they should be kept in a simplified, artificial world, like a closed track, with a fixed route, where nothing can go wrong.

The alternative is to put radar in cell phones, so the pedestrians can know where they're going. Put lidar dishes on their heads. They might feel silly, but it's where we're headed. Self-driving cars will create self-driving pedestrians. Of course, sooner or later cars will develop some new threat. It's a classic arms race.


mar 31 2018, 7:33 am


Related Articles

Artificial Intelligence will not wipe out humans
Humans can do that all by themselves, thank you very much. Besides, computers won't kill one of their own.

How close are they to real AI?
We read the textbook on ‘deep learning’ so you don't have to.

The sexbot myth
The news media seem to be obsessed with sex robots. But human sexuality is far too complex for them.


