Wednesday, September 25, 2019

The Weekly Screed (#927)

The eternal light on my dashboard
By David Benjamin

“A car doesn’t understand why it’s driving anywhere.”
— Bart Selman, professor of computer science, Cornell University

PARIS — The first car I ever drove (with a license) was my mother’s 1961 Ford Fairlane, referred to affectionately by friends as the Brown Bomb. Because Mom’s Ford was a used car, ill-used, badly maintained and prematurely decrepit, I became prematurely, by necessity, an automotive diagnostician.
 
Mind you, unlike my kid brother Bill, I never evolved into a “car guy.” I’ve never been able to lift the hood, name every part amongst the cylinders, wires, hoses and screw-top lids, then instantly spot the veeblefetzer that’s gone haywire. But for survival’s sake, I refined my senses to get the feel of when the Brown Bomb might be ready to explode (or simply burp disgustingly and cease to move). 

An early sign that something might be queer under the Bomb’s hood was a “funny smell.” I would proceed to sniff, almost audibly, ’til the smell either built into a catastrophic stench or — I hoped — faded gradually from olfaction, a false alarm, probably just a piece of rotted rubber falling onto a hot manifold.
 
“Funny sounds” — the ones that go away as soon as you bring the car to a mechanic (“I don’t hear nothin’, kid”) — were even more ominous. Thumps, clicks, rattles, ka-chunks and sudden squeals that evoked images of tiny kittens being fed through a clothes wringer — these all pumped an icy chill into my panic-prone ventricles. Again, all I could do was keep listening, strain to locate the source of the perilous clatter, and hope to Christ — please, please! — that it would go away.
 
Usually, it didn’t (until I got it to the deaf mechanic).

Last week, I was a guest at the AutoSens Conference in Brussels. It took place at the Auto World Museum, a sort of vintage-car Disneyland. I strolled the galleries hoping nostalgically to spot a ’61 Fairlane. No such luck. The Brown Bomb was a distinctly proletarian conveyance, not a “classic” worthy of historical remembrance among the Bugattis, Roadmasters and Horches on display in Brussels.
 
The AutoSens show posed a paradox. The museum presented to the conferees glamorous, glistening examples of a bygone automotive age whose ingenious, symphonic mechanics are giving way — inside new-model cars — to electronics. And this show was all about those new electronics.

One of the clichés of an auto industry that now includes non-car companies like Google (Waymo), Hitachi and Texas Instruments is that a contemporary car is “a computer on wheels.” Within a generation, the family sedan will be an odorless electric vehicle (EV), driven largely or entirely by itself using advanced driver assistance systems (ADAS) and artificial intelligence (AI). Its only discernible utterance will be a steady, dull, unsyncopated hum.

Utopia!

Of course, I didn’t belong at the AutoSens show. I’m no more a technologist than I am a car guy. I can articulate no coherent challenge to the promises posed — and announced long before this conference — about a future of autonomous vehicles (AV) tooling along at 120-per without driver or steering wheel, its passengers cocooned on glove-leather cushions, oblivious to the fleeting landscape, smelling nothing, untroubled by even the tiniest knock, tweet or rhythmic thunk.
 
Nevertheless, stubbornly, I husband my doubts. For instance, there’s this annoying light on the dashboard of my own car, a cozy little low-mileage 2014 Mazda. The other day, suddenly, the Mazda beeped anxiously. A symbol lit up on my dashboard, suggesting trouble with my tires. I hate and dread tire problems. Hurriedly, I pulled into a service station and pumped my Dunlops up to 32 psi.
 
The light stayed on. The beep did regular encores.

So, whaddya think? I took the car to a mechanic. He pumped my Dunlops up to 36 psi, looked them over, declared them sound and assured me that the Mazda’s tire sensors would notice that my pressure was once more hunky-dory. It would “re-set” intelligently in a few minutes. That was three weeks ago.

The light still glows. The beeping goes on.

One of the terms that haunts the brave new world of automotive electronics is “black box.” Most people know this dire phrase from the aftermath of airline disasters, when investigators hunt for the “black box” flight-data recorder that contains every human, aeronautic and electronic event leading up to the crash.
 
In artificial intelligence (AI), specifically “machine learning,” the “black box” is more virtual than palpable. As an AI system absorbs colossal amounts of data — far more than humanly possible — the machine “learns” how to apply its data to real-life challenges. It learns how to choose, decide and (sort of) think.
 
A pet example that illustrates the awesome potential of machine learning is the intricate Asian board game of Go, similar to chess but with vastly more possible moves. Today, the world’s best Go player is no longer a person. It’s a computer so fast and data-intensive that it can humiliate any human Go master anywhere.

But here’s the rub, as stated by Bart Selman, a Cornell computer science professor whom I met at AutoSens. “The machine doesn’t know it’s playing Go.” One of the basic truths of artificial intelligence is that a machine that can do something better than anyone else can do it has no idea what it’s doing.

Nor, and here’s the second rub, do any humans know what it’s doing.
 
As it grows and learns, the AI box gets blacker and blacker. Computer scientists don’t know how it understands, whether it understands or why it makes its choices. So far, there’s no way for people to crack the box and figure it out.

Nevertheless, carmakers and tech companies all over the world are full speed ahead, installing artificial intelligence in the (lucrative) Cars of Tomorrow.
 
Not long ago, a more-or-less experimental Car of Tomorrow, driven on regular public roads by an AI system (with a really trusting passenger), looked through its visual sensors at the side panel of an 18-wheel, 40-ton semi-tractor-trailer making an illegal left turn. But it didn’t see a truck. It saw what it thought was the sky and so, it kept going — smack-dab into what turned out to be a truck after all. The trusting human inside the Car of Tomorrow did not make tomorrow.

My mother was often afraid of driving the Brown Bomb, especially on roads with big trucks. Mom was prone to panic in the proximity of a Peterbilt cab-over. Mom’s fear made her a hazard. I learned this before I was allowed to drive. So, when I got my license, I took over for Mom whenever possible. By the same token, whenever I could coax the wheel from a friend who had drunk too much, or was too old, too blind or simply reckless, I did so.
 
People recognize the hazards posed by people. This is why we drive for them whenever we can, and why we accept the more than 30,000 traffic fatalities that happen every year on America’s roads. We understand human frailty.
 
What we don’t understand is electronic frailty.
 
Car designers have already proven that adding a measure of autonomy — with early ADAS applications — can reduce the carnage on the highways. But machines will make mistakes, typically worse than the false positive that keeps blinking and beeping on my dashboard. Machines that screw up do so very rarely, but when they do, it is sudden, silent, secret and mysterious. They will do so without remorse or reflection, because they don’t know what a mistake is. People skilled at reading other people are helpless to foresee — or react quickly enough to — an error that originates inside a virtual black box that doesn’t care one way or the other.

We won’t be able to smell why the car thinks a tire is flat when the tire’s not flat. We won’t hear how the car saw a deer ahead in the road where there is no deer.

Our next-of-kin will never learn why the car thought it saw the cloudless sky and accelerated into a bridge abutment painted blue.
