The attack of the box spring
by David Benjamin
“… Miss Glory, robots are not people. They are mechanically much better than we are, they have an amazing ability to understand things, but they don’t have a soul. Young Rossum created something much more sophisticated than Nature ever did — technically at least!… ”
— Karel Capek, R.U.R. (Rossum’s Universal Robots), 1921
MADISON, Wis. — Ever since Karel Capek introduced the idea in his play, R.U.R., the tortured relationship between humans and the robots they’ve built has been a recurring science-fiction theme — so prevalent that it’s become a little tiresome. Isaac Asimov expanded the concept eloquently in I, Robot and Philip K. Dick turned the theme into serious literature with his classic novel, Do Androids Dream of Electric Sheep?, which became the disturbing sci-fi film, Blade Runner — in which Harrison Ford’s grim job is to kill robots who’ve gone human.
In the movies, Stanley Kubrick set the tone for man/machine tension with his contrarian computer, the HAL 9000, in 2001: A Space Odyssey. Since then, machine mischief has escalated all the way to robot Apocalypse in the Terminator flicks, in which a global computer network tramples Asimov’s three laws of robotics and exterminates humanity.
All good fun, but make-believe. Less amusing are the thousands of non-fiction robotic machines and computers that have displaced millions of manufacturing jobs all over the world. The next big shock is robo-cars, known by their auto industry euphemism as “autonomous vehicles.” Soon, according to Detroit, Toyota and various gruppenführers at Benz and Daimler, we’re going to be paring our nails, balancing our books and making sweet, sweet love in the back seat while our cars hurtle along the autobahn — untouched by human hands — dodging self-driven eighteen-wheelers at 120 miles an hour.
I know slightly more about this startling development than the average bear because my wife, Hotlips, is a technology reporter who covers the auto beat, especially all those electronic synapses that now snap and crackle deep within the fast-growing brain of your typical late-model sedan. One of her duties is to chronicle what happens when your trusty robo-car goes inexplicably haywire, as happened recently in Florida. A trusting robo-car early adopter allegedly let go of the wheel and died when the “Autopilot” in his Tesla apparently mistook the side of a turning semi-trailer truck for, um… the sky? Or something. Nobody knows. The car’s not saying.
The technology that made this unique fatality possible is a branch of “artificial intelligence” called “machine learning.” By digesting and comparing data in gargantuan amounts at speeds inconceivable to the human brain, computing devices learn to recognize patterns. A robo-car’s sensors, for example, can distinguish a deer crossing the road from a wind-blown leaf. The car knows to stop before hitting the deer, but ignores the leaf.
The hitch is that a machine requires a lot more data than a person. Your average human toddler can figure out the difference by looking at deer and leaves a few hundred times, probably less, especially if Mommy helps. The motherless computer needs millions of “deer views” and “leaf views” to match wits with the toddler.
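For readers curious to see that point in miniature, here is a toy sketch — invented “deer” and “leaf” feature numbers, a made-up nearest-centroid learner, nothing resembling any real robo-car’s software — whose only purpose is to show that the machine’s judgment sharpens with the number of labeled examples it has digested.

```python
# Toy illustration only: a pretend "deer vs. leaf" classifier on invented
# feature vectors (apparent size, lateral speed, greenness). The "learning"
# is just averaging the examples of each class; the lesson is that accuracy
# depends on how many labeled views the machine has seen.
import numpy as np

rng = np.random.default_rng(0)

def make_examples(n):
    # Hypothetical sensor features, with deliberate overlap between classes.
    deer = rng.normal(loc=[5.0, 1.0, 0.2], scale=2.5, size=(n, 3))
    leaf = rng.normal(loc=[0.3, 3.0, 0.9], scale=2.5, size=(n, 3))
    X = np.vstack([deer, leaf])
    y = np.array([1] * n + [0] * n)   # 1 = deer (brake), 0 = leaf (ignore)
    return X, y

def train_centroids(X, y):
    # The machine's "knowledge" is simply the average of what it has seen.
    return X[y == 1].mean(axis=0), X[y == 0].mean(axis=0)

def predict(X, deer_c, leaf_c):
    # Call it whichever class's centroid is closer.
    d_deer = np.linalg.norm(X - deer_c, axis=1)
    d_leaf = np.linalg.norm(X - leaf_c, axis=1)
    return (d_deer < d_leaf).astype(int)

X_test, y_test = make_examples(1000)
for n in (3, 30, 300, 3000):          # more "deer views" and "leaf views"
    X_train, y_train = make_examples(n)
    deer_c, leaf_c = train_centroids(X_train, y_train)
    acc = (predict(X_test, deer_c, leaf_c) == y_test).mean()
    print(f"{n:>5} examples per class -> {acc:.1%} correct")
```

Running it typically shows the guesses getting steadier as the example count climbs — the toddler, of course, needs no such spreadsheet.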
Plus, teaching the kid is a lot cheaper than uploading all those deer and leaf images. One of my bosses, John Ketteringham, once said, “Why are we spending all this money on artificial intelligence when we can get the real thing for free?”
But cost is not really the object. Among Hotlips’ favorite technologists is Philip Koopman, who teaches robotics at Carnegie Mellon University. He knows as much as anyone in the world about machine learning, but nobody in the world knows exactly how machines learn — not even Koopman. This is why Dave, in 2001, was so surprised when HAL said, “I’m sorry, Dave. I’m afraid I can’t do that.”
A machine can memorize a billion possibilities and arrange them into coherent patterns. Trouble is, in real life, there are more weird variables than are dreamt of in HAL’s philosophy. As Koopman said, “When something really unusual happens, people – human drivers – would at least realize something unusual has happened.”
Example: I used to live in Boston, a city widely regarded as having America’s worst drivers. One morning, I’m on a stretch of the infamous Southeast Expressway roughly situated between Sister Corita’s gas tanks and North Station. This narrow, oil-slicked four-lane deathway — riddled with potholes, lined with mangled auto parts and populated by maniacs — is a combat zone that would rattle the nerves of Chuck Yeager.
Despite the constant congestion on this hellish corridor, every driver — according to Boston tradition — has his pedal to the metal while edging within inches of the bumper in front of him before leaning on his horn and cursing out the window. I’m the rare driver in this death race who maintains a few car-lengths’ distance in front and keeps all ten white knuckles on the wheel.
Suddenly, about 300 yards shy of the dread South Station tunnel, I notice something “unusual.” Inexplicably, out of nowhere, directly in the lane in front of me, a brand-new, plastic-wrapped, queen-size Perfect Sleeper box spring.
Dead ahead. A box spring.
I’d never seen a box spring on the highway before, queen, king or otherwise. In the 30-odd years since then, I’ve never seen another. I know, in my heart, that I never will.
To my left, a guard rail. On the right, a 26-ton straight truck matching my speed. A few scant millimeters behind, a suburbanite in a Buick lighting a cigarette and chuckling over “Car Talk” on WGBH. I have about three seconds. If I swerve, I hit the truck. If I brake, even lightly, I launch a chain of collisions that could entangle ten or twelve cars in both lanes.
So, to my wife’s alarm (she’s right beside me, a few feet from that immovable truck), I make an unpatterned, non-intuitive choice. I accelerate.
My car in those days was a ’73 Plymouth Duster with a TorqueFlite transmission and, under the hood, that legendary 225-cubic-inch slant-six. We’re close to 70 when we hit the box spring like a rhino attacking a china cabinet. There’s a bumpety-bump and a crunchy sound, after which I look into the rear-view mirror. All I can see is a cloud of Perfect Sleeper debris, chunks of fluff, wooden shards, springs flying every which way, bouncing off the startled Buick. An amazing sight. The box spring had, for all practical purposes, vanished. Abracadabra!
The Duster was unscathed. My wife stopped trembling in less than a half-hour. And I had done something that no “smart” machine could have done. With the safety of countless other humans (a term I apply loosely to Boston drivers) in the balance, I had faced the unexpected and made an unpredictable, spur-of-the-second, seemingly dumb choice. Which worked, probably because my intelligence is the real thing.
Not to mention that it’s free!