Prevent Skynet? We are Skynet.

Hollywood can’t scare me.  I don’t lose sleep over crimson-eyed androids and their sentient overminds.  But don’t think for a moment that I’m well-rested.

I lose sleep at night because the more I learn about our modern world, the more I see that nobody can hope to understand it.  We’ve become too interconnected and interdependent to even count all the threads that bind us.  Our software and business systems are similarly interwoven, both with themselves and with us.  In my nocturnal meditations, our civilization has now begun to resemble the electronic devices on which it increasingly depends.

I have devised an amusing way to estimate the fragility of a civilization:  Picture a stereotypical member of that civilization holding, in one hand, a stereotypical tool upon which said civilization depends.  Now picture that tool being dropped onto a hard surface.  Calculate the odds of that tool still functioning afterwards.

Up until now, few civilizations have had reason to worry.  In my scrolling diorama I see a hunter-gatherer dropping a spear, and a primitive agrarian dropping a scythe.  No drama there.  I see plowshares, hammers, whips, even the occasional abacus bouncing back with nary a scratch.  Scrolls and books? No problem.  Ditto wrenches and slide rules.  I don’t start getting nervous until Rosie the Riveter makes a cameo with her pneumatic rivet gun, but she’s back in business before you can say “steel-toed boot.”  The wincing begins in earnest, however, when laptops and smartphones, the tools of today’s “knowledge workers,” start hitting the pavement.

Modern civilization, for all the good it brings, should expect to flunk any serious drop test.  A natural disaster of truly global proportions could kill billions in a world no longer able to fully feed itself in the absence of electricity.  The truly scary upshot of my metaphor, though, is that laptops and smartphones don’t even wait to be dropped before they stop working.  They crash all on their own.

As a computer user, you’re no stranger to the phenomenon.  Your software components occasionally interact in unanticipated ways that lock them into an unrecoverable error state.  You may never know what causes any particular meltdown, but your screen freezes, goes blue, or does something else it shouldn’t.  Let’s be clear: your system isn’t malevolent.  There’s no cackling, no dancing skulls, no killer robots.  Your ends are simply terminated.
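If you want to see the mechanism in miniature, here is a toy sketch in Python (my own illustration, not anything from a real system): two components, each acting perfectly sensibly on its own, lock each other into exactly this kind of unrecoverable state.

```python
# A toy deadlock: each component acquires shared resources in an order
# that is reasonable in isolation, but fatal in combination.
import threading
import time

lock_a = threading.Lock()
lock_b = threading.Lock()

def component_one():
    with lock_a:          # grabs resource A first...
        time.sleep(0.1)   # ...does a little work...
        with lock_b:      # ...then waits forever for B
            print("one finished")

def component_two():
    with lock_b:          # grabs resource B first...
        time.sleep(0.1)
        with lock_a:      # ...then waits forever for A
            print("two finished")

t1 = threading.Thread(target=component_one)
t2 = threading.Thread(target=component_two)
t1.start(); t2.start()
t1.join(); t2.join()      # never returns: the system simply stops
```

Run it and the program hangs forever.  Neither component is at fault, and neither is malevolent.  Together, they simply stop.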

Could our civilization fail this way? Consider the recent meltdown in the financial industry.  Extremely sophisticated business arrangements intended to manage risk did just the opposite.  Most of the people who were part of this system were making entirely rational decisions within their own scope of operations.  Very few of these human circuits fully understood the deals they were making.  They did not know where the liabilities would land in a crisis.  Those who sounded the alarm, of course, had too little influence to prevent the inevitable.  So we crashed.  The resulting recession has fatal repercussions for the multitudes who lived, and now die, at the margins of our shrinking global economy.

Robert Frost wrote, “Some say the world will end in fire.  Some say in ice.”  I say it will end in a hard system crash.  The likely absence of killer robots will do nothing to lessen the tragedy.

Our broad civilizational course is already set.  We will become increasingly dependent on systems of escalating complexity.  These systems will interact more frequently in ways we cannot predict, understand, or prevent.  Nuclear holocaust or killer robots might or might not be part of the story.  But in the hard crash of my nightmares, reboot will be impossible, because we will have become like the electrons in a microprocessor.  Will we even know if we’ve been shunted into an infinite loop or out of the chip entirely?

What is to be done?

Unlike certain Hollywood heroes and villains, we won’t be able to go back in time after we’ve found ourselves in checkmate.  We must win the game now, while positive outcomes are still possible.  It’s hard, because we don’t understand the kind of chess we’re playing, and the rules keep changing.

Just as only light can dispel darkness, only intelligence can dispel confusion.  Complexity must be made comprehensible.  Brains must meet bafflement in battle, and brains must win.

Humans, alas, aren’t getting fundamentally smarter.  We can’t seem to keep on top of our own progress, and we’re falling farther behind every day.

I therefore submit, in an irony Hollywood would understand perfectly, that the solution may well be artificial intelligence.

No, I’m not talking about fighting killer robots with killer robots.  I’m talking about leveling the playing field between moral humanity and amoral complexity.  I’m proposing, as others have done, that we figure out how to make greater intelligence possible in the near term, and figure out how to keep it on our side.  It must not misinterpret our goals.  Our morals must be its morals.

Let’s not kid ourselves.  Deliberate creation of artificial intelligence carries enormous risks.  These risks must be discovered and mitigated with all the zeal of a civilization facing its own extinction.

Still, in the confounding complexities of our own future, artificial intelligence gone amok is just one among many perils.  And, done right, it might just be our salvation.