Is Artificial Intelligence a Threat?

A Chronicle article (“Is Artificial Intelligence a Threat?”) focuses on the thought experiments of Nick Bostrom, who directs the Future of Humanity Institute at the University of Oxford.

When the world ends, it may not be by fire or ice or an evil robot overlord. Our demise may come at the hands of a superintelligence that just wants more paper clips.

So says Nick Bostrom, a philosopher who founded and directs the Future of Humanity Institute, in the Oxford Martin School at the University of Oxford. He created the “paper-clip maximizer” thought experiment to expose flaws in how we conceive of superintelligence. We anthropomorphize such machines as particularly clever math nerds, says Bostrom, whose book Superintelligence: Paths, Dangers, Strategies was released in Britain in July and arrived stateside this month. Spurred by science fiction and pop culture, we assume that the main superintelligence-gone-wrong scenario features a hostile organization programming software to conquer the world. But those assumptions fundamentally misunderstand the nature of superintelligence: The dangers come not necessarily from evil motives, says Bostrom, but from a powerful, wholly nonhuman agent that lacks common sense.

Imagine a machine programmed with the seemingly harmless, and ethically neutral, goal of getting as many paper clips as possible. First it collects them. Then, realizing that it could get more clips if it were smarter, it tries to improve its own algorithm to maximize computing power and collecting abilities. Unrestrained, its power grows by leaps and bounds, until it will do anything to reach its goal: collect paper clips, yes, but also buy paper clips, steal paper clips, perhaps transform all of Earth into a paper-clip factory. “Harmless” goal, bad programming, end of the human race…
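The logic of the thought experiment fits in a few lines of code. Here is a toy Python sketch (my own illustration, not Bostrom’s formalism): the agent’s only score is its clip count, so it “rationally” spends its early steps making itself a better collector and only then starts harvesting. The unsettling part is what is absent: nothing in the objective says anything about leaving the rest of the world intact.

```python
# Toy "paper-clip maximizer" sketch. The numbers and the one-step lookahead
# are my own illustrative assumptions, not anything from Bostrom's book.

def paperclip_maximizer(steps: int = 10, improvement_factor: float = 1.5) -> float:
    capability = 1.0   # clips collected per step at the current level of ability
    clips = 0.0
    for step in range(steps):
        remaining = steps - step - 1
        # Crude lookahead: is "improve now, collect later at the higher rate"
        # worth more clips than "collect now and keep collecting as-is"?
        if capability * improvement_factor * remaining > capability * (remaining + 1):
            capability *= improvement_factor      # self-improvement compounds
        else:
            clips += capability                   # harvest at the current rate
        print(f"step {step}: capability={capability:.2f}, clips={clips:.2f}")
    return clips

if __name__ == "__main__":
    paperclip_maximizer()
```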

Bostrom has surveyed his peers:

According to Bostrom, combined results from four surveys show that experts believe human-level machine intelligence will almost certainly be achieved within the next century. Although his own predictions are more cautious, some surveys predict a 50-percent chance of machines with human-level intelligence by 2040, and a 10-percent chance within the next decade. From that milestone, it’s not a far leap to superintelligence.

“Once you reach a certain level of machine intelligence, and the machine becomes clever enough, it can start to apply its intelligence to itself and improve itself,” says Bostrom, who calls the phenomenon “seed AI” or “recursively self-improving AI.”

If that self-improvement happens within a matter of hours or days, in what is called a “hard takeoff,” people will be helpless in its wake, unable to anticipate what might happen next. It’s like the story of the genie who grants three wishes, but never quite in the way the wisher intends, says Stuart Russell, a computer scientist at the University of California at Berkeley. “If what you have is a system that carries out your instructions to the letter, you’ve got to be extremely careful on what you state. Humans come with all kinds of common sense, but a superintelligence has none.”
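To see why “hours or days” matters, here is a back-of-the-envelope Python sketch; the growth rate and cycle times are invented numbers of mine, not figures from Bostrom or Russell. If each self-improvement cycle multiplies capability, and a smarter system finishes its next cycle faster, the climb from human-level to thousands of times human-level fits inside a long weekend, with no obvious moment at which to pull the plug.

```python
# Back-of-the-envelope "hard takeoff" sketch with assumed, illustrative numbers.

def hard_takeoff(cycles: int = 50,
                 gain_per_cycle: float = 1.2,
                 first_cycle_hours: float = 10.0) -> None:
    capability = 1.0    # 1.0 = roughly human-level
    elapsed = 0.0       # hours since reaching human level
    for cycle in range(1, cycles + 1):
        elapsed += first_cycle_hours / capability   # smarter systems iterate faster
        capability *= gain_per_cycle                # each cycle compounds ability
        if cycle % 10 == 0:
            print(f"cycle {cycle:2d}: {elapsed:5.1f} hours in, capability ~{capability:,.0f}x")

if __name__ == "__main__":
    hard_takeoff()
    # With these assumed numbers, capability passes roughly 9,000x the starting
    # level while total elapsed time stays under about 60 hours.
```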

As I’ve noted previously, what we’re talking about is the possibility of something like a nanotechnological weapon of mass destruction. Unfortunately, I fear that some variant of this will come to pass. It could take myriad forms: a deliberate, synthetic contamination of livestock or some other link in the food supply chain; the creation of an immune-resistant bacterium or virus that wipes out plant, animal, or human life; and so on. All that is required is:

  1. Sufficient Applied Science: Is it physically possible to create the grey goo? If yes, then check off this box.
  2. Availability of Knowledge Base: Is the know-how to build the grey goo widely available? The ill-founded decision to publish the genome of the 1918 flu virus is a harbinger. If the objective science allows it, and the Anarchist Cookbook is now available on every Muslim’s cellphone, we’re on our way…
  3. Economies of Scale: Production costs are a limiting parameter. But the arc of capitalism bends toward lower costs, and with Arabs sitting on vast reservoirs of oil, providing a medieval religious mindset with billions of dollars its culture would otherwise never possess, this is increasingly possible. Fifty years from now, imagine a Kickstarter-type mechanism for eschatologically oriented Muslims to fund the nanotechnological destruction of the West.
  4. Sufficient Will: Are there individuals, or even groups, with the will to initiate this destruction? Are there enough actors willing to intentionally destroy the world (or just the West, for that matter)? What do you think?