When Robots Can Decide Whether You Live or Die

Computers have gotten pretty good at making certain decisions for themselves. Automatic spam filters block most unwanted email. Some US clinics use artificial-intelligence-powered cameras to flag diabetes patients at risk of blindness. But can a machine ever be trusted to decide whether to kill a human being?

It’s a question taken up by the eighth episode of the Sleepwalkers podcast, which examines the AI revolution. Recent, rapid growth in the power of AI technology is causing some military experts to worry about a new generation of lethal weapons capable of independent and often opaque actions.

“We’re moving to a world where machines may be making some of the most important decisions on the battlefield about who lives and dies,” says Paul Scharre, director of the technology and national security program at the Center for a New American Security, a bipartisan think tank.

It may seem shocking to imagine machines that decide when to deploy lethal force, but Scharre says that Rubicon has effectively already been crossed. Israel’s Harpy drone, which has been sold to China, India, and South Korea, can automatically search out enemy radars and attack them without human permission.

Battlefield machines that can make decisions for themselves are poised to become more common, because the Pentagon and the militaries of rival superpowers like China and Russia have all placed artificial intelligence at the center of their strategies for future conflicts.

Arati Prabhakar has helped fuel the Pentagon's interest in AI as the former director of its research agency, Darpa. She's also acutely aware of the limitations of existing AI technology, such as the fact that it can't explain its decisions the way a person can.

Prabhakar recounts how Stanford researchers developed software to describe the content of images. In testing, the software displayed impressive accuracy, but when asked to interpret a photo of a baby holding an electric toothbrush, it saw a small boy with a baseball bat.

“When you look inside to say ‘Well what went wrong there?’ they’re really opaque,” Prabhakar says of such image-recognition algorithms. That’s a much bigger problem if you’re relying on the technology to decide who or what to point lethal weapons at.

Such difficulties have made some people working on AI more wary of the ethical consequences of what they build. “We have the enormous privilege that we get to work on powerful technologies that can shape the progress of our societies—that comes with the responsibility to ask what could possibly go wrong,” Prabhakar says.

War is an unpredictable business, so engineers are unlikely to foresee all the possible ways that military AI systems could go wrong. Richard Danzig, a former secretary of the Navy, says that new forms of international cooperation are needed to rein in AI risks—just as prior military innovations like landmines and nuclear weapons led to new treaties. “We need a common understanding about how to reduce these risks,” he says. “Then we need some joint planning for the contingency that these do escape.”

