In July 2015, two founders of DeepMind, a division of Alphabet with a reputation for pushing the boundaries of artificial intelligence, were among the first to sign an open letter urging the world’s governments to ban work on lethal AI weapons. Notable signatories included Stephen Hawking, Elon Musk, and Jack Dorsey.
Last week, a technique popularized by DeepMind was adapted to control an autonomous F-16 fighter plane in a Pentagon-funded contest to show off the capabilities of AI systems. In the final stage of the event, a similar algorithm went head-to-head with a real F-16 pilot, who flew using a VR headset and simulator controls. The AI pilot won, 5-0.
The episode reveals DeepMind caught between two conflicting desires. The company doesn’t want its technology used to kill people. On the other hand, publishing research and source code helps advance the field of AI and lets others build upon its results. But that also allows others to use and adapt the code for their own purposes.
A DeepMind spokesperson says society needs to debate what is acceptable when it comes to AI weapons. “The establishment of shared norms around responsible use of AI is crucial,” she says. DeepMind has a team that assesses the potential impacts of its research, and the company does not always release the code behind its advances. “We take a thoughtful and responsible approach to what we publish,” the spokesperson adds.
The AlphaDogfight contest, coordinated by the Defense Advanced Research Projects Agency (Darpa), shows the potential for AI to take on mission-critical military tasks that were once exclusively done by humans. It might be impossible to write a conventional computer program with the skill and adaptability of a trained fighter pilot, but an AI program can acquire such abilities through machine learning.
“The technology is developing much faster than the military-political discussion is going,” says Max Tegmark, a professor at MIT and cofounder of the Future of Life Institute, the organization behind the 2015 letter opposing AI weapons.
The US and other countries are rushing to embrace the technology before adversaries can, and some experts say it will be difficult to prevent nations from crossing the line to full autonomy. It may also prove challenging for AI researchers to balance the principles of open scientific research with potential military uses of their ideas and code.
Without an international agreement restricting the development of lethal AI weapons systems, Tegmark says, America’s adversaries are free to develop AI systems that can kill. “We’re heading now, by default, to the worst possible outcome,” he says.
US military leaders—and the organizers of the AlphaDogfight contest—say they have no desire to let machines make life-and-death decisions on the battlefield. The Pentagon has long resisted giving automated systems the ability to decide when to fire on a target independent of human control, and a Department of Defense Directive explicitly requires human oversight of autonomous weapons systems.
But the dogfight contest shows a technological trajectory that may make it difficult to limit the capabilities of autonomous weapons systems in practice. An aircraft controlled by an algorithm can operate with speed and precision that exceeds even the most elite top-gun pilot. Such technology may end up in swarms of autonomous aircraft. The only way to defend against such systems would be to use autonomous weapons that operate at similar speed.
“One wonders if the vision of a rapid, overwhelming, swarm-like robotics technology is really consistent with a human being in the loop,” says Ryan Calo, a professor at the University of Washington. “There’s tension between meaningful human control and some of the advantages that artificial intelligence confers in military conflicts.”
AI is moving quickly into the military arena. The Pentagon has courted tech companies and engineers in recent years, aware that the latest advances are more likely to come from Silicon Valley than from conventional defense contractors. This has produced controversy, most notably when employees of Google, another Alphabet company, protested an Air Force contract to provide AI for analyzing aerial imagery. But AI concepts and tools that are released openly can also be repurposed for military ends.
DeepMind released details and code for a groundbreaking AI algorithm only a few months before the anti-AI weapons letter was issued in 2015. The algorithm used a technique called reinforcement learning to play a range of Atari video games with superhuman skill. It attains expertise through repeated experimentation, gradually learning what maneuvers lead to higher scores. Several companies participating in AlphaDogfight used the same idea.
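DeepMind's Atari system paired this idea with a deep neural network; the underlying loop can be illustrated with plain tabular Q-learning on a toy task. The sketch below is a simplified stand-in for illustration, not DeepMind's actual algorithm: the agent starts knowing nothing, tries actions, and gradually reinforces the moves that lead to reward.

```python
import random

# Tabular Q-learning on a toy task: an agent moves left or right along a
# line of 5 cells and is rewarded for reaching the goal cell at position 4.
# This is a simplified stand-in for DeepMind's deep Q-network, but the core
# idea is the same: learn by trial and error which actions lead to higher
# scores, with no hand-written rules about how to play.

N_STATES = 5          # positions 0..4; position 4 is the goal
ACTIONS = [-1, +1]    # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3

# Q-table: estimated long-term value of taking each action in each state.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; reward 1.0 for reaching the goal, else 0."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

random.seed(0)
for episode in range(300):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward = step(state, action)
        # Q-learning update: nudge the estimate toward the observed reward
        # plus the discounted value of the best follow-up action.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# After training, the greedy policy heads toward the goal from every state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

Swapping the lookup table for a neural network and the toy task for raw game frames gives the flavor of the Atari work, and of the dogfighting agents that inherited the approach.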
DeepMind has released other code with potential military applications. In January 2019, the company released details of a reinforcement learning algorithm capable of playing StarCraft II, a sprawling space strategy game. Another Darpa project called Gamebreaker encourages entrants to generate new AI war-game strategies using StarCraft II and other games.
Other companies and research labs have produced ideas and tools that may be harnessed for military AI. A reinforcement learning technique released in 2017 by OpenAI, another AI company, inspired the design of several of the agents involved with AlphaDogfight. OpenAI was founded by Silicon Valley luminaries including Musk and Sam Altman to “avoid enabling uses of AI … that harm humanity,” and the company has contributed to research highlighting the dangers of AI weapons. OpenAI declined to comment.
Some AI researchers feel they are simply developing general-purpose tools. But others are increasingly worried about how their research may end up being used.
“At the moment I’m deep in a crossroads in my career, trying to figure out whether ML can do more good than bad,” says Julien Cornebise, an associate professor at University College London who previously worked at DeepMind and ElementAI, a Canadian AI firm.
Cornebise also worked on a project with Amnesty International that used AI to detect villages destroyed in the Darfur conflict from satellite imagery. He and the other researchers involved chose not to release their code for fear that it could be used to target vulnerable villages.
Calo of the University of Washington says it will be increasingly important for companies to be upfront with their own researchers about how their code might be released. “They need to have the capacity to opt out of projects that offend their sensibilities,” he says.
It may prove difficult to deploy the algorithms used in the Darpa contest in real aircraft, since the simulated environment is so much simpler. There is also still much to be said for a human pilot’s ability to understand context and apply common sense when faced with a new challenge.
Still, the death match showed the potential of AI. After many rounds of virtual combat, the AlphaDogfight contest was won by Heron Systems, a small AI-focused defense company based in California. Heron developed its own reinforcement learning algorithm from scratch.
In the final matchup, a US Air Force fighter pilot with the call sign “Banger” engaged with Heron’s program using a VR headset and a set of controls similar to those inside a real F-16.
In the first battle, Banger banked aggressively in an attempt to bring his adversary into sight and range. But the simulated enemy turned just as fast, and the two planes became locked in a downward spiral, each trying to zero in on the other. After a few turns, Banger’s opponent timed a long-distance shot perfectly, and Banger’s F-16 was hit and destroyed. Four more dogfights between the two opponents ended roughly the same way.
Brett Darcey, vice president of Heron, says his company hopes the technology eventually finds its way into real military hardware. But he also thinks the ethics of such systems are worth discussing. “I would love to live in a world where we have a polite discussion over whether or not the system should ever exist,” he says. “If the United States doesn’t adopt these technologies somebody else will.”