Google decided after an employee backlash this summer that it no longer wanted to help the U.S. military craft artificial intelligence to help analyze drone footage. Now, the military is inviting companies and researchers across the country to become more involved in machine learning. The firms should accept the invitation.
The Defense Department’s Defense Advanced Research Projects Agency will invest up to $2 billion over the next five years in artificial intelligence, a significant increase for the agency, whose mission is promoting innovative research. The influx suggests the United States is preparing to sprint in an arms race against China. It gives companies and researchers who want to see a safer world an opportunity not only to contribute to national security but also to ensure a more ethical future for AI.
The DARPA contracts will focus on helping machines operate in complex real-world scenarios. They will also tackle one of the central conundrums in AI: something insiders like to call “explainability.” Right now, why algorithms return the results and reach the decisions they do is something of a black box. That’s worrying enough when it comes to policing posts on a social media site, but it is far scarier when lives are at stake. Military commanders are more likely to trust artificial intelligence if they know what it is “thinking,” and the better any of us understands a technology, the more responsibly we can use it.
There is a strong defense imperative to make AI the best it can be, whether to deter other countries from using their own machine-learning capabilities to target the United States, or to ensure the United States can effectively counter them when they do. Smarter technologies, such as improved target recognition, can save civilian lives, and allowing machines to perform some tasks instead of humans can protect service members.
But patriotism is not the only reason companies should want to participate. They know better than most in government the potential these technologies have to help and to harm, and they can leverage that knowledge to maximize the former and minimize the latter. Because DARPA contracts are public, the work researchers do will be transparent in a way that Project Maven, the program that caused so much controversy at Google, was not. Employees aware of what their companies are working on can exert influence over how those innovations are used, and the public can chime in as well.
DARPA contractors will probably develop products with nonlethal applications, such as improved self-driving vehicles for convoys and autopilot programs for aircraft. But the killer robots that have many people worried are not outside the realm of technological possibility. Charting the future of AI will require principles that distinguish what is possible from what is right. If the best minds refuse to contribute, worse ones will.
This editorial first appeared in The Washington Post.