We Need a Strategy For Autonomous Weapons
AI policy and technology go hand in hand. Nowhere is that connection clearer than in the debate over autonomous weapons.
Artificial intelligence is too profound a development and too gigantic a shift to merely be a story about technology alone.
AI certainly represents the epitome of our technical prowess, but its implications are far-reaching and go significantly beyond what we would expect solely from a technological development.
In our paper, originally published in the US Naval Institute's Proceedings magazine, Gen. John Allen and I made the case that the application of AI on the battlefield will give rise to what we call "hyperwar."
In that same paper, we stressed that AI would not just drive changes in the way war is waged, but would have profound implications for our national economy, industry, and even social stability.
In November 2017, the United Nations Convention on Conventional Weapons (CCW) met in Geneva. Ambassadors and representatives from over 100 countries debated — not for the first time — what controls should be put in place to ward off the threat of autonomous weapons.
Unfortunately, the attendees could not even agree on a definition of what constitutes an autonomous weapon. With no consensus at hand, further discussions were pushed to the following year.
Meanwhile, developments in Russia and in China indicate that artificial intelligence will make its way into weapon systems very quickly.
At the current pace, these countries will likely deploy operational AI capabilities before UN ambassadors can even agree on what an autonomous weapon is.
De Facto is Outpacing De Jure
For the last couple of years, we at SparkCognition have been arguing that bans provide little comfort.
In my extensive national and international travels, I have personally met with four-star generals from NATO member states, leaders of allied nations, ambassadors, and policymakers. Members of SparkCognition's senior leadership team have collectively briefed senior officers in the Pentagon and high-level representatives from allied states.
Why are we spending our time doing all this?
Because we understand what AI means to the world. We know that if we are to truly lead in the AI-powered world of the future, we cannot bury our heads in the sand. We cannot avoid the political, policy, economic, and ethical aspects of AI.
As a nation, we must not shy away from what seems difficult; only then can we make a meaningful difference. The time for policymakers, political leaders, and military commanders to understand artificial intelligence is now.
Securing the Future
If we want to secure future elections, prevent AI-powered mind hacking campaigns that create social instability, and ward off cyber-physical attacks against our infrastructure, then we must act now.
We must develop the policies that will keep us at the forefront of AI research. We must apply the benefits of this technology to improve efficiency, safety, and economic growth while protecting those impacted by automation.
The time to act is now.
We cannot hope that bans will protect us in the future. Nor can we assume that, even if a ban were in place, every signatory would respect it.
Hope is not a strategy. We need hard work and we need realism.