Why it doesn’t make sense to ban autonomous weapons

In May 2019, the Defense Advanced Research Projects Agency (DARPA) declared, “No AI currently exists that can outduel a human strapped into a fighter jet in a high-speed, high-G dogfight.”

Fast forward to August 2020, when an AI built by Heron Systems flawlessly beat a top fighter pilot 5 to 0 at DARPA’s AlphaDogfight Trials. Time and time again, Heron’s AI outmaneuvered the human pilot, pushing the boundaries of g-forces with unconventional tactics, lightning-fast decision-making, and deadly accuracy.

Then-US Defense Secretary Mark Esper announced in September 2020 that the Air Combat Evolution (ACE) program will deliver AI to the cockpit by 2024. The program is very clear that the goal is to “assist” pilots rather than to “replace” them. It is difficult to imagine, however, how a human could reliably be kept in the loop in the heat of battle against other AI-enabled platforms, when humans are simply not fast enough.

On Tuesday, January 26, the National Security Commission on Artificial Intelligence met and recommended against banning AI for such applications. In fact, Vice Chairman Robert Work stated that AI could make fewer mistakes than its human counterparts. The Commission’s recommendations, which are expected to be delivered to Congress in March, stand in direct opposition to the Campaign to Stop Killer Robots, a coalition of 30 countries and numerous non-governmental organizations that has been advocating against autonomous weapons since 2013.

There are seemingly plenty of sound reasons to support a ban on autonomous weapon systems, including the risk of a destabilizing military advantage. The problem is that AI development cannot be stopped. Unlike nuclear enrichment facilities and fissile material, which are visible and can be restricted, AI development is nearly impossible to police. Further, the same AI advancements used to transform smart cities can easily be repurposed to increase the effectiveness of military systems. In other words, this technology will be available to aggressively postured countries that will embrace it in pursuit of military dominance, whether we like it or not.

So, we know these AI systems are coming. We also know that no one can guarantee that humans will remain in the loop in the heat of battle, and as Robert Work argues, we may not even want them to. Whether seen as a deterrence model or as fueling a security dilemma, the reality is that the AI arms race has already begun.

“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that.” — Elon Musk

As with most technology innovations whose possible unintended consequences start to give us pause, the answer is almost never an outright ban, but rather to ensure that their use is “acceptable” and “protected.” As Elon Musk suggests, we should indeed be very careful.

Acceptable use

As with facial recognition, which is under immense scrutiny and facing a growing number of bans across the U.S., the problem is not the technology itself but how it is used. We must define the circumstances in which such systems can be used and those in which they cannot. For example, no modern-day police agency would ever get away with showing a victim a single suspect photograph and asking, “Is this the person you saw?” It is similarly unacceptable to use facial recognition to blindly identify potential suspects (not to mention the bias of such technologies across different ethnicities, which goes well beyond AI training data limitations to the camera sensors themselves).

[Image: automated license plate reader]