Will algorithms commit war crimes?


Today, algorithms may come in charming shapes, such as Sophia, a robot with a lovely attitude and an enlightened philosophy.

Others, like Atlas, are being built to look like RoboCop-style brutes that can run, jump, and, maybe, shoot. Why not?

Regardless of how we civilians feel about it, Artificial Intelligence (AI) has entered the armaments industry. Militaries around the world are testing electronic command and training systems, object recognition techniques, and drone management algorithms that supply them with millions of photographs and other valuable data. Already, the decision to use an offensive weapon is frequently made by a machine, with humans left to decide only whether or not to pull the trigger. In the case of defensive weapons, machines often decide autonomously to engage, without any human involvement at all.


All of which is scary. Will AI weapons soon be able to launch military operations independently of human input?

It’s clear that the military is developing smart technologies. After all, we owe many of the innovations we know from civilian applications to military R&D, including the internet itself (which began as ARPANET), email, and autonomous vehicles, all developed by the U.S. Defense Advanced Research Projects Agency (DARPA). But today, modern weaponry relying on machine learning and deep learning can achieve a worrisome degree of autonomy, even though the military officially claims that no contemporary armaments are fully autonomous. It does admit, however, that a growing proportion of arsenals meet the technological criteria for becoming fully autonomous. In other words, it is not a question of whether weapons will be able to act without human supervision; it is a matter of when, and of whether we allow them to choose targets and carry out attacks on their own.

There is another consideration worth noting here. While systems are still designed to leave the final decision to human beings, the reaction time required once the weapon has analyzed the data and chosen a target is frequently so short that it precludes reflection. With half a second to decide whether or not to pull the trigger, it is difficult to describe the humans in those situations as making a genuinely independent decision.

Thinking weapons around the world

Human Rights Watch, which has called for a ban on “killer robots,” estimates that at least 380 types of military equipment employing sophisticated smart technology are in operation in China, Russia, France, Israel, the UK, and the United States. Much publicity has recently focused on Hanwha, one of the largest weapons manufacturers in South Korea. The Korea Times, calling such weapons the “third revolution in the battleground after gunpowder and nuclear weapons,” has reported that, together with the Korea Advanced Institute of Science and Technology (KAIST), Hanwha is developing missiles that can control their speed and altitude and change course without direct human intervention. In another example, SGR-A1 guns placed along the demilitarized zone between South and North Korea are reportedly capable of operating autonomously (although programmers say they cannot fire without human authorization).

The Korean company Dodaam Systems makes autonomous robots capable of detecting targets many kilometers away. The UK, meanwhile, has been intensively testing the unmanned Taranis drone, which is expected to reach full capability by 2030 and to replace human-operated aircraft. Last year, the Russian government’s TASS news agency reported that Russian combat aircraft will soon be fitted with autonomous missiles capable of analyzing a situation and making independent decisions about altitude, velocity, and flight direction. And China, which aspires to become a leader in the AI field, is working hard to develop drones (especially ones operating in so-called swarms) that can carry autonomous missiles able to detect targets independently of humans.

A new bullet

Since 2016, the U.S. Department of Defense has been building an artificial intelligence development center. According to the program’s leaders, progress in the field will change the way wars are fought. Former U.S. Deputy Secretary of Defense Robert O. Work has claimed that the military will not hand power over to machines, but he has also acknowledged that if other militaries do, the United States may be forced to consider it. For now, the department has made a broad, multi-billion-dollar AI development program core to its strategy and is testing state-of-the-art remotely controlled equipment such as the Extreme Accuracy Tasked Ordnance (EXACTO), a .50-caliber bullet that can acquire targets and change its path “to compensate for any factors that may drive it off course.”

According to experts, unmanned aircraft will replace piloted aircraft within a matter of years. These drones can be refueled in flight, carry out missions against anti-aircraft forces, fly reconnaissance, and attack ground targets. Going pilotless will also reduce costs considerably, since the pilot safety systems in a modern fighter aircraft can account for as much as 25% of the whole combat platform.

Small, but deadly

Work is currently under way to tap the potential of so-called insect robots, a specific form of nanobot that, according to the American physicist Louis Del Monte, author of the book Nanoweapons: A Growing Threat to Humanity, may become a weapon of mass destruction. Del Monte argues that insect-like nanobots can be programmed to inject toxins into people and to poison water-supply systems. DARPA’s Fast Lightweight Autonomy program involves the development of housefly-sized drones equipped with “advanced autonomy algorithms,” ideal for spying. France, the Netherlands, and Israel are also reportedly working on intelligence-gathering insect drones.

The limits of NGO monitoring and what needs to happen now

Politicians, experts, and the IT industry as a whole are realizing that the autonomous weapons problem is quite real. According to Mary Wareham of Human Rights Watch, the United States should “commit to negotiate a legally binding ban treaty [to]… draw the boundaries of future autonomy in weapon systems.” Meanwhile, the UK-based NGO Article 36 has devoted a lot of attention to autonomous ordnance, arguing that political control over such weapons should be regulated and based on a publicly accessible, transparent protocol. Both organizations have put considerable effort into developing clear definitions of autonomous weapons. The signatories of international petitions keep trying to reach politicians and to present their views at international conferences. One of the most recent initiatives is this year’s letter coordinated by the Boston-based Future of Life Institute, in which 160 AI companies from 36 countries, along with 2,400 individuals, declared that autonomous weapons “pose a clear threat to every country in the world” and that they would therefore refrain from contributing to their development. The document was signed by, among others, Demis Hassabis, Stuart Russell, Yoshua Bengio, Anca Dragan, Toby Walsh, and the founder of Tesla and SpaceX, Elon Musk.

However, until an open international conflict reveals what technologies are actually in use, keeping track of the weapons being developed, researched, and deployed is next to impossible.

Another obstacle to developing clear, binding standards and producing useful findings is the nature of algorithms themselves. We think of weapons as material objects (whose use may or may not be banned), but it is much harder to write laws that cope with the development of the code behind software, algorithms, neural networks, and AI.

Another problem is accountability. As with self-driving vehicles, who should be held responsible should tragedy strike? The programmer who writes the code that allows devices to make independent choices? The person who trains the neural network? The manufacturer?

Military professionals who lobby for the most advanced autonomous projects argue that instead of imposing bans, we should encourage innovations that reduce the number of civilian casualties. The point, they claim, is not that algorithms have the potential to destroy enemy forces and civilian populations. Rather, their prime objective is to use these technologies to better assess battlefield situations, find tactical advantages, and reduce overall casualties, civilian ones included. In other words, it is about improved and more efficient data processing.

However, the algorithms unleashed on tomorrow’s battlefields may cause tragedies of unprecedented proportions. Toby Walsh, a professor of AI at the University of New South Wales in Australia, warns that autonomous weapons will “follow any orders however evil” and “industrialize war.”

Artificial intelligence has the potential to help a great many people. Regrettably, it also has the potential to do great harm. Politicians and generals need to collect enough information to understand all the consequences of the spread of autonomous ordnance.

Works cited

YouTube, BrainBar, My Greatest Weakness is Curiosity: Sophia the Robot at Brain Bar, link, 2018.

YouTube, Boston Dynamics, Getting some air, Atlas?, link, 2018.

The Guardian, Ben Tarnoff, Weaponised AI is coming. Are algorithmic forever wars our future?, link, 2018.

Brookings, Michael E. O’Hanlon, Forecasting change in military technology, 2020–2040, link, 2018.

Russell Christian/Human Rights Watch, Heed the Call: A Moral and Legal Imperative to Ban Killer Robots, link, 2018.

The Korea Times, Jun Ji-hye, Hanwha, KAIST to develop AI weapons, link, 2018.

BAE Systems, Taranis, link, 2018.

DARPA, Faster, Lighter, Smarter: DARPA Gives Small Autonomous Systems a Tech Boost, link, 2018.

The Verge, Matt Stroud, The Pentagon is getting serious about AI weapons, link, 2018.

The Guardian, Mattha Busby, Killer robots: pressure builds for ban as governments meet, link, 2018.
