
Putting the International Spotlight on Killer Robots

BY Carola Frediani | Tuesday, December 3, 2013

Campaigning in London to create a worldwide ban on killer robots (image: Stop Killer Robots/flickr)

Imagine an unmanned robot surveying enemy territory and deciding, based on algorithms rather than human control, when it should and shouldn't drop a bomb or release a cascade of bullets. These "killer robots," once a topic restricted to an elite group of scientists, military analysts and visionary science writers, have now reached a global audience through the Campaign to Stop Killer Robots, a movement that, very strikingly, seeks to preemptively ban them. Most weapon bans are reactive, coming only after a weapon has exacted a massive toll.

On November 15, the United Nations agreed to discuss a global ban on killer robots, also known as fully autonomous weapons, at a meeting of the Convention on Conventional Weapons (CCW) in Geneva. It is the first step towards creating an international consensus on outlawing all robotic weapons systems that can operate on the battlefield without human control.

But what exactly is a killer robot? Forget Schwarzenegger-Terminator. Think instead of data-crunching computers. “Killer robots are machines to which we give the ability to make a life or death decision,” Daniel Suarez explained to techPresident. He is a former information technology consultant turned bestselling author whose latest thriller, Kill Decision, depicts with frightening accuracy what warfare looks like with fully autonomous weapons. “For instance, a predator drone still has a human operator, even if you know he is thousands of miles away from the weapon. There's a human being who is pulling the trigger. Killer robots are about algorithmic killing. They are machines using sensors and algorithms to determine when to open fire.” In October 2012, he gave the keynote speech at New York University School of Law's Drone & Aerial Robotics Conference and explained how "increasingly, war is a data-driven enterprise."

Daniel Suarez giving a keynote speech at New York University School of Law's Drone & Aerial Robotics Conference. "Increasingly, war is a data-driven enterprise," he said. (image: Screenshot of the conference on Youtube)

Nobel Peace laureate Jody Williams, of the Nobel Women’s Initiative, who also spearheaded the International Campaign to Ban Landmines (ICBL), thinks very much along the same lines. “Of course many have big issues about the morality and ethics of drones but at least in that case there's a human being who decides to kill another human being. With completely autonomous weapons there would be no human being, at all,” she wrote to techPresident in an email. “Countries are working to create machines that can make a decision to kill me, to kill you, to kill everybody... I think this is much more ethically and morally appalling, and so we took the decision to create the new 'Stop Killer Robots' campaign in order to stop this, and to inform people and even members of governments of what's going on. Because very few know about the situation.”

Williams, who in 1997 won the Nobel Peace Prize for her work on ICBL, is now at the forefront of the global struggle to ban killer robots. The Campaign to Stop Killer Robots, a global coalition of 45 non-governmental organizations in 22 countries, calls for a preemptive and comprehensive ban on the development, production, and use of fully autonomous weapons. “We are far more strong as a civil society working for disarmament because we learned to work together through the experience of campaigns such as ICBL. And we learned also that the threat of arms proliferation -- particularly in regards to problematic weapons such as landmines, cluster bombs, nuclear weapons and now killer robots -- is a global one. We have to face it globally and not be happy with small national actions,” explained Williams.

According to a UN report, several nations with high-tech military capabilities, including China, Israel, Russia, the United Kingdom and the United States, are moving toward fully autonomous weapons. And as soon as one or more chooses to deploy them, a robotic arms race could take place.

“A skilled roboticist can put together a killer robot in a very short time,” said Suarez, who in his book Kill Decision describes a fictional scenario where automated drones identify enemies and make the decision to kill them. “So far what constrains them is that people are concerned about making errors, but from a technological point of view there’s no impediment from building them. There are already machines capable of lethal decisions. For instance, along the South Korea border, sniper stations have been deployed. There’s still a human being in the loop, but they could act independently.”

South Korea is using the Samsung Techwin SGR1 to patrol the Demilitarized Zone along the border with North Korea. It is a robot equipped with machine guns that uses infrared sensors to detect targets from up to two miles away. So far the machine has been under human control, simply alerting a command center if it spots a trespasser, but it has an automatic mode. It is no large stretch of the imagination to see how the human factor could be taken out of the picture.

Many other weapons systems are moving rapidly toward full autonomy. A standout example is the Phalanx Close-In Weapon System, produced by Raytheon and deployed by the US Navy: a rapid-fire, computer-controlled, radar-guided gun system that “automatically carries out functions usually performed by multiple systems - including search, detection, threat evaluation, tracking, engagement, and kill assessment.”

The US Navy test-fires the Phalanx close-in weapons system from the command ship USS Blue Ridge (image: Wikimedia)

At the same time, India is already working to develop robotic soldiers to be deployed in difficult warfare zones, like the Line of Control (LoC), the boundary that divides Kashmir into two regions, with one administered by India and one by Pakistan. Israel's unmanned aerial weapon Harpy, known as a "fire and forget" system, already operates independently as well; it detects unfriendly air defense radar systems and is programmed to destroy them automatically. Israel has already sold these devices to China, India and Turkey, among others.

An IAI Harpy antiradar loitering weapon at the 2007 International Paris Air Show (image: Wikimedia)

Aside from the obvious ethical issues of leaving life and death decisions to robots, lethal autonomous weapons may have important geopolitical implications as well, such as the question of who is held responsible for an attack. “What if everyone had them?” asks Suarez. “If there were machines that are almost impossible to trace back to their owners, you wouldn't even know who’s attacking you. Right now international stability is possible because countries are responsible for their actions.”

There are in fact many similarities between killer robots and cyber warfare, a domain where it is often difficult to backtrack and attribute attacks. “Cyber warfare is very cheap and relatively anonymous and highly scalable because you don’t need too many people to conduct it. It allows nations and other international actors to cost-effectively project their power across borders. Autonomous drones would follow the same trajectory except that the attacks would be carried out on real people in the real world, so it would be a physical incarnation of cyberwar," explains Suarez. That would also concentrate decision-making and military power into a very few unseen hands. “And this is the type of thing we need to avoid.”

That's why a number of human rights organizations decided it was time to come up with a legal framework to ban them. “A ban won’t make them completely disappear, but the countries that will deploy them will become outcast. They will not be within the standards of the international community,” says Suarez.

Even before the campaign's official beginning, NGOs and scientists were already working on bringing the subject to public attention. The campaign's success was mainly based on old-style activist lobbying: engaging directly with governments and their representatives, giving public talks, writing reports and documents, making alliances with different groups, and leveraging the concern of governments who were already sensitive to the topic. The campaign also established a presence online, asking supporters to spread the word through artwork, banners, stickers and short films.

The first call for a preemptive ban came one year ago through a report from Human Rights Watch and the Harvard Law School International Human Rights Clinic entitled “Losing Humanity: The Case against Killer Robots.” Then, in April 2013, the Campaign to Stop Killer Robots kicked off, issuing a call to action and releasing a number of statements to educate the public on killer robots.

Jody Williams in May 2010 (image: Wikimedia)

Endorsed by well-known figures such as Williams, campaigners started asking for support on the web and in international venues, finding key allies among scientists. In 2009, concerned researchers had already formed the International Committee for Robot Arms Control (ICRAC); by last month, over 270 engineers, computing and artificial intelligence experts, and roboticists had signed a call for a ban on Lethal Autonomous Robotics (LARs).

The campaign also struck a chord with the general public; according to a survey conducted by the University of Massachusetts, only ten percent of Americans firmly support the development of killer robots. Meanwhile, the strongest resistance to banning killer robots comes from those on the extreme ends of the political spectrum (the far right and the far left), as well as those who are highly educated and, most revealingly, members of the military.

“I think there have been three major breakthroughs,” Mary Wareham of Human Rights Watch and coordinator of the Campaign to Stop Killer Robots told techPresident. “First, the Losing Humanity report got much attention: people realized the issue was a concern.” A few days after the publication, in November 2012, the US Department of Defense issued its first-ever policy on autonomous weapons systems, requiring that a human always be “in-the-loop” when using lethal force. Then, just after the launch of the campaign, Christof Heyns, the United Nations special rapporteur on extrajudicial, summary or arbitrary executions, released a report calling for a global moratorium on such weapons.

At the end of May, the UN convened in Geneva to discuss the problems that have arisen as a result of lethal autonomous weapons technology. At the time, Heyns had explained, “War without reflection is mechanical slaughter." Finally, on November 15, governments attending the UN Convention on Conventional Weapons (CCW) – a framework treaty that addresses weapons such as incendiary devices and landmines – agreed to convene in Geneva in May 2014 to begin international discussions on lethal autonomous weapons systems.

The international discussion around banning killer robots is just the beginning of a process that could conclude in a treaty banning these weapons. Prohibiting fully autonomous weapons could become the next Convention on Conventional Weapons protocol. “They agreed to address the topic under this umbrella treaty. It doesn't happen very often that the CCW adds new works to its agenda and when it does it often results in a new international law,” says Wareham. “We are excited there has been an acknowledgement that killer robots are an issue that needs to be addressed. Last year no one was talking about it.”

Carola Frediani is an Italian journalist and co-founder of a media agency. She writes on new technology, digital culture and hacking for a variety of Italian publications, including L’Espresso and Corriere della Sera. She is the author of Inside Anonymous: A Journey into the World of Cyberactivism.

Personal Democracy Media is grateful to the Omidyar Network and the UN Foundation for their generous support of techPresident's WeGov section.
