Killer Robots: Not Just Science Fiction Anymore

Gort from The Day the Earth Stood Still (1951). 20th Century Fox.

Note: On July 2, 2024, The New York Times independently verified that Ukrainian forces have successfully deployed fully autonomous armed drones against Russian targets.

No matter what generation you belong to, you don’t have to look far to find a childhood movie or television villain in the form of a menacing robot bent on mayhem. As villains go, they are as convenient and ubiquitous to science fiction as zombies to horror and orcs or goblins to fantasy. They can be as powerful or as wimpy as you need them to be, they can look like pretty much anything, and the audience isn’t going to be upset when you mow or hack them down. A complete list would be exhausting, but The Day the Earth Stood Still (1951), 2001: A Space Odyssey (1968), The Terminator (1984), I, Robot (2004), and Avengers: Age of Ultron (2015) are all notable films that feature robots with varying degrees of autonomy as antagonists.

What about actual killer robots, though? As it happens, they are a popular topic of debate in the world of AI ethics. The idea that inevitably we would have autonomous robots on the battlefield or policing the streets has been around for a long time. The development of technologies such as image recognition, automated targeting and fire control systems, and autonomous vehicles has brought killer robots from purely imaginary to theoretical to possible. Many argue that just because it’s possible doesn’t mean we should be building them. Others argue that we should because they have the potential to keep soldiers out of harm’s way, reduce civilian casualties, and resolve conflicts more quickly.

Why are we taking the time to explore killer robots in an education blog? If you’re looking for an AI ethics topic that’s also a current event, killer robots can make for great classroom discussion or debate. While many AI controversies are murky or ambiguous, people tend to have an easier time deciding whether or not they are in favor of giving guns to robots. This makes it a great starting point for wading into AI ethics. Additionally, the recent use of drones in conflicts around the globe has brought this topic to the forefront of many people’s minds.

Defining Killer Robots

What is a killer robot, anyway? Formally, killer robots are referred to as Lethal Autonomous Weapon Systems (LAWS). The terms are not interchangeable, though; LAWS is an umbrella term that also covers some non-AI embodied weapon systems. For example, Israel Aerospace Industries’ Harpy is a fully autonomous drone that scans for radar signatures from guided anti-aircraft missile launchers. Developed in the 1980s, it does not use AI (at least the original versions did not); when a radar signature triggers its sensors, it attacks. The Harpy is also what is known as a “loitering munition,” which means that once launched, it flies in a set pattern for some length of time (in the case of the Harpy, up to nine hours).
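To make that distinction concrete for a classroom, here is a minimal Python sketch of the kind of signature-triggered logic a loitering munition like the Harpy is described as using. Everything in it (the function names detect_radar_signature and attack, the polling rate) is invented for illustration; it is not based on any actual Harpy software.

```python
import time

LOITER_LIMIT_HOURS = 9        # the Harpy reportedly loiters for up to nine hours
SCAN_INTERVAL_SECONDS = 1     # illustrative polling rate

def detect_radar_signature():
    """Hypothetical sensor check: returns a target fix if a hostile radar
    emission matching a stored signature is detected, otherwise None."""
    return None  # stub for illustration

def attack(target_fix):
    """Hypothetical terminal dive onto the emitting radar."""
    pass  # stub for illustration

def loiter_and_engage():
    """A simple trigger-based loop: no learning, no classification model,
    just 'if signature detected, attack' until the loiter window runs out."""
    start = time.time()
    while (time.time() - start) < LOITER_LIMIT_HOURS * 3600:
        target_fix = detect_radar_signature()
        if target_fix is not None:
            attack(target_fix)
            return
        time.sleep(SCAN_INTERVAL_SECONDS)
    # No target found within the loiter window: abort or ditch.
```

The point of the sketch is that nothing in this loop involves AI: the weapon simply reacts to a pre-programmed trigger, which is why a system like the Harpy counts as a LAWS without being a “killer robot” in the sense used below.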

Harvard Law professor and Human Rights Watch Senior Researcher Bonnie Docherty divides LAWS into three categories:

  • human-in-the-loop: LAWS can select targets and deliver force only with a human command.

  • human-on-the-loop: LAWS can select targets and deliver force under the oversight of a human operator who can override these actions.

  • human-out-of-the-loop: no human action is involved; LAWS is fully autonomous once deployed.

Killer robots would fall into the “human out of the loop” category, but some other LAWS do as well, such as the Harpy drone mentioned above. In simplest terms, killer robots are LAWS, but not every LAWS is a killer robot. For the purposes of this article, I am going to use the term “killer robot” to refer to a physical robot (stationary or mobile) capable of autonomous deployment: while it may be used with a human in or on the loop, it can also be deployed with humans out of the loop, where it uses AI to make its own targeting and fire control decisions.
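If you want a compact way to show students where the human sits in each of Docherty’s categories, here is a minimal, hypothetical Python sketch. The enum and the may_engage function are my own invention for illustration, not part of any real weapon system.

```python
from enum import Enum

class HumanRole(Enum):
    IN_THE_LOOP = "human-in-the-loop"          # a human must command each engagement
    ON_THE_LOOP = "human-on-the-loop"          # system acts; a human can override
    OUT_OF_THE_LOOP = "human-out-of-the-loop"  # fully autonomous once deployed

def may_engage(role: HumanRole, human_approved: bool, human_vetoed: bool) -> bool:
    """Illustrative gate on the decision to deliver force."""
    if role is HumanRole.IN_THE_LOOP:
        return human_approved      # force requires a positive human command
    if role is HumanRole.ON_THE_LOOP:
        return not human_vetoed    # force proceeds unless a human intervenes
    return True                    # out of the loop: no human action is involved

# A "killer robot" in this article's sense is a physical robot capable of
# operating in the OUT_OF_THE_LOOP mode, using AI for its targeting decisions.
print(may_engage(HumanRole.ON_THE_LOOP, human_approved=False, human_vetoed=False))  # True
```

The only thing that changes between categories is who, if anyone, can start or stop the use of force, which is exactly the distinction the taxonomy is meant to capture.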

What types of killer robots exist today?

The most common killer robots are unmanned aerial vehicles (UAVs), also known as drones, most often of the quadcopter variety, although there are also fixed-wing versions. While there are many companies making UAVs all around the world, we are going to focus on the two that have reportedly been deployed as “human out of the loop” killer robots. Turkish manufacturer STM makes a UAV called the Kargu, which is listed on their website as having “multiple warhead options,” typically some type of anti-personnel grenade or an explosive designed to destroy armored vehicles. The Kargu is also widely thought to be the first killer robot used against human targets. In Libya in 2020, during the ongoing conflict between the Haftar Affiliated Forces (HAF, also known as the Libyan National Army) and the (now dissolved) Government of National Accord (GNA), GNA forces reportedly used Kargu UAVs against the HAF. A UN Security Council report published after the fact stated:

“Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 (see annex 30) and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition.”

Many read this and assumed that the Kargu UAVs had attacked autonomously. However, note the wording: the report states that the systems do not require a data connection, not that they were confirmed to have operated that way.

The Kargu UAV, as pictured on the STM website

The Saker Scout, a Ukrainian quadcopter UAV, is similar to the Kargu. According to a company spokesman, the Saker was used in fully autonomous mode to carry out attacks against Russian targets in 2023, although this has not been independently verified.

The aquatic equivalents of UAVs are UUVs (Unmanned Underwater Vehicles) and USVs (Unmanned Surface Vehicles). The US Navy received its first Orca unmanned submarine in December 2023, which is designed to, among other things, lay mines. The Navy also formed its first unit dedicated to USVs, USVDIV-1, in May 2022. USVDIV-1 recently (January 2024) concluded a five-month naval exercise testing its four USVs. Throughout the exercise, the ships made errors that required human “interventions of concern” on average once every 42 hours. On the one hand, I am pretty impressed that a large robot ship can go 42 hours on the ocean by itself without any human help. On the other, the Navy definitely has a ways to go before cutting these things loose on their own.

The United States of course isn’t the only country investing in autonomous watercraft. In 2015, Russian President Vladimir Putin “accidentally” announced that Russian engineers were developing an autonomous nuclear torpedo: an underwater loitering munition that would be able to attack ships or coastlines. In early 2023 it was reported that the first of these torpedoes had been built and would soon be delivered to a new submarine. These torpedoes are designed to be used against coastal cities as well as aircraft carrier battle groups.

There are also land-based autonomous robots. Most notably, Australian company GaardTech is now producing the Jaeger Combat Autonomous Ground Robot (Jaeger-CAGR), which can operate in two different modes: Chariot and Goliath. In Chariot mode it uses its attached weapon system (reported to be either a light machine gun or a sniper rifle) to engage human or other organic targets; in Goliath mode it charges armored vehicles and self-destructs with a built-in explosive capable of disabling even heavily armored vehicles. It is about the size of a go-kart, can move at speeds up to 50 mph, and in one test remained fully functional after being shot 50 times with 7.62×51 mm rounds (a standard light machine gun caliber). The Jaeger-CAGR is produced for the Australian military and is also available for export purchase through the Australian Defence Catalogue.

Jaeger-CAGR. Photo: GaardTech

The Jaeger-CAGR can also operate in a “swarm,” where a number of the robots work in concert. Swarming is generally associated with UAVs like the Kargu in the Libya incident described above, but it works with land or aquatic robots as well. In 2017 a US government research agency began working on a project to develop a swarm of 250 aerial and ground autonomous robots for conducting surveillance and mapping in urban warfare environments. In 2021 Israel deployed autonomous UAV swarms to detect rocket and mortar firing locations in the Gaza Strip. These UAVs were not used to attack the positions themselves but served as a battlefield surveillance tool, so while they contributed to the fight, they were not being used as killer robots.

There are also stationary autonomous systems. Most notably, Israel and South Korea have deployed stationary sentry gun emplacements capable of autonomous operation, and similar systems have been used in some other Middle Eastern nations. Reportedly, all current users operate them in a “human in the loop” configuration, where the ultimate authority to fire rests with a human. Many navies worldwide use these types of systems as well, but in a defensive role against attacking aircraft, missiles, or rockets.

The ethics debate

The real question when it comes to killer robots is, of course: should we use them? The answer depends on who you ask.

In 2018, the United States government submitted a paper titled Humanitarian benefits of emerging technologies in the area of lethal autonomous weapon systems to the UN Group of Governmental Experts (GGE) convened to study LAWS. The title more or less tells you what you need to know.

Proponents of killer robots make the argument that they have the potential to save lives. Why send soldiers over that hill to see what is on the other side when I can send a robot? If there’s nothing there, those soldiers just got all tensed up and ready to fight for nothing, and getting tensed up and ready to fight wears soldiers out, especially when they are doing it over and over. If there is something there, there’s a fight and those soldiers might get injured or killed. What if I could just send a UAV over the hill to take a peek? If there’s nothing there, all I’ve spent is some battery power. If there is something there, it drops a bomb or whatever and I’m that far ahead when the human-to-human fighting starts.

As a former US Army Infantry Officer myself, I can certainly empathize with this argument. At the same time, the thought of living with the threat that, on top of all the other stressors of combat, a UAV might appear at any moment and drop a bomb on me is decidedly grim. Unfortunately, choosing not to use killer robots does not mean your enemy is going to do the same. This is another common argument for developing and using killer robots: if we don’t have them, we put ourselves at a disadvantage. By using them, we’re at the very least leveling the playing field.

Furthermore, while developing, buying, and fielding killer robots is expensive, they’re cheaper than human soldiers. Soldiers have to be recruited (or drafted, but either way you have to spend money to manage the process), trained, equipped, fed, housed, given access to health care, and so on. Soldiers who are wounded may require extensive recovery time, or be provided disability payments for the rest of their lives. The long-term costs of having human soldiers go on and on. Killer robots don’t need any of that. If they break, you fix them. Sure, you have to fuel them, whether that means a fuel tank or a battery of some kind, and you have to replace worn parts, but they don’t have to eat two or three times a day. Killer robots don’t get sad or angry. They don’t fight amongst themselves or lose their patience and commit atrocities. And at the end of the conflict, we can put our killer robots back into their boxes and forget about them. They don’t suffer from PTSD, they don’t have trouble adjusting to civilian life afterwards, and they don’t require veterans’ benefits or services of any kind.

In his book Robot Ethics (2022), philosopher Mark Coeckelbergh devotes a chapter to killer robots. He poses a lot of questions to consider, including “if they make their own targeting and killing decisions, should systems be given some capacity for moral decision-making, or is this the wrong question? When humans are no longer in or on the loop, who is responsible when something goes wrong, like when civilians are killed?”

This is a significant question in the argument against killer robots. Currently, when something like this goes wrong in the military, there is an investigation, and often someone is held responsible. In the US Army, leaders are often reminded of the Burden of Command: everything that happens or fails to happen during your command is your responsibility. We cannot really apply this to a robot, though. What does it mean to hold the robot responsible? Even with embodied AI, in the end it was just doing what it was programmed to do. Are the engineers who programmed the AI responsible? The company that produced the robot? The government that purchased and used it? The military officer who made the determination to use the robot in this instance? In addition to the liability questions arising from the legitimate use of killer robot technology, there are also the following situations to consider: What happens if the robot is hacked? What is the liability when some bad actor acquires killer robots and uses them for criminal or terrorist purposes?

The argument in this case is quite simple: we shouldn’t be building and using killer robots until we can satisfactorily answer the question of accountability, and since agreement on accountability is unlikely ever to be reached, we shouldn’t build or use them at all.

Another argument against killer robots goes like this: What is the standard for killer robots, meaning how good do they need to be? Do they need to be 100% reliable and never cause civilian casualties? Or do they just need to be better than humans? How do we measure that? What assurances are there? Many argue that image recognition is not yet accurate enough to guarantee there won’t be civilian casualties caused by killer robots, or that the number of civilian casualties will be high enough to violate the principle of proportionality.

Over a decade ago, Noel Sharkey wrote The Evitability of Autonomous Robot Warfare (2013), in which, among other things, he argues that robots are not capable of making proportionality decisions (the principle of proportionality holds that expected harm to civilians must not be excessive relative to the anticipated military advantage):

“What is the balance between loss of civilian lives and expected military advantage? Will a particular kinetic strike benefit the military objectives or hinder them because it upsets the local population? The list of questions is endless. The decision about what is proportional to direct military advantage is a human qualitative and subjective decision. It is imperative that such decisions are made by responsible, accountable human commanders who can weigh the options based on experience and situational awareness.”

This argument boils down to “battlefield decisions are too complex to be made by an algorithm.” Sharkey goes on to point out that no known roboticists at that time were conducting research into proportionality, and only a few were even suggesting it might be solvable by machines in the future.
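To see why critics call proportionality a qualitative judgment rather than a calculation, consider what a naive algorithmic version would have to look like. The sketch below is deliberately oversimplified and entirely hypothetical (every variable, input, and threshold in it is invented); the impossibility of justifying any particular numbers is precisely Sharkey’s point.

```python
def naive_proportionality_check(expected_civilian_harm: float,
                                expected_military_advantage: float,
                                threshold: float = 1.0) -> bool:
    """A deliberately naive rule: allow the strike only if the estimated
    military advantage outweighs the estimated civilian harm.

    Every input hides an unanswerable question: how do you put a number
    on 'military advantage'? On the long-term effect of angering the
    local population? Who chooses the threshold, and on what basis?
    """
    if expected_civilian_harm == 0:
        return True
    return (expected_military_advantage / expected_civilian_harm) >= threshold

# Example call with made-up numbers; the arithmetic is trivial, the inputs are not.
print(naive_proportionality_check(expected_civilian_harm=2.0,
                                  expected_military_advantage=5.0))  # True
```

The code runs, but the hard part is everything that has to happen before it is called, which is why Sharkey argues these decisions belong to accountable human commanders rather than to an algorithm.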

To sum up, the primary arguments against killer robots are these: they cannot be held accountable for their actions, and it is unclear who that accountability would fall to; they cannot accurately and reliably distinguish between combatants and non-combatants; and they cannot calculate proportionality because of its complexity.

Conclusion

Apart from a handful of isolated possible incidents, we are not yet living in a world where killer robots regularly operate autonomously. We are, however, living in a world where they exist. What will we choose to do with them? The question is complicated by its ambiguity; in other words, who’s the “we”? We can make policy as a nation easily enough, but the United Nations is making slow progress on a resolution against them. If we look to history for comparable examples, we are unlikely ever to get the whole world to agree on whether or not to use killer robots. There are a number of groups, such as Stop Killer Robots, lobbying for a global ban on LAWS, and defense companies that design and build killer robots lobbying for the opposite. Regardless of what you think about the use of killer robots, I would encourage you to explore this topic further and examine all the arguments for and against. If you happen to be a teacher, I would encourage you to do the same with your students.

For further reading:

Robot Ethics (2022). Mark Coeckelbergh. MIT Press.

Pros and Cons of Autonomous Weapons Systems (2017). Amitai Etzioni and Oren Etzioni. Military Review.

Humanitarian Benefits of Emerging Technologies in the Area of Lethal Autonomous Weapon Systems (2018). Submitted by the United States to the UN GGE.  

Stop Killer Robots (nonprofit organization)
