What Are Lethal Autonomous Weapons and How Do They Work?
Lethal autonomous weapons — sometimes called "killer robots" — can select and engage targets without a human pulling the trigger. Here's how they work, where they're already deployed, and why the world is racing to regulate them.
The Weapon That Chooses Its Own Target
A soldier fires a missile by pressing a button. A drone pilot selects a target on a screen. Both involve a human making the final lethal decision. But a new class of weapon is changing that equation entirely. Lethal autonomous weapons systems (LAWS) — often called "killer robots" in public debate — can identify, select, and attack targets on their own, once activated, with no human intervention in the moment of killing.
As militaries around the world accelerate investment in AI-driven warfare, understanding how these systems work — and why they alarm international lawyers, ethicists, and humanitarian organizations — has never been more important.
How Autonomous Weapons Actually Work
The core mechanism is straightforward in concept, though deeply complex in practice. An autonomous weapon is pre-programmed with a target profile — a set of characteristics the system uses to identify what it should engage. This might be the radar signature of an incoming missile, the heat signature of a tank engine, or, in more advanced systems, the movement pattern of a human combatant.
Once deployed, the weapon's onboard sensors — cameras, radar, infrared detectors, acoustic sensors — continuously scan the environment. When the system's AI judges that something matches the target profile, it triggers an attack without any further human input.
Many systems use machine learning, meaning their behavior is derived from training data rather than explicit rules. As the International Committee of the Red Cross (ICRC) notes, this can create a "black box" effect: even engineers may not be able to fully predict or explain every targeting decision the system makes.
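To make the mechanism concrete, here is a deliberately simplified sketch of that sense-match-engage loop, written in Python. Everything in it is illustrative: the two sensor features, the thresholds, and the names (TargetProfile, SensorTrack, autonomy_loop) are invented for this example and model a rule-based, missile-defense-style check rather than any real system.

```python
from dataclasses import dataclass

# Conceptual sketch only: a toy "target profile" for an incoming-projectile
# interceptor, reduced to two hypothetical sensor features. Real systems fuse
# many sensors and, increasingly, learned classifiers whose internal criteria
# are far less inspectable than this rule-based check.

@dataclass
class TargetProfile:
    min_speed_mps: float       # minimum closing speed, metres per second
    max_radar_cross_m2: float  # upper bound on radar cross-section, square metres

@dataclass
class SensorTrack:
    speed_mps: float
    radar_cross_m2: float

def matches_profile(track: SensorTrack, profile: TargetProfile) -> bool:
    """Return True if a sensor track fits the pre-programmed target profile."""
    return (track.speed_mps >= profile.min_speed_mps
            and track.radar_cross_m2 <= profile.max_radar_cross_m2)

def autonomy_loop(tracks: list[SensorTrack], profile: TargetProfile) -> list[SensorTrack]:
    """Once activated, flag every track that matches the profile.

    In a human-out-of-the-loop system this flag would trigger an engagement
    directly; no further human input is sought.
    """
    return [t for t in tracks if matches_profile(t, profile)]

if __name__ == "__main__":
    profile = TargetProfile(min_speed_mps=250.0, max_radar_cross_m2=0.5)
    tracks = [
        SensorTrack(speed_mps=300.0, radar_cross_m2=0.1),  # fast, small object: matches
        SensorTrack(speed_mps=60.0, radar_cross_m2=2.0),   # slow, large object: ignored
    ]
    print(autonomy_loop(tracks, profile))
```

The point of the sketch is structural: once the profile check replaces a human decision, whatever matches gets engaged. Swap the transparent rule in matches_profile for a machine-learned classifier and the "black box" problem the ICRC describes appears as well, because the targeting criteria are no longer written down anywhere a reviewer can read them.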
A Spectrum of Autonomy
Not all autonomous weapons are equally autonomous. Experts typically describe a spectrum:
- Human-in-the-loop: A human approves every individual strike. The weapon is automated but not autonomous.
- Human-on-the-loop: The system acts autonomously but a human can override it within a narrow time window — common in missile defense.
- Human-out-of-the-loop: The weapon operates fully independently, with no practical opportunity for human intervention once launched.
The first category includes most armed drones in service today. The second includes systems like the U.S. Navy's Phalanx CIWS, a radar-guided gun that has autonomously detected and engaged incoming missiles since its introduction in 1980. The third — fully autonomous lethal systems targeting people — is where legal and ethical lines are being drawn.
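The practical difference between these modes comes down to where, if anywhere, a human decision sits in the engagement path. The short sketch below makes that explicit; the mode names and the authorize_engagement gate are illustrative simplifications, not drawn from any fielded system.

```python
from enum import Enum, auto

class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = auto()      # every engagement needs explicit approval
    HUMAN_ON_THE_LOOP = auto()      # engagement proceeds unless a human vetoes in time
    HUMAN_OUT_OF_THE_LOOP = auto()  # no human decision point after activation

def authorize_engagement(mode: ControlMode,
                         human_approved: bool | None,
                         veto_received: bool) -> bool:
    """Illustrative gate showing where a human sits in each control mode.

    `human_approved` is the operator's explicit decision (None if never asked);
    `veto_received` is whether an override arrived inside the veto window.
    """
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        return human_approved is True   # default: do not engage
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        return not veto_received        # default: engage unless stopped
    return True                         # out of the loop: always engages

print(authorize_engagement(ControlMode.HUMAN_ON_THE_LOOP,
                           human_approved=None, veto_received=False))  # True
```

The decisive detail is the default behavior: an in-the-loop system fails "closed" (no approval means no strike), while on-the-loop and out-of-the-loop systems fail "open".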
Already in Use
Contrary to what many assume, these weapons are not purely theoretical. In 2020, a Kargu-2 drone manufactured by Turkish firm STM reportedly hunted down and attacked a human target in Libya — what a UN Security Council panel described as possibly the first lethal strike by an autonomous weapon against humans. In 2021, Israel deployed AI-guided drone swarms in combat operations in Gaza.
Multiple nations — including the United States, China, Russia, South Korea, and Israel — are actively developing and fielding systems with varying degrees of autonomous targeting capability, according to Stanford's Freeman Spogli Institute for International Studies.
The Core Legal and Ethical Problem
International humanitarian law requires that combatants distinguish between soldiers and civilians, assess proportionality, and take precautions before using lethal force. The ICRC argues that only humans can make these complex, context-sensitive judgments. A machine learning system that mistakes a farmer carrying a hoe for a soldier carrying a rifle cannot be held legally or morally accountable — and its victims have no recourse.
"Machines cannot exercise the complex and uniquely human judgments required on battlefields," the ICRC states in its position on autonomous weapons. Critics add that deploying weapons that operate without meaningful human control could lower the threshold for starting wars, since no soldiers are placed at risk on the deploying side.
The Push for Global Rules
In December 2024, the UN General Assembly adopted a resolution on lethal autonomous weapons by a vote of 166 in favor, with only Belarus, North Korea, and Russia opposed. More than 120 countries now support negotiating a legally binding treaty. The UN Secretary-General has called for new international law to be concluded by 2026.
Human Rights Watch and the Stop Killer Robots coalition advocate for a treaty that would ban autonomous weapons designed to target people and impose strict human-control requirements on all others. So far, however, no binding agreement exists, and the technology is advancing faster than diplomacy.
Why It Matters
The debate over autonomous weapons is not abstract. As AI capabilities improve and military budgets pour resources into unmanned systems, the question of who — or what — makes the decision to kill is becoming one of the defining ethical questions of modern warfare. Whether international law can keep pace with the technology will shape the nature of armed conflict for decades to come.