Killer drones are now a reality in most major armies. Public opinion fears them, seeing them as the precursors of intelligent, fully autonomous robots ready to free themselves from human control, like the Terminator.
The law says very little about the use of military drones, and existing rules contemplate only human-operated drones.
So what rules would apply to the use of autonomous military drones, or "lethal autonomous weapon systems" (known by the French acronym SALA)?
This category covers, broadly, all robots that could themselves make the decision to kill, on the battlefield or for law enforcement purposes.
The UN Special Rapporteur defines them as "weapon systems that, once activated, can select and engage targets without the intervention of a human operator".
The moral debate
On a moral level, the use of killer drones draws strong criticism from its detractors. The most common criticisms are as follows:
A killing by a killer drone would be contrary to human dignity. This argument seems weak, however: in what case is killing ever consistent with human dignity?
Killer drones would be more readily rejected by the population. This argument carries real weight. Studies have shown much lower acceptance among local populations (particularly in Pakistan and Afghanistan) of deaths caused by drones, compared with losses caused in "human" fighting.
A drone war is a bit like a video game, leading to the disempowerment of the belligerents. This is probably the strongest argument, and the one that most deserves consideration.
The legal debate
War is not without laws. It is codified, in particular by the Geneva Convention and its Additional Protocols, as well as customary humanitarian law.
Failure by the responsible officers to comply with these rules may lead to conviction for war crimes.
Among the principles of the law of war that will apply to the use of killer drones are the principles of precaution, distinction and proportionality. In other words, civilians must not be targeted, and incidental damage must be limited as far as possible.
One temptation in this debate is to expect the machine to be infallible.
It is interesting in this respect to refer to the well-known Arkin test (named after the American roboticist Ronald C. Arkin), a kind of adaptation of the famous Turing test to artificial intelligence.
According to Arkin, the robot's infallibility is an illusion. What matters is that the machine's behaviour be indistinguishable from human behaviour in a given context. A robot meets legal and moral requirements, and can therefore be deployed, when it has been demonstrated that it can respect the law of armed conflict as well as or better than a human being in similar circumstances.
This test could lead to surprising legal conclusions: if the machine proves more reliable than man, then its use should become mandatory, since a belligerent who relied instead on the more fallible human could incur liability for doing so.
What liability regime in a military context?
What could be proposed as the outline of a liability regime for the use of autonomous AI in a military context? Such a regime should operate as a set of "safeguards" binding on military decision-makers:
Only "military objectives by nature" within the meaning of the law of war (IP, art. 52-2) should be targeted by the machine, i.e. objectives that give rise to little or no interpretation (e.g. a tank on a battlefield).
Some contexts should be excluded because they are too open to interpretation by the machine, and therefore to error (e.g. an urban environment).
The "benefit of the doubt" should be programmed by default, and not deactivatable.
The possibility of remotely disabling the firing function (a veto power) should also be programmed. This complies with Rule 19 of customary international humanitarian law: "Do everything feasible to cancel or suspend an attack when it appears that its objective is not military or that it can be expected to cause incidental civilian casualties".
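The four safeguards above can be pictured as a layered decision function. The sketch below is purely illustrative: every name, threshold and rule in it is an assumption made for the example, not a real weapon-control API or an actual legal standard.

```python
# Illustrative sketch only: hypothetical names and thresholds, not a real system.
from dataclasses import dataclass

@dataclass
class Target:
    category: str       # e.g. "tank", "vehicle", "person"
    confidence: float   # classifier confidence in [0, 1]

# Assumption: only unambiguous "military objectives by nature" are listed.
MILITARY_BY_NATURE = {"tank", "artillery", "warship"}
EXCLUDED_CONTEXTS = {"urban"}   # contexts too open to interpretation
DOUBT_THRESHOLD = 0.99          # "benefit of the doubt": hard-coded, no off switch

def may_engage(target: Target, context: str, veto_active: bool) -> bool:
    """Apply the four safeguards in order; any failure blocks engagement."""
    if veto_active:                                # remote veto power always wins
        return False
    if context in EXCLUDED_CONTEXTS:               # excluded contexts
        return False
    if target.category not in MILITARY_BY_NATURE:  # objectives by nature only
        return False
    return target.confidence >= DOUBT_THRESHOLD    # benefit of the doubt

print(may_engage(Target("tank", 0.995), context="open_field", veto_active=False))  # True
print(may_engage(Target("tank", 0.995), context="urban", veto_active=False))       # False
```

The ordering matters to the argument: the human veto sits above every automated check, so the machine can never "out-vote" its operators, and the doubt threshold is a constant rather than a parameter, mirroring the requirement that the benefit of the doubt not be deactivatable.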