Defining human responsibility and limits in the use of Artificial Intelligence for military weaponry

A recent report prepared by the International Committee of the Red Cross (ICRC) and the Stockholm International Peace Research Institute (SIPRI) indicates that the development of weapons systems directed by Artificial Intelligence (autonomous weapons systems, AWS) poses a set of dilemmas for the ethics of armed conflict and, of course, for International Humanitarian Law.

A panel within the framework of the 1980 United Nations Convention on Certain Conventional Weapons has been meeting for about eight years to discuss the role of these weapons. Although they have apparently been useful for protecting troops deployed abroad and for the early detection of hostile attacks by insurgent groups, they are becoming problematic because they decide autonomously over the life and death of people, with no link to a decision-maker, who must always be a human being. Regulation is difficult to implement because the defining characteristic of these weapons (which may be drones, tanks, mobile machine guns or others) is their ability to react autonomously to the proximity of an individual who can be classified as an enemy combatant or military objective, without any prior order, simply by executing a previously programmed algorithm.
It is well known that the definition of what is and is not acceptable in the use of AWS remains controversial, caught between international organizations trying to put limits on these systems, States (mainly Western) that consider them necessary for their military campaigns abroad, and, of course, the arms lobbies pressing for greater deregulation. Nevertheless, the ICRC-SIPRI report makes important findings that must be considered:

1. Exercising human control over autonomous weapons systems is quite difficult while preserving the very “autonomy” the system was designed for.
2. From an ethical and Human Rights perspective, the use of AWS creates a legal limbo in which human responsibility cannot be assigned, so it is essential to develop some form of control over the use of force.
3. Even from an operational point of view, while AWS are useful for the early detection of and response to attacks, these systems remain unpredictable and can jeopardize the security of any military operation.
4. The controls to be implemented should include restrictions on the types of military objectives, limits on the duration and geographic scope of operations, remote deactivation systems, parameters for the recognition of civilian personnel, and other measures.

States that deploy such systems will have to define how these measures are applied in practice, while the arms-control forums within the UN will have to establish stronger controls and clearer rules for the use of AWS.
To conclude, I would simply recall that if politics is always five steps ahead of the law, technology is evidently a hundred steps further ahead. It is necessary to narrow the regulatory gap in the use of new technological devices, which, when applied to the military field, can produce truly disastrous results, as well as serious Human Rights problems for the States that use them.