ICRC Humanitarian Law and Policy Blog


The International Committee of the Red Cross (ICRC) Humanitarian Law & Policy blog is a unique space for timely analysis and debate on international humanitarian law (IHL) issues and the policies that shape humanitarian action.
News

Episodes

Will AI fundamentally alter how wars are initiated, fought and concluded?
26-09-2024
In the debate on how artificial intelligence (AI) will affect military strategy and decision-making, a key question is who makes better decisions: humans or machines? Advocates of greater reliance on AI point to human heuristics and error, arguing that new technologies can reduce civilian suffering through more precise targeting and greater legal compliance. The counterargument is that AI-enabled decision-making can be as bad as, if not worse than, decisions made by humans, and that the scope for mistakes creates disproportionate risks. What these debates overlook is that it may not be possible for machines to replicate all dimensions of human decision-making; moreover, we may not want them to. In this post, Erica Harper, Head of Research and Policy at the Geneva Academy of International Humanitarian Law and Human Rights, sets out the possible implications of AI-enabled military decision-making for the initiation of war, the waging of conflict, and peacebuilding. She highlights that while such use of AI may create positive externalities, including in terms of prevention and harm mitigation, the risks are profound. These include the potential for a new era of opportunistic warfare, a mainstreaming of violence desensitization, and missed opportunities for peace. This potential needs to be assessed against the current state of multilateral fragility and factored into AI policy-making at the regional and international levels.
Conceive, standardize, integrate: distinctive emblems and signs under IHL
12-09-2024
When the very first Geneva Convention was adopted in 1864, it was the culmination of several interwoven humanitarian projects of the ICRC’s principal founder, Henry Dunant. One of those ambitions was the conception, standardization, and integration into what would become known as international humanitarian law (IHL) of the distinctive emblem of the Convention. Designed to signal the specific protections IHL accords to the medical services and certain humanitarian operations, the emblem – today the red cross, red crescent, and red crystal – is displayed on different persons and objects in the physical world, including on buildings, transports, units, equipment, and personnel that are accorded these protections. Over its 160-year history, the distinctive emblem has saved countless lives. Today, the ICRC is again engaged in a project to conceive, standardize, and integrate into IHL a means to identify those very same specific protections, but in a way the drafters of the original 1864 Geneva Convention could not have imagined: a digital emblem specifically designed to identify the digital assets of the medical services and certain humanitarian operations. In this post, building on previous work on this topic, ICRC Legal Adviser Samit D’Cunha summarizes some of the key milestones of the history and development of the distinctive emblem and explores how these milestones serve as a lodestone – or compass – for the Digital Emblem Project’s path forward.
Artificial intelligence in military decision-making: supporting humans, not replacing them
29-08-2024
The desire to develop technological solutions to help militaries in their decision-making processes is not new. More recently, however, militaries have been incorporating increasingly complex forms of artificial intelligence-based decision support systems (AI DSS) into their decision-making processes, including for decisions on the use of force. What is novel is that the way these AI DSS function challenges the human ability to exercise judgement in military decision-making. This potential erosion of human judgement raises several legal, humanitarian and ethical challenges and risks, especially in relation to military decisions that have a significant impact on people's lives, dignity and communities. In light of this development, we must urgently and earnestly discuss how these systems are used and their impact on people affected by armed conflict. With this post, Wen Zhou, Legal Adviser with the International Committee of the Red Cross (ICRC), and Anna Rosalie Greipl, Researcher at the Geneva Academy of International Humanitarian Law and Human Rights, launch a new series on artificial intelligence (AI) in military decision-making. To start the discussion, they outline the challenges, risks and potential of AI DSS for preserving human judgement in legal determinations on the use of force. They also propose measures and constraints on the design and use of AI DSS that can inform current and future debates on military AI governance, with the aims of ensuring compliance with international humanitarian law (IHL) and mitigating the risk of harm to people affected by those decisions.