Hope and the Hype – Humanitarian Protection in the Digital Space

INTRODUCTION

Despite the many regulatory and political measures taken to limit human suffering during war, civilians living in conflict zones and violent environments remain the main victims of abuses and the effects of violence. Unfortunately, this trend shows no signs of abating. The factors that fuel the violence include competing power dynamics, protracted ethnic, religious and sectarian tensions, weak rule of law, unequal access to resources, poverty and the impact of climate change, to name a few. In addition to these underlying drivers, the role of exponential digital technology – that is, technology that expands and improves exponentially – needs to be closely examined.

Humanitarian organizations, such as the International Committee of the Red Cross (ICRC), work tirelessly to protect the lives and dignity of victims of armed conflict and other situations of violence and to prevent human suffering. Although conflict and the associated humanitarian harm still manifest themselves primarily in the physical world, recent technological developments have introduced new layers of complexity into how conflict and violence unfold and how they can endanger the lives and security of civilian populations on the ground.

“Protecting people” is at the heart of the ICRC’s mandate. It is a complex field of work that must constantly adapt to changing realities. This has led to new activities, as well as continued reflections and engagement on the development and application of international humanitarian law (IHL), humanitarian policies and programs, and operational standards. Understanding the new digital challenges and their different implications is therefore essential for the ICRC and humanitarian protection workers as they begin to design ways to meet these challenges.

DIGITAL RISKS: WHAT HAVE WE OBSERVED?

While digital technologies can help improve the lives of individuals and communities affected by war and violence, depending on their uses, they can also create additional and disastrous risks.

Advances in technology have enabled new means and methods of warfare, such as cyber attacks, which today can disrupt or compromise critical infrastructure in countries at war remotely and anonymously. Technology has also intensified information warfare between parties to a conflict: the speed and reach of information now enable parties to influence opinions, emotions and behavior on a large scale. Think, for example, of two rival parties that each put up a single billboard in a city, which might influence the perceptions and behavior of around a thousand people. Now think of the same two groups using social media's ability to reach billions of people across the globe, capturing attention, monetizing every click and shaping people's behavior.

The spread of harmful content in the form of misinformation, disinformation and hate speech (MDH) on social media, which was widespread during the recent global crisis, is an example of how technology can amplify risks for affected populations, but also for the humanitarian organizations trying to serve them. Recent history has seen situations of conflict in which rights groups observed a significant increase in social media posts inciting violence against ethnic minorities and encouraging civilians to take up arms. Polarizing and bogus content has been widely disseminated on social media, fueling ethnic tensions and violence between communities. In some cases, humanitarian organizations themselves have been the target of suspicion or, at worst, suspended from operating because of the dissemination of false information.

Artificial intelligence can be used for various purposes and functions such as predictive analytics, assessment and monitoring, or supply chain management. At the same time, it raises concerns because of the bias it inevitably involves and its impact on people: by automating decision-making processes (in education, salaries, job interviews, etc.), it can reproduce patterns of discrimination and stigma. The role that artificial intelligence and algorithms can play in delivering harmful content online and creating echo chambers is increasingly being discussed and exposed in the public domain. The algorithms and machine-learning models used by social media platforms are designed to maximize time online and engagement, which in turn drives profit through increased ad exposure and clickbait. And human engagement is often best secured by showing polarizing content that taps into deep emotions such as disgust, fear and anger.

Another related exponential risk is the use and misuse of (personal) data, which is collected on a large scale as we surf the internet and social media and unintentionally disclose a great deal of information about ourselves. Many of our lives will not necessarily be dramatically affected; for people living in conflict zones, however, the story may be very different, as levels of vulnerability are higher and coping and resilience mechanisms are often severely weakened. Personal information, if misused, can lead to protection issues such as discrimination, forced displacement, persecution or detention.

In recent years, growing concerns have been expressed about the safety of affected populations and individuals whose personal information could lead to their identification and tracking by certain actors through their digital histories and their connections on social networks. Humanitarian organizations can contribute to these risks when they pilot innovative responses and technologies, such as cash transfer programs or biometric registration, in already fragile contexts. Technologies are increasingly used in the hope of improving the efficiency and scale of humanitarian responses. However, this is often done with a techno-solutionist approach (not to mention techno-colonialism) and with a limited understanding of the technology used and the risks to those affected.

WHAT DOES THIS MEAN IN TERMS OF HUMANITARIAN PROTECTION?

Based on the above, two things can be deduced. First, digital technologies, depending on their uses, can lead to real protection or security problems with serious humanitarian consequences. These can include, both offline and online, stigma, discrimination, denial of access to essential services, surveillance, persecution and attacks on the physical and psychological integrity of affected populations. Second, our lives and what happens in them are no longer limited to the physical environment in which we live. More and more, what happens online has repercussions offline and vice versa. As humanitarian protection practitioners, we need to be present on both fronts; the questions are how to do so and what providing humanitarian protection in the digital space actually means.

At this point there is no clear answer, but a few considerations can guide our thinking.

The increasing use of digital technologies and data increases the need for strengthened data protection rights and safeguards to ensure the protection and dignity of people at risk, and the impartiality of humanitarian action. It is about moving from theory to practice. In other words, humanitarian organizations must learn how to responsibly use digital technologies and process digital data. This includes understanding the consequences by asking whether digital tools that can put people's safety at risk are in fact the right solutions to their problems and, if so, how to mitigate those risks at different levels. This requires investments and training that should not be neglected but rather prioritized, in particular by supporting organizations with more limited means.

Digital technologies have also introduced new types of actors into the conflict ecosystem, such as large tech companies, other private sector actors and hackers. Humanitarian organizations must move beyond the hype and learn to engage both traditional actors in armed conflict and these new ones on the risks of deploying new technologies in armed conflict, their potential consequences for people, and the corresponding accountability to mitigate harm. Such engagement must reflect the changing nature of conflicts and "battlefields", as well as ways to ensure accountability in the digital space.

Understanding the vulnerabilities and risks people face, and designing ways to address them, is essential to meaningful humanitarian protection work. As these vulnerabilities and risks also manifest in the digital space, humanitarian organizations need to develop their capacities and skills, working closely with those affected and in partnership with academia, to detect and analyze these risks and assess how they affect the security and dignity of the populations they are meant to serve. At the heart of these issues is the need for sound techno-humanitarian analysis, but above all for a clear commitment to a Neutral, Impartial and Independent Humanitarian Approach (NIIHA) that can guide the sector towards better designs and more effective responses with those affected.

In today’s world, the more powerful and exponential the technology, the greater the potential risk created for people, especially those in vulnerable situations such as armed conflict. Defining what protection in the digital space is, and how to deliver it, is an area that requires urgent attention if we are to remain relevant in the design and implementation of our humanitarian efforts.

I hope this discussion and work will help policymakers, from individual organizations to political leaders, to recognize the hype, assess the humanitarian impact of emerging technologies more realistically, and invest in developing standards for delivering humanitarian protection in the digital space.

Author | Delphine van Solinge