
What does Automated Decision-Making Portend for the Fight Against Discrimination in Developing Countries?


This post is a summary of the article Algorithmic Decision-Making and Discrimination in Developing Countries, published by the same author.

The use of machine learning algorithms to make decisions that affect our lives grows by the day. In this essay, I use machine learning to refer to the process in which algorithms extract a function from a dataset, and that function is then fed to other algorithms which make decisions. In developing countries, machine learning algorithms are being used to make credit, subsidized state housing, and educational placement decisions. What does the increased use of machine learning tools to make recommendations, predictions, and decisions mean for discrimination law in developing countries? In my recently published journal article, I argue that there is serious reason for concern.

AI decision-making will pose a significant challenge for discrimination law across the world

Automated decision-making poses significant challenges to discrimination law in any country. To begin with, discrimination law that focuses on proving direct discrimination is on the ropes. The vast majority of countries share a fairly similar feature in their direct discrimination laws: the requirement of proof of a significant correlation between a protected ground and an impugned decision. At the moment, it is more or less impossible for anyone to point out with precision which particular factors a deep learning algorithm uses to arrive at an outcome, so it will be hard to prove that a protected ground was a key factor in an automated decision.

Indirect discrimination may be easier to demonstrate (by showing statistical evidence, for example), since it only requires evidence that a protected group has suffered a disparate impact. However, indirect discrimination law in most countries allows a defendant several possible justifications. One that will readily be raised when contesting an automated decision is business efficiency: once a defendant claims that an algorithm was used because of the efficiency it offered, it will be very difficult for the petitioner to disprove that position.

Given this situation, putting in place requirements that reduce the chances of algorithms producing direct and indirect discrimination assumes much greater importance. One key mechanism is ex-ante scrutiny of algorithms. Yet even then, as computer scientists have long shown, algorithms often do not perform the same way in testing as in actual use. No matter how many times we subject an algorithm to ex-ante testing, we can never be certain what outcomes it will produce once deployed in its real environment.
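As a concrete illustration of the kind of statistical evidence mentioned above, the sketch below computes a selection-rate ratio of the sort used in US employment-discrimination practice (the "four-fifths rule"). The decision records, group labels, and 0.8 benchmark are purely illustrative assumptions; the underlying article does not prescribe this particular test.

```python
# Hypothetical illustration of disparate-impact evidence: compare the rate of
# favourable automated decisions across two groups. A ratio below 0.8 is often
# treated as prima facie evidence under the US "four-fifths rule".

def selection_rate(decisions):
    """Return the fraction of favourable (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

# 1 = favourable decision (e.g. credit approved), 0 = unfavourable.
# These records are invented solely for illustration.
protected_group = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0]
comparator_group = [1, 1, 0, 1, 1, 1, 0, 1, 1, 0]

impact_ratio = selection_rate(protected_group) / selection_rate(comparator_group)
print(f"Impact ratio: {impact_ratio:.2f}")  # prints 0.43, well below the 0.8 benchmark
```

Even where such a disparity can be shown, the point made above still stands: a business-efficiency justification may defeat the claim.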

Developing countries face an even steeper challenge

Computer scientists are working on building tools that will make it easier for us to understand how algorithms arrive at their outcomes. Until that happens, the most plausible solution, in my view, to the challenges of preventing and proving discrimination via automated decision-making is to require states to put in place certain measures to scrutinize it. These include requirements regarding the algorithmic training process and testing before an algorithm is run in its real environment, together with measures that enable consistent scrutiny of automated decision-making outcomes. I argue in my article that for these measures to succeed, a particular type of institutional framework is required. The key elements of such a framework include: (a) a determined endorsement of transparency norms and the regular study and publication of statistical data on the disparities faced by people who belong to protected groups, (b) the existence of vigilant non-governmental actors focused on automated decision-making, and (c) the existence of a reasonably robust and proactive executive branch or an independent office to police discrimination.

The first feature would give non-governmental actors the tools to assess the workings of algorithms and their outcomes without state assistance. The second and third features are important because of the significant knowledge gap between the user (who deploys an algorithm to decide) and the individual affected by the automated decision: the latter will likely not even know that they have been discriminated against.

To assess whether the framework described in the preceding part exists in developing countries, I examined the situation in five countries: Kenya, India, Nigeria, South Africa, and the Philippines. On the policing of discrimination, I unsurprisingly found that all five case-study countries have some law that prohibits discrimination. Although all five also have some sort of independent office set up specifically to protect people’s human rights, I found no evidence that any of these offices proactively investigates discrimination; instead, affected persons often have to discover and pursue discrimination claims on their own. Additionally, in all the countries studied, executive branch actors have historically played no serious role in policing discrimination, a culture sustained by the absence of any legal requirement for executive agencies to play such a role and by the general under-prioritization of the fight against discrimination.

All five countries have a significant number of non-governmental actors whose mission is the defence of human rights. While some have published the occasional report on automated decision-making, none of those I found is especially focused on it. Finally, although all the countries studied give their citizens a constitutional right to a certain degree of access to information, I found that it is rare and difficult for people to obtain the information they seek, even from state departments, and private entities are certainly not required to allow access to information about how the tools they use work. My research thus found that the five countries struggle on each of the three fronts I laid out. Given their similar governance inefficiencies and socio-economic conditions, we can confidently expect these findings to be replicated across the vast majority of developing countries.

Conclusion

Because of the commercial promise machine learning tools carry, their use in decision-making is bound to grow across the world. My research demonstrates that people who live in developing countries are especially vulnerable to the injustices that can result from such tools, and anti-discrimination advocates need to pay special attention to ensure that automated decision-making does not embed inequalities in the societies in which it is used.

Suggested Citation

Cecil Abungu, ‘What does Automated Decision-Making Portend for the Fight Against Discrimination in Developing Countries?’ (The Digital Constitutionalist, 09 March 2022) <https://digi-con.org/what-does-automated-decision-making-portend-for-the-fight-against-discrimination-in-developing-countries/>

Cecil Abungu
Teaching fellow, Strathmore Law School & Research Affiliate, Legal Priorities Project

