This thematic area examines the measures countries have in place to prevent and mitigate the risk of discrimination posed by biases in the design, development, and use of AI.
Bias is defined as a prejudice for, or against, an individual or group in a way that is often unfair.
Unfair discrimination is the practice of treating an individual or group in a way that cannot be justified, because it neither aims to achieve greater equity nor to advance members of a previously disadvantaged group.
Country and regional scores
Bright spots
Relevance
Although some countries have made significant progress in protecting the right to equality and non-discrimination, deeply embedded biases in AI systems can undermine these advances, further excluding marginalized individuals from economic, social, and political opportunities. Numerous studies have shown that algorithms trained on biased datasets, or designed without inclusivity in mind, can produce discriminatory outcomes. For example, the 2018 Gender Shades project by MIT's Media Lab found error rates of up to 34.7 percent for dark-skinned women, compared to 0.8 percent for light-skinned men, across three AI-powered facial recognition systems. This research not only exposed significant biases in facial recognition accuracy along lines of race and gender, but also highlighted that algorithmic fairness depends on a range of contextual factors.
Country rankings by Bias and unfair discrimination
*In the table above, data was collected for Chinese Taipei (commonly referred to as Taiwan) and for the West Bank, which is under the administration of the Palestinian Authority. No data collection took place in Gaza. We recognize that these are, at the time of writing, disputed jurisdictions.