This thematic area measures steps countries have taken to integrate the principles of proportionality and do no harm into AI systems, and to implement safeguards in the design and operation of AI systems.
Proportionality can be defined as ‘the fact or quality of being in proper balance or relation as to size or quantity, degree, severity, etc’. As a foundational principle of international law, proportionality underpins the requirement that actions be both fair and just.
Country and regional scores
Bright spots
Relevance
AI systems are increasingly deployed across diverse sectors, including healthcare, education, agriculture, and peace and security domains such as policing and military applications. To ensure responsible deployment, AI designers, developers, and deployers must strike a balance between harnessing the benefits of AI and mitigating its potential harms. This requires adhering to the principles of proportionality and 'do no harm', which call for limiting the design, development, and deployment of technologies to what is necessary for achieving their intended goals, while minimizing the associated risks.
It is crucial for AI actors to center their practices on the foundational principle of 'do no harm', not only when developing high-risk technologies but also when deploying AI in high-risk environments. They should proactively mitigate risks and address harms caused by AI in order to prevent human rights abuses over time.
Country rankings by Proportionality and do no harm
*In the table above, data was collected for Chinese Taipei (commonly referred to as Taiwan) and the West Bank, which is under the administration of the Palestinian Authority. No data collection took place in Gaza. We recognize that these are, at the time of writing, disputed jurisdictions.