When public institutions procure IT services/products, including AI or algorithm-driven systems, they must ensure the effective protection of human rights. This responsibility stems from international and/or constitutional obligations, often translated into more detailed norms for specific rights (e.g. the right to data protection or the right to non-discrimination), and in the near future it will be complemented by additional legal instruments, including the EU Artificial Intelligence Act. When procuring IT services/products, public institutions should make explicit which public values and democratic principles are to be preserved and safeguarded, and they should genuinely reflect on whether a technological solution is effective and appropriate for solving a specific problem or pursuing a specific public policy.
Human rights impact assessments play a key role in this reflection and are essential for securing public trust in technology. To achieve this, impact assessments should be a mandatory practice in which public values and human rights implications are properly considered, weighed and fully respected. Regardless of the methodology adopted, the assessment process must be transparent, accountable, participatory and embedded in the wider societal context on which the technology might have an impact.

The indicators for the human rights impact assessment of IT services/products in procurement processes were developed within the framework of the project “Privacy by Design – Building an Inclusive digital ecosystem” by the cross-sector working group on business and human rights.
This document provides guidelines for checking that both the public institutions that procure IT services/products, including AI or algorithm-driven systems, and their vendors/developers have effective mechanisms in place to assess the impact of these systems on human rights and to manage/mitigate the risks of harm. You can download the indicators at this link.
