Equinet, in collaboration with the European Network of National Human Rights Institutions (ENNHRI), has issued a joint statement urging policymakers to prioritise equality and fundamental rights considerations in the ongoing trilogue negotiations for the EU Artificial Intelligence Act (AI Act).
Read our key recommendations!
Read the Equinet-ENNHRI Joint statement!
EU AI Act’s potential for equality-compliant use of Artificial Intelligence
The AI Act is set to build the first regulatory framework for governing the use of Artificial Intelligence (AI) in the EU. By embedding an equality and fundamental rights perspective, the AI Act may live up to its potential as a tool for promoting equality and protecting the rights of individuals in the current context of expanding uses of AI systems.
In July, Equinet published a set of recommendations to ensure the EU AI Act’s compliance with equality and non-discrimination law, to remove barriers to its enforcement, and to complement it with additional AI-specific safeguards. These recommendations offered specific, actionable suggestions for the negotiations to take into account. Key asks included, among others: ensuring access to justice for rights-holders through stronger provisions on remedies, and ensuring consistency and coherence between national enforcement under the AI Act and existing enforcement of non-discrimination obligations through cooperation with Equality Bodies.
Regrettably, the trilogue negotiations have partially fallen short of adequately addressing equality and fundamental rights considerations, motivating the joint statement by Equinet and ENNHRI. This joint call is based on the expertise and experience of over 60 independent national authorities, established by constitution or law to protect and promote fundamental rights and equality in over 40 European states.
Equinet and ENNHRI Joint Statement
With this joint statement, Equinet and ENNHRI ask policymakers to prioritise equality and fundamental rights considerations. In particular, we ask them to:
- Ensure an effective and robust enforcement and governance framework for foundation models (FMs) and high-impact foundation models (HIFMs) – keep mandatory independent third-party risk assessment, include fundamental rights expertise, and place stronger oversight on FMs and generative AI through a tiered approach to prevent gaps in fundamental rights protection;
- Ensure robust legal protection for high-risk systems (by reverting to the European Commission’s original proposal on Article 6), in line with the AI Act’s goals of improving the functioning of the internal market and promoting human-centric and trustworthy artificial intelligence;
- Ensure effective cooperation between the AI Office, national supervisory authorities, and independent public enforcement mechanisms specialised in non-discrimination and fundamental rights protection (Equality Bodies and National Human Rights Institutions (NHRIs));
- Guarantee access to justice for victims of AI-enabled discrimination by creating a redress mechanism before the AI Office and rights of individual and collective representation. In addition, because victims often struggle to know that they have been subjected to discrimination and to claim their rights, Equality Bodies, NHRIs and other relevant public interest organisations must be explicitly empowered to submit complaints to supervisory authorities and the AI Office in their own name, without any identifiable victims;
- Oblige deployers of AI systems, in both the public and the private sector, to carry out fundamental rights impact assessments so that negative impacts on equality and fundamental rights can be prevented;
- Ban biometric and surveillance practices that pose unacceptable risks to equality and human rights, as Equality Bodies and National Human Rights Institutions have demonstrated;
- Ban predictive policing for criminal and administrative offences, which undermines the rights to non-discrimination, to an effective remedy and to a fair trial, as well as the presumption of innocence. Artificial Intelligence systems can embed structural biases, potentially leading to disproportionate over-policing of certain groups of people.