This training took place in hybrid format on 1 December from 9:00 to 16:30 CET. It targeted Equality Bodies, with a focus on members of Equinet's Cluster on Artificial Intelligence, and aimed to improve their ability to identify cases of algorithmic discrimination.
The notorious "black box" of automated decision-making, including through Artificial Intelligence (AI)-enabled systems, poses a serious and widely discussed challenge to the effectiveness of non-discrimination law. Underreporting has long been known to undermine the strength of legal protection against discrimination, and AI systems threaten to exponentially increase its scale and negative impact. Because victims of discriminatory algorithms are often unaware that they have been discriminated against, the responsibility for identifying and tackling AI-enabled discrimination could increasingly fall upon Equality Bodies, whether working alone or alongside relevant national sectoral regulators.
The training addressed this challenge by highlighting and exploring the various ways in which equality bodies can seek out and successfully identify cases of algorithmic discrimination.
If you have any questions regarding this training and/or experience difficulties with accessing the AI website or any of its content, please contact Milla Vidina, Policy Officer, Equinet Secretariat (firstname.lastname@example.org).