The Federal Anti-Discrimination Agency published a study on the risks of discrimination caused by algorithms in November 2019 and has just released the English version. The study focuses on algorithms used for data processing and for the semi- or fully automated implementation of decision rules to differentiate between individuals. Such differentiations relate to commercial products, services, positions or payments, as well as to state decisions and actions that affect individual freedoms or the distribution of services.
Many examples illustrate that detecting and proving discrimination caused by algorithms is possible without direct inspection of the algorithm or “opening” the software system. Instead, evidence of unequal treatment or discrimination can be provided by collecting and investigating publicly available data on the outcomes of differentiation decisions, derived from the interactions and transactions of the services and products under investigation.
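To make this concrete, a minimal Python sketch of such an outcome-based check follows. It is an illustration, not a method prescribed by the study: the observation data, group labels and the 0.8 flagging threshold (a heuristic borrowed from US “four-fifths rule” practice) are all assumptions for the example.

```python
# Minimal sketch of an outcome-based discrimination check, assuming we have
# collected publicly observable decision outcomes (e.g. offer made / denied)
# together with a protected characteristic. Data, group labels and the 0.8
# threshold are illustrative assumptions, not prescribed by the study.

from collections import defaultdict

# Hypothetical collected outcomes: (protected_group, favourable_outcome)
observations = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(obs):
    """Favourable-outcome rate per group."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, ok in obs:
        totals[group] += 1
        favourable[group] += int(ok)
    return {g: favourable[g] / totals[g] for g in totals}

rates = selection_rates(observations)
reference = max(rates.values())  # best-treated group as the baseline

for group, rate in sorted(rates.items()):
    ratio = rate / reference
    # Ratios below 0.8 are flagged only as a starting point for closer
    # investigation; they are not, by themselves, proof of discrimination.
    flag = "  <- warrants closer investigation" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.2f}, ratio to reference {ratio:.2f}{flag}")
```

A disparity flagged this way is a starting point for the legal and empirical assessment described above, since an apparently neutral criterion may explain or fail to explain the gap.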
Together with requests and information from those affected and from the media, such investigations of outcomes can serve as starting points for the equality bodies’ tasks of advising and supporting persons affected by discrimination, including in the online sphere and in automated decision-making. If not enough information on the outcomes of decisions is publicly available, the access options and rights of equality bodies should be extended so that they can fulfil their mandate to identify and reduce discrimination.
With personalized offers and services enabled by algorithms, it can be difficult for affected persons to detect how individuals are differentiated and to make the comparisons needed to provide evidence of unequal treatment. Equality bodies generally have expertise and experience with regard to the groups of persons, situations and treatments prone to discrimination, the usual rationales for discrimination, and seemingly neutral criteria that correlate with protected characteristics. Such expertise can serve as a starting point for systematic empirical investigations, anti-discrimination testing and algorithm audits.
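One established design for such testing is matched-pair (“correspondence”) testing: probing a system with profiles that are identical except for a protected characteristic and comparing the outcomes. The sketch below illustrates the idea only; `query_decision_system` is a hypothetical stand-in for whatever personalized service would actually be audited, and its behaviour is invented for the example.

```python
# Minimal sketch of matched-pair ("correspondence") anti-discrimination
# testing: submit profiles identical except for one protected characteristic
# and compare outcomes. `query_decision_system` is a hypothetical stand-in
# for the audited service; its logic here is invented for illustration.

import random

def query_decision_system(profile: dict) -> bool:
    # Hypothetical stand-in: in a real audit this would be the personalized
    # service or scoring system under test, not a local function.
    base = profile["income"] > 30000
    penalty = profile["group"] == "group_b" and random.random() < 0.3
    return base and not penalty

def matched_pair_test(base_profile: dict, attribute: str, values: tuple, n: int = 200):
    """Return per-value acceptance rates for otherwise-identical profiles."""
    rates = {}
    for value in values:
        accepted = 0
        for _ in range(n):
            probe = dict(base_profile, **{attribute: value})  # vary one attribute only
            accepted += query_decision_system(probe)
        rates[value] = accepted / n
    return rates

base = {"income": 42000, "group": None}
print(matched_pair_test(base, "group", ("group_a", "group_b")))
```

Because the probes differ in a single attribute, a systematic gap in acceptance rates points to that attribute, which is the kind of comparison individual affected persons usually cannot construct on their own.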
In particular, legal provisions could require that entities using artificial intelligence algorithms and applications in automated decision-making assess discrimination risks, document, among other things, the functioning and decision rules of their systems, and ensure explainability, including with regard to possible consequences such as unequal treatment. Such documentation should be accessible to equality bodies in cases of suspected discrimination, with the right of access regulated by law.
Other (potential) tasks of equality bodies include advising entities that develop and implement algorithms on the prevention of discrimination, and (mandatory) involvement in public procurement procedures for algorithm-based systems that are particularly prone to discrimination.
This study was funded by a grant from the Federal Anti-Discrimination Agency (FADA) and written by Dr Carsten Orwat, Institute for Technology Assessment and Systems Analysis (ITAS), Karlsruhe Institute of Technology (KIT).
Read the Equinet Report: “Regulating for an Equal AI: A New Role for Equality Bodies” on the consequences of digitalisation for (in)equality and the role national equality bodies (NEBs) can play in this field.
The report shows that equality should be a central consideration in any EU approach to the human and ethical implications of AI. Equality bodies have a vital role to play in securing the benefits of AI; however, national authorities need to provide them with adequate and meaningful powers as well as secure and sufficient resources.