Automated decision-making is becoming more commonplace and is increasingly found in critical sectors such as job recruitment, credit score prediction and medical diagnostics. By helping individuals navigate high-stakes environments (in terms of tempo and complexity) and cope with an ever-increasing volume of information, which can overload the human capacity to process data and lead to stress and fatigue, the automation of these decisions is supposed to help us and limit the margin for human error.
However, the artificial intelligence behind these decisions is only as good as the data it has been given, and there are already too many cases where baked-in human biases show their ugly face. These biases can lead to allocative harms (the unfair allocation of an opportunity or resource) and representational harms (the reinforcement of stereotypes and discrimination against specific groups). The “black box” nature of these models also complicates how such decisions can be challenged: they are either too complicated for the average individual to understand, or they are proprietary algorithms that companies refuse to explain.
This changes under the General Data Protection Regulation (GDPR). Under Article 22 GDPR (an overview of the conditions for automated decision-making and its corresponding data subject rights and controller obligations is given here), read together with the controllers’ transparency obligations under Articles 12-15, data subjects must be provided with meaningful information about the logic involved in automated decision-making; in other words, they have a “right to explanation”.
This right is great in theory, but little guidance is provided on what it means or what is required to explain an algorithm’s decision. An example can be found in machine learning: these algorithms are trained by being fed large amounts of data and being left to independently (in varying degrees) find significant correlations. The criticism is that these correlations represent a probability that things will turn out similarly in the future, without providing an explanation of why this should be the case. Bryce Goodman, a researcher at the Oxford Internet Institute, argued that machine learning algorithms “just discover things by trial and error” and that often “there’s nothing literally ‘intelligent’ in the way they take decisions. This makes it basically impossible to explain their choices in human terms.”
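To see what “finding correlations” looks like in its simplest form, here is a minimal sketch in pure Python. The data and variable names are entirely hypothetical; the point is that the fitted model can extrapolate the observed pattern to new cases, but nothing in its parameters says *why* the relationship holds.

```python
# Hypothetical data, for illustration only.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]        # e.g. years of experience
ys = [30.0, 35.0, 41.0, 44.0, 50.0]   # e.g. salary offered (thousands)

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least squares: find the slope and intercept that
# minimize the squared prediction error on the observed data.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def predict(x):
    """Extrapolate the learned correlation to an unseen case."""
    return intercept + slope * x

print(round(predict(6.0), 1))  # → 54.7
# The model outputs a number, but `slope` and `intercept` carry no
# account of why the correlation exists or whether it will persist.
```

The fit is obtained purely by minimizing error, which is precisely the “trial and error” Goodman describes: the procedure rewards whatever reduces the error, not whatever reflects a causal mechanism.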
So how does one begin to explain the logic underlying an automated decision? While the explanation must remain data subject-friendly in line with the transparency guidelines, there is no way around getting technical. Start with a technical description of the model used and the data it was trained on. The model concerns the kind of automated decision-making: whether it is a neural net, a support vector machine, or involves ensemble methods (like random forests). Each model has its own level of opacity, which informs the most effective way to explain the underlying logic. It will also be necessary to discuss with your technical experts what kind of data is used, paying attention to questions such as which categories of personal data are processed and how they were collected.
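The outcome of that discussion with technical experts can be captured in a structured description that feeds into the privacy notice. The sketch below is one possible shape for such a record; every field name and value is hypothetical, invented to illustrate the idea rather than taken from any real system or legal template.

```python
# Hypothetical structured description of a model and its training data,
# of the kind a controller might assemble before drafting an explanation.
model_description = {
    "purpose": "initial screening of loan applications",
    "model_family": "random forest (an ensemble of decision trees)",
    "opacity": "medium: individual trees are readable, the full ensemble is not",
    "training_data": {
        "categories_of_personal_data": ["income", "employment history", "postcode"],
        "collection": "application forms submitted by the data subject",
    },
}

def summarize(desc):
    """Render a short, plain-language summary for a data subject."""
    data = ", ".join(desc["training_data"]["categories_of_personal_data"])
    return (f"This decision uses a {desc['model_family']} "
            f"trained on the following data: {data}. "
            f"Collected via: {desc['training_data']['collection']}.")

print(summarize(model_description))
```

Keeping the technical facts in one structured place makes it easier to regenerate the data subject-facing summary whenever the model or its training data changes.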
At a minimum, an explanation should provide an account of how input features relate to predictions, allowing an individual to answer questions like: Is the model more or less likely to identify a higher risk if the applicant is a minority? Or, which features play the biggest role in prediction?
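One common technique for answering the “which features matter most” question is permutation importance: shuffle one feature across the input cases and measure how much the model’s outputs move. The sketch below applies it to a toy linear risk model; the feature names, weights, and applicant data are all hypothetical, and a real deployment would use far more data and a proper evaluation set.

```python
import random

# Hypothetical linear risk model: weights invented purely for illustration.
weights = {"income": -0.6, "debt_ratio": 1.2, "late_payments": 0.9}

def risk_score(applicant):
    return sum(weights[f] * applicant[f] for f in weights)

# A small hypothetical batch of applicants (features pre-normalized).
applicants = [
    {"income": 1.0, "debt_ratio": 0.2, "late_payments": 0.0},
    {"income": 0.3, "debt_ratio": 0.8, "late_payments": 0.5},
    {"income": 0.7, "debt_ratio": 0.5, "late_payments": 0.2},
]

def permutation_importance(feature, rng):
    """Average change in score when one feature is shuffled across cases."""
    baseline = [risk_score(a) for a in applicants]
    shuffled = [a[feature] for a in applicants]
    rng.shuffle(shuffled)
    total = 0.0
    for a, value, base in zip(applicants, shuffled, baseline):
        perturbed = dict(a, **{feature: value})
        total += abs(risk_score(perturbed) - base)
    return total / len(applicants)

rng = random.Random(0)  # fixed seed so the sketch is reproducible
for feature in weights:
    print(feature, round(permutation_importance(feature, rng), 3))
```

A feature whose shuffling barely changes the scores plays little role in the prediction; a feature whose shuffling swings them widely is one the explanation must address, including whether it acts as a proxy for a protected characteristic.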