Algorithms in public administration

Algorithms are playing a rapidly growing role, both in civil law relations (think of a customer's relation to Netflix) and in public administration relations, with citizens on one side and the (mostly) executive branch of government on the other. When algorithms use personal data for their automated calculations (in the Netherlands we even profile our dykes), and the result is a decision which produces legal effects concerning the data subject or similarly significantly affects him or her, the framework is provided by the new Article 22 of the GDPR, ‘Automated individual decision-making, including profiling’.

[Figure] In this figure I've tried to show how the three elements of Article 22 GDPR are related. Only the overlapping parts on the left fall under Article 22 GDPR, including a part of the profiles.

It's very interesting that we now see a clarification that was missing in Directive 95/46/EC: the notion that profiling and automated decision-making may look similar but are in fact two different activities. They have different characteristics and bring different risks to data subjects/citizens. Profiling can result in automated decision-making, but automated decision-making doesn't need to be based on profiles.

This blog post is about these differences. Based on case studies I performed for my thesis on Effective Remedies Against Automated Decisions in Tax and Social Security, I have summed up some of them below. Please feel free to submit your feedback.

Autonomous handling systems, known as automated decision-making

Automated decision-making has been used in public administration for ages. Jon Bing recalled a German example of computer-conscious lawmaking in 1958 (Bing 2005: 203). In the Netherlands the tax administration still partly ‘runs’ on programs built in the 1970s. Researchers call these systems ‘autonomous handling systems’. They not only produce the legal decisions; they execute them as well, for instance by transferring money from the government to the citizen. Some of these systems qualify as ‘expert systems’, but in the Netherlands we also have a very simple yet effective way of issuing automated administrative fines to owners of cars that didn't get their technical checks in time, just by matching two databases.
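To make that database matching concrete, here is a minimal, purely illustrative Python sketch; the registers, field names, plates and dates are invented and do not reflect the actual Dutch systems.

```python
# Hypothetical sketch: issuing automated fines by matching two registers.
# Register contents, field names and dates are invented for illustration.

registered_vehicles = {
    "AB-123-C": {"owner": "citizen_001", "inspection_due": "2017-01-31"},
    "XY-987-Z": {"owner": "citizen_002", "inspection_due": "2017-03-15"},
}
completed_inspections = {"XY-987-Z"}  # plates with a valid technical check on file

def overdue_fines(today: str) -> list:
    """Fine every registered vehicle without a check after its due date."""
    fines = []
    for plate, record in registered_vehicles.items():
        if plate not in completed_inspections and record["inspection_due"] < today:
            fines.append({"owner": record["owner"], "plate": plate,
                          "reason": "no valid technical check"})
    return fines

print(overdue_fines("2017-06-01"))
# [{'owner': 'citizen_001', 'plate': 'AB-123-C', 'reason': 'no valid technical check'}]
```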

These automated decisions run on personal data and algorithms. The law, publicly debated, the result of a democratic process, and made transparent and accessible to everybody, has to be transformed into a language the computer understands: the algorithms (Wiese Schartum 2016: 16). Many issues arise when we study automated mass administrative decision systems: who has the authority to interpret the law? (Bovens & Zouridis 2002) Is the source code made public? What is the legal status of this quasi-legislation? (Wiese Schartum 2016: 19) How can an individual object to these decisions? What if the computer decisions have a very high error rate, as appears to be the case right now in Australia with Centrelink's Online Compliance Intervention system?
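To illustrate what that transformation looks like, here is a tiny hypothetical sketch: a fictional statutory rule rewritten as code. The rule, the threshold and the function are all invented; the point is that every interpretive choice (what counts as ‘income’?) ends up hard-coded by a programmer.

```python
# Fictional statutory rule: "a household is entitled to the benefit if its
# income is below EUR 20,000 and no member already receives it."
# Threshold, function and the notion of 'income' are invented; in practice
# each of these is an interpretive choice made by whoever writes the code.

def entitled(household_income: float, already_receiving: bool) -> bool:
    INCOME_THRESHOLD = 20_000.00  # the (fictional) statutory figure, hard-coded
    return household_income < INCOME_THRESHOLD and not already_receiving

print(entitled(18_500.00, already_receiving=False))  # True
print(entitled(21_000.00, already_receiving=False))  # False
```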

Automated profiling

Profiling, on the other hand (at least when done by a computer, since we profile manually all our lives), is not as old. This makes sense: to automate profiling, you need very large databases and data processors that work at very high speed. Unlike automated decision-making, profiling in public administration is only at the beginning of its technological capacities. Just like automated decision-making, profiling (in Article 22 GDPR) runs on personal data and algorithms. The difference is that profiling is used to predict the behavior of citizens and to single out those who behave differently from the main group. Based on data referring to their behavior, health, economic situation and so on, the algorithm tries to bring some kind of order to the mass. The results can serve internal administrative workflows (‘which tax deductions might be a risk to accept and should be reviewed by a human?’, ‘which kinds of citizens are likely to apply for benefits?’), hardly visible to citizens.
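As a purely hypothetical illustration of this kind of risk selection, the sketch below flags the claims that deviate most from the group average. The figures and the threshold are invented, and a crude z-score stands in for the far richer risk rules real administrations use.

```python
# Invented figures: flag tax deduction claims that deviate strongly
# from the group average (a crude z-score; real risk rules are richer).
from statistics import mean, stdev

claimed_deductions = {
    "citizen_001": 1200, "citizen_002": 1350, "citizen_003": 1100,
    "citizen_004": 1250, "citizen_005": 1300,
    "citizen_006": 9800,  # behaves differently from the main group
}

def flag_for_review(claims, z_threshold=1.5):
    """Return the citizens whose claim lies far from the group mean."""
    values = list(claims.values())
    mu, sigma = mean(values), stdev(values)
    return [who for who, v in claims.items() if abs(v - mu) / sigma > z_threshold]

print(flag_for_review(claimed_deductions))  # ['citizen_006']
```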

[Image] Diversity of citizens represented by colored pencils. Categorizing them, attributing data, and math can make you decide which pencil you'll start with.
But in some cases the results can be more invasive, like legal decisions or surveillance activities (such as frisking) aimed only at those travelers the algorithm has selected. In unstructured big data, datamining technologies make a pattern visible, and this pattern is used to predict other, similar cases. Think of Netflix, offering you logical choices out of a massive number of movies and documentaries. Some of the same issues arise as with automated decisions; others differ: how can we keep bias out of the algorithms? Who has the authority to ignore the predictions? Is transparency of algorithms helpful, or, to put it differently, how could I as a citizen ever understand the algorithm? What if the computer finds a relation based on inherent human characteristics like health, genetics or race?
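A toy sketch of such pattern-based prediction, with invented features and outcomes: a new case is scored by the known outcomes of the most similar past cases (a nearest-neighbour approach, just one of many possible techniques).

```python
# Toy pattern-based prediction: score a new case by the known outcomes of
# its most similar past cases. Features and outcomes are invented.

past_cases = [
    # ((edits, anomaly score), outcome: 1 = irregularity found, 0 = none)
    ((2, 0.9), 1),
    ((1, 0.8), 1),
    ((0, 0.1), 0),
    ((0, 0.2), 0),
]

def predict(new_case, k=3):
    """Average outcome of the k nearest past cases (Euclidean distance)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(past_cases, key=lambda c: distance(c[0], new_case))[:k]
    return sum(outcome for _, outcome in nearest) / k

print(predict((1, 0.7)))  # ~0.67: a probability-like score about a new case
```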

Dutch proposal implementing the GDPR

The difference has led the Dutch legislator to propose different measures when it comes to algorithms in public administration. The proposal has been subject to a public internet consultation before it follows the rest of the procedural route.

Automated decision-making in public administration will be permitted (and the data subject's right not to be subject to such a decision set aside) if it is necessary for compliance with a legal obligation, or necessary for the performance of a task carried out in the public interest, and measures have been taken (by the controller) to safeguard the legitimate interests of the data subject. Data subjects also (still) have the right to ask that the logic of the program on which the decision is based be shared. Other safeguards for data subjects, like the right to ask for a review, are provided by the General Administrative Law Act. To my knowledge, the previous safeguards in Directive 95/46 have never been invoked in disputes concerning automated decisions in administrative case law.

On automated profiling the Dutch legislator seems to stick to the rules of Article 22 GDPR, meaning that the right of the data subject not to be subject to a decision based on automated profiling (which produces legal effects concerning him or her or similarly significantly affects him or her) doesn't apply if the decision is authorized by Dutch law. So far, we seem to have one profiling project in public administration with a legal basis: SyRI. More info in English in this publication.

Other profiling initiatives in public administration seem, until today, not to lead to legal decisions. They are used to see which citizens or applications have to be checked by humans. It's unlikely that this kind of profiling matches the definition in Article 22 GDPR. If the proposed Dutch GDPR implementation act is accepted, automated profiles that do lead to invasive consequences but not to legal decisions, as is the case in border controls or police monitoring, will need a legal basis in law.

It will be interesting to see whether some anti-fraud measures based on automated profiles ‘similarly significantly affect’ the data subject in the way legal effects do. Think of diverting a digital benefit application into a face-to-face procedure because the data subject fits the profile of a fraudster, or of offering green lanes to some and denying them to others. Does having to prove your existence and show your papers at a desk affect a person significantly (similarly to legal effects), or doesn't it? Based on the polls I took while teaching, at least most civil servants don't consider this significantly affecting.

Why is the difference important?

It is important to stay alert to these differences when we discuss Article 22 GDPR. Some of the problems will seem the same (incorrect decisions, unknown algorithms, people who don't fit the predesigned categories and seek individual fairness, Einzelfallgerechtigkeit), but it would be a mistake to treat the two activities the same. Profiling is a calculation of probability, fundamentally different from a calculation over my administrative data with variables explicitly mentioned in law.

Profiling not only uses more (big) data; the data are more decontextualized, they include data we give away unconsciously (like using or not using an app the government provides, or changing your online application a few times before sending it), and they only consider the past to predict the future. In light of the GDPR, where many exemptions are made for research and for analyzing personal data, much will depend on how governments use profiles. Accountability for these profiles is not only important in relation to data protection; it will raise all kinds of fundamental rights questions.
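The contrast can be made concrete with two invented miniature examples: the first computes an amount from variables named in a statute, the second computes a probability-like score from weights learned from other people's past behavior. Both functions, their figures and their weights are hypothetical.

```python
# 1. Automated decision: the amount follows from variables named in a statute.
#    The tariff figures are fictional stand-ins for a statutory table.
def motor_vehicle_tax(weight_kg: int) -> float:
    return 100.0 + 0.25 * max(0, weight_kg - 1000)

# 2. Automated profile: a probability inferred from other people's past
#    behavior. The weights are invented, "learned" from data, not from law.
def fraud_risk(times_application_edited: int, uses_government_app: bool) -> float:
    score = 0.1 + 0.05 * times_application_edited
    score += 0.0 if uses_government_app else 0.2
    return min(score, 1.0)

print(motor_vehicle_tax(1200))                   # 150.0, a legally determined amount
print(fraud_risk(3, uses_government_app=False))  # ~0.45, an estimate about a person
```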

To conclude this interesting matter, which I could talk about for days, I've come up with this short and simple chart:

| | Automated decision-making | Automated profiling |
| --- | --- | --- |
| Function | Engine of auto-bureaucracy | Accessory of bureaucracy |
| Input | Structured personal data | Unstructured personal data |
| Output | Calculation | Calculation of probability |
| Algorithms | Written by humans, based on law | Self-learning, or written by humans based on mathematical correlations |
| Data | Administrative data as variables in algorithms based on law | Big data on all kinds of aspects, including behavior |
| Decision | Aimed at a specific citizen, based on his/her administrative data (like age, income, etc.) | Based on data of people in similar circumstances, aimed at singling out |
| Officially in the Netherlands since | 1970s (social security, motor car tax) | 2014 (SyRI, social security fraud detection) |
| Jargon | Business rules | Risk rules / self-learning |

Literature

Bovens & Zouridis 2002

M. Bovens & S. Zouridis, ‘From Street-Level to System-Level Bureaucracies: How Information and Communication Technology is Transforming Administrative Discretion and Constitutional Control’, Public Administration Review, 2002, vol. 62, no. 2.

Bing 2005

Jon Bing, ‘Code, Access and Control’, in M. Klang & A. Murray (eds), Human Rights in the Digital Age, Cavendish Publishing Ltd 2005.

Wiese Schartum 2016

Dag Wiese Schartum, ‘Law and algorithms in the public domain’, Etikk i praksis. Nordic Journal of Applied Ethics, 2016, 10(1). DOI: http://dx.doi.org/10.5324/eip.v10i1.1973
