
Algorithms in public administration

We are witnessing a rapidly growing role for algorithms, both in civil-law relations (like customers in relation to Netflix) and in public administration relations, where we have citizens on one side and the (mostly) executive branch of government on the other. When algorithms use personal data for their automated calculations (in the Netherlands we even profile our dykes) and result in a decision which produces legal effects concerning the data subject or similarly significantly affects him or her, the framework is provided by the new Article 22 of the GDPR, 'Automated individual decision-making, including profiling'.

Article 22 GDPR
In this figure I've tried to show how the three elements of Article 22 GDPR are related. Only the overlapping parts on the left fall under Article 22 GDPR, including a part of the profiles.

It is very interesting that we now see a clarification we missed in Directive 95/46/EC: the notion that profiling and automated decision-making may look similar but are in fact two different activities. They have different characteristics and will bring different risks to data subjects/citizens. Profiling can result in automated decision-making, but automated decision-making doesn't need to be based on profiles.

This blog post is about these differences. Based on case studies I performed for my thesis on effective remedies against automated decisions in tax and social security, I have summed up some of these differences. Please feel free to submit your feedback.

Autonomous handling systems, also known as automated decision-making

Automated decision-making has been used in public administration for ages. Jon Bing recalled a German example of computer-conscious lawmaking in 1958 (Bing 2005: 203). In the Netherlands the tax administration still partly 'runs' on programs built in the seventies. Researchers call these systems 'autonomous handling systems'. They not only produce the legal decision; they execute it as well, for instance by transferring money from the government to the citizen. Some of these systems appear to be 'expert systems', but in the Netherlands we have a very simple yet effective way of issuing automated administrative fines to drivers of cars that didn't get their technical checks in time, just by matching two databases.
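That database-matching routine can be sketched in a few lines. This is a minimal, hypothetical illustration: the plates, owners and data structures are invented and do not reflect the actual Dutch systems.

```python
# Hypothetical sketch of issuing automated fines by matching two databases.
# All data and names are invented, not the actual Dutch registries.

registered_vehicles = {
    "AB-12-CD": "Alice",
    "EF-34-GH": "Bob",
    "IJ-56-KL": "Carol",
}

# Plates of cars that passed their periodic technical check in time
inspected_in_time = {"AB-12-CD", "IJ-56-KL"}

# Every registered car without a timely inspection gets a fine: the whole
# "decision" is nothing more than a set difference between two databases.
fines = {
    plate: owner
    for plate, owner in registered_vehicles.items()
    if plate not in inspected_in_time
}

print(fines)  # {'EF-34-GH': 'Bob'}
```

No expert system is needed: the legal norm ("no timely inspection means a fine") maps directly onto a comparison of two record sets.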

These automated decisions run on personal data and algorithms. The law (publicly debated, the result of a democratic process, and made transparent and accessible to everybody) has to be transformed into a language the computer understands: the algorithms (Wiese Schartum 2016: 16). Many issues arise when we study automated mass administrative decision systems, such as: who has the authority to interpret the law? (Bovens & Zouridis 2002) Is the source code made public? What is the legal status of this quasi-legislation? (Wiese Schartum 2016: 19) How can an individual object to these decisions? What if the computer decisions have a very high error rate? (as appears to be the case right now in Australia with Centrelink's Online Compliance Intervention system)
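To make that translation problem concrete, here is a hedged sketch of how a statutory rule might be turned into code. The rule, threshold and amounts are invented, not actual Dutch law; the point is how much interpretation ends up hard-coded by the programmer.

```python
# Invented example rule, not actual legislation: "a household with an
# annual income below EUR 20,000 is entitled to a benefit of EUR 1,200".

INCOME_THRESHOLD = 20_000.0  # interpretation choice: gross or net income?
BENEFIT = 1_200.0

def decide_benefit(annual_income: float) -> float:
    """Return the benefit amount for a given annual income.

    Even this one-line rule forces choices the statute may leave open:
    which income concept applies, how to round, and what happens at
    exactly EUR 20,000 all get fixed silently in the code.
    """
    if annual_income < INCOME_THRESHOLD:  # '<' or '<='? The code must pick one.
        return BENEFIT
    return 0.0

print(decide_benefit(19_500.0))  # 1200.0
print(decide_benefit(20_000.0))  # 0.0, the boundary case the law may not settle
```

Whoever writes that `<` instead of `<=` is, in effect, interpreting the law, which is exactly the authority question Bovens & Zouridis raise.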

Automated profiling

Profiling, on the other hand (if it is done by a computer, since we happen to profile manually all our lives), is not as old. This makes sense, because in order to automate profiling, very large databases are needed, as well as data processors that work at very high speed. Unlike automated decision-making, profiling in public administration is only at the beginning of its technological capacities. Just as in automated decision-making, profiling (in Article 22 GDPR) runs on personal data and algorithms. The difference is that profiling is used to predict the behaviour of citizens and to single out those who behave differently from the main group. Based on data referring to their behaviour, health, economic situation and so on, the algorithm tries to bring some kind of order to the mass. The results can serve internal administrative workflow, like 'which tax deductions might be a risk to accept and are to be reviewed by a human?' or 'which kinds of citizens are likely to apply for benefits?', and remain hardly visible to citizens.
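That singling-out logic can be illustrated with a small, invented example: score each case by how far it deviates from the group and flag the outliers for human review. The data and the threshold are made up.

```python
# Invented sketch of profiling as 'singling out': flag the cases that
# deviate most from the main group. Data and threshold are made up.
from statistics import mean, stdev

claimed_deductions = {
    "citizen_a": 1_000,
    "citizen_b": 1_100,
    "citizen_c": 950,
    "citizen_d": 1_050,
    "citizen_e": 9_000,   # behaves differently from the main group
}

avg = mean(claimed_deductions.values())
sd = stdev(claimed_deductions.values())

# Anyone more than 1.5 standard deviations above the mean is routed to a
# human reviewer; the computer itself decides nothing about the claim.
flagged = [
    name for name, amount in claimed_deductions.items()
    if (amount - avg) / sd > 1.5
]

print(flagged)  # ['citizen_e']
```

Note that the flagged citizen may be perfectly honest; the profile only says they are statistically unusual, which is why such scores are a calculation of probability, not of entitlement.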

Diversity of citizens represented by colouring pencils. Categorising them and attributing data and math can make you decide which pencil you'll start with.
But in some cases the results can be more invasive, like legal decisions or surveillance activities (such as frisking) aimed only at those travellers the algorithm has chosen. In unstructured big data, a pattern is made visible by data-mining technologies, and this pattern is used to predict other similar cases. Think of Netflix, offering you logical choices out of a massive number of movies and documentaries. Some of the same issues arise as with automated decisions; some of them differ: how can we prevent bias in the algorithms? Who has the authority to ignore the predictions? Is transparency of algorithms helpful, or, to put it differently, how could I as a citizen ever understand the algorithm? What if the computer finds a relation based on inherent human characteristics like health, genetics or race?

Dutch proposal executing the GDPR

The difference has led the Dutch legislator to propose different measures when it comes down to algorithms in public administration. The proposal has been subject to a public internet consultation before it follows the procedural route.

Automated decision-making in public administration will be permitted (and the right of the data subject not to be subject to such a decision overruled) if it is necessary for compliance with a legal obligation, or necessary for the performance of a task carried out in the public interest, and if the controller has taken measures to safeguard the legitimate interests of the data subject. Data subjects also (still) have the right to ask that the logic of the program on which the decision is based be shared. Other safeguards for data subjects, like the right to ask for a review, are provided by the General Administrative Law Act. To my knowledge, the equivalent safeguards in Directive 95/46/EC have never been invoked in disputes concerning automated decisions in administrative case law.

On automated profiling, the Dutch legislator seems to stick to the rules of Article 22 GDPR, meaning that the right of the data subject not to be subject to a decision based on automated profiling (which produces legal effects concerning him or her or similarly significantly affects him or her) doesn't apply if the decision is authorised by Dutch law. So far, we seem to have one profiling project in public administration with a legal basis: SyRI. More info in English in this publication.

Other profiling initiatives in public administration seem, until today, not to lead to legal decisions. They are used to see which citizens or applications have to be checked by humans. It is unlikely that this kind of profiling matches the definition in Article 22 GDPR. If the proposed Dutch GDPR implementation act is accepted, automated profiles that do lead to invasive consequences but not to legal decisions (as is the case in border controls or police monitoring) will need a legal basis in law.

It will be interesting to see whether some anti-fraud measures based on automated profiles 'similarly significantly affect' the data subject in the way legal effects do. Think of diverting a digital benefit application into a face-to-face procedure because the data subject fits the profile of a fraudster, or of offering green lanes to some travellers and denying them to others. Does having to prove your existence and show your papers at a desk significantly affect a person (similarly to legal effects), or doesn't it? Based on the polls I took while teaching, at least most civil servants don't consider this significantly affecting.

Why is the difference important?

It is important to stay alert to these differences when we discuss Article 22 GDPR. Some of the problems will seem to be the same: incorrect decisions, unknown algorithms, people who don't fit the predesigned categories and seek individual fairness (Einzelfallgerechtigkeit). But it would be a mistake to treat the two activities the same. Profiling is a calculation of probability, and fundamentally different from a calculation over my administrative data with variables explicitly mentioned in law.

Profiling not only uses more (big) data; the data are more decontextualised, they contain data we give away unconsciously (like using or not using an app the government provides, or changing your online application a few times before sending it), and they only consider the past to predict the future. In light of the GDPR, where a lot of exemptions are made for research and analysing personal data, much will depend on how governments use profiles. Accountability for these profiles is not only important in relation to data protection; it will raise all kinds of fundamental rights questions.

To conclude this interesting matter, which I could talk about for days, I've come up with this short and simple chart:

Aspect | Automated decision-making | Automated profiling
------ | ------------------------- | -------------------
Function | Engine of the automated bureaucracy | Accessory of the bureaucracy
Data | Structured personal data | Unstructured personal data
Operation | Calculation | Calculation of probability
Algorithms | Written by humans, based on law | Self-learning, or written by humans based on mathematical correlations
Variables | Administrative data as variables in algorithms based on law | Big data on all kinds of aspects, including behaviour
Aim | Aimed at a specific citizen, based on his/her administrative data (like age, income, etc.) | Based on data of people in similar circumstances, aimed at singling out
Officially in the Netherlands since | 1970s (social security, motor car tax) | 2014 (SyRI, social security fraud detection)
Rules | Business rules | Risk rules / self-learning

Bovens & Zouridis 2002

M. Bovens & S. Zouridis, 'From Street-Level to System-Level Bureaucracies: How Information and Communication Technology is Transforming Administrative Discretion and Constitutional Control', Public Administration Review, 2002, vol. 62, no. 2.

Bing 2005

Jon Bing, 'Code, Access and Control', in M. Klang & A. Murray (eds), Human Rights in the Digital Age, Cavendish Publishing Ltd 2005.

Wiese Schartum 2016

Dag Wiese Schartum, 'Law and algorithms in the public domain', Etikk i praksis. Nordic Journal of Applied Ethics, 2016, 10(1). DOI: http://dx.doi.org/10.5324/eip.v10i1.1973


ReNEUAL Model Rules on Information Management: one step forward, ten steps back?

National administrative law is increasingly influenced by European administrative law. This is a very interesting evolution. Although societies, cultures and habits differ, many problems will be much the same in every country. Governments and their civil servants will experience similar demands from their citizens on service levels and accessibility.

So it makes sense to take the chance to cross the border, look for solutions you hadn't thought of before, and share them. Not only does this provide you with new perspectives; it is this particular social capacity, as scientists have examined, that enabled humans to develop as we did in comparison to primates.

Learning from other solutions or legal systems is not only essential, it's also very inspirational. I am inspired a lot by European documents such as those on Good Administration and Convention 108 on data protection.

All the more reason to take an enthusiastic look at the ReNEUAL project. ReNEUAL is the Research Network on EU Administrative Law. The objective of ReNEUAL is as necessary as it is ambitious:

ReNEUAL addresses the potential and the substantial need for simplification of EU administrative law, as the body of rules and principles governing the implementation of EU policies by EU institutions and Member States

However, I found myself not very fond of the ReNEUAL model rules on Information Management (Book VI).

Here’s why:

Decision-making is a data processing process.

This was already the conclusion of the late pioneer Jon Bing in 1977. A pioneer in ABBA's decade, but in 2015 we simply can't afford to ignore this important conclusion: we can't study decision-making in isolation from data processing. At least not if we pretend to offer solutions for the contemporary execution of administrative laws.

If we study it in isolation, we will overlook essential ethical and legal issues. And guess what? That is exactly what the ReNEUAL model rules do. In these 'books' (mind the word book!), information management is separated not only from rule-making (Book II) but also from single-case decision-making (Book III).

On rule-making, it has already been observed that there are many examples of regulations made in close relation to state-of-the-art technology. Automatic fines for speeding are such an example. The customs laws in the EU are designed to facilitate the technological possibilities of data processing. So rule-making and information management could be studied as one phenomenon.

But of greater concern is the neglect of the pair 'decision-making' and 'information management', because it will hinder us from making a framework that clarifies responsibilities, duties and rights for citizens.

It's like building a house: the material that is available or affordable is fundamental to the result. You can't build a real house only by studying its design. You can study the design, but that is not the same as building it, or, what I think administrative lawyers do, studying the building. I see ReNEUAL Book VI as studying bricks, while model rules would be more helpful and necessary if they had been drawn up by studying buildings together with all the materials used.

When not to wear retro glasses?

When Madonna was making her famous boat trip in Venice, IT lawyers used to talk about databases. That was what Convention 108 was all about. The idea back then was that we had to protect personal data by regulating databases. But in 2015 it is highly unlikely that EU administrations only use databases.

Au contraire, I would state that the cooperation between the EU member states and institutions would not be as efficient as it is today if they still used only databases! It is an essential part of e-government that we can transfer data, link data and use data, independent of the time and location of the data or its users.

There are many examples, but it helps to think of projects like Frontex (http://frontex.europa.eu/) or a far simpler one, VIES (the VAT Information Exchange System). Or one could start by reading the publication 'The Migration Machine', in which the automation of the immigration process is observed and explained: http://www.academia.edu/2740064/The_Migration_Machine

Applications, systems and connections like these made scientists in the Netherlands talk about iGovernment instead of e-Government. In an influential milestone publication, Corien Prins and others coined the word iGovernment, with the i of Information,

'to highlight the actual existence of a reality which is entirely different from the reality which currently figures on the political and administrative radar'.

It is more than a pity that ReNEUAL didn't use this perspective to come up with model rules. By working from a retro version of daily practice, lawyers will be cast aside for their unhelpful assistance. It will make it very easy for administrations to ignore them completely. If so, we have failed in our responsibility to facilitate fairness in the relation between Government and Citizen.


Which brings me to my final point, and the moment to mention one very painful question Richard Susskind asked lawyers:

To what problem are you the solution?

Lawyers, and certainly administrative lawyers, are not the world's most tech-gifted people. I am one of them, so I feel entirely entitled to state this. But even if we were, our societies are too complex to look at, and try to regulate, from one discipline.

The digital identity of Estonia, for instance, offers us a radical change in citizenship. Or, even closer to home in the EU, take the free movement of persons: the movement may be physically very easy, but in practice your social security issues will be very difficult to handle alone: https://www.youtube.com/watch?v=r7ntV0P1A98

That is why all lawyers should broaden their view and connect with other fields of science, like IT law, philosophy and public administration/organisational science. It will make the outcome more relevant and the process more open-minded. So, my dear fellow administrative lawyers: start thinking like Madonna; reinvent yourselves! And work together with others, as Madonna does on every new album.


Richard Susskind challenges Dutch lawyers

Why are lawyers so irrational in rejecting technology? A very good question asked by Susskind today in Rotterdam. Lawyers should understand that their work will no longer be needed if they don't change the way they work: concentrate, for instance, on risk management instead of focusing on disputes. I hear Jon Bing's words in this speech and am very pleased by it. http://www.scandinavianlaw.se/pdf/49-20.pdf

It would be useful if Dutch lawmakers tried to understand the meaning of this, especially the part on what the law should say when technical failures prevent people from sending in their documents or opinions on time. We will have such failures. All of us. Why do you even want to know whether the technical failure is to be blamed on the person, the provider or the server of the courts? Why not try to prevent new disputes by building tolerance for technical failures into the law? Do you want justice on the legal merits of the case, or do you want to invent new legal thresholds? And for what reason?

Very disturbing to see that the audience doesn't know what or who Watson is! They probably think it is about Sherlock Holmes! It makes me wonder: should we tell them they can use a magnificent search tool to find out what Watson is?