In 2024, the Dutch city of Amsterdam ran a pilot with a machine learning model to make decisions about whether welfare applications needed an extra manual check to make sure they weren’t fraudulent. The model was trained by the city itself using past welfare applications to learn from. The city did a debiasing analysis, chose an algorithm that has a form of ‘explainability’, tried to immunise its civil servants against automation bias, and asked a top-notch consultancy firm and even investigative journalists for advice. Thus, it applied all the state-of-the-art ‘Responsible AI’ tools. However, the city failed to ask whether it befits a well-ordered society to use machine learning-based technology to make these types of decisions, ignoring the larger political implications of its pilot (Guo, Geiger, and Braun 2025).
This article explores the political philosophical implications of public institutions using machine learning-based predictive models to make decisions about people. The opening section (1) demarcates the type of automated decision-making we will be talking about, namely ‘predictive optimisation’, and briefly examines the fairness debate in computer science that has followed the implementation of this technology in the real world. The following section (2) introduces (contemporary) republicanism as a lens for looking at technology. This political philosophy takes freedom from domination as its starting point. The section after that (3) shows that predictive profiling–making distinctions–is an unavoidable yet domineering activity of public institutions. It outlines the contemporary republican conditions for controlling this domineering power. The final section (4) lists three problems contemporary republicans have with predictive optimisation as a specific form of predictive profiling: there is no common knowledge of the reasoning behind the decisions, it creates a democratic deficit through depoliticising the decision-making, and it doesn’t allow for meaningful contestation.
The article concludes that if freedom from domination is seen as an important normative concept, public institutions should prefer rule-based over machine learning-based decision-making. One promising direction to accomplish this is the work done to use machine learning to create simple rule-based models with a similar predictive accuracy to the complex machine learning-based models.
Public institutions have been using computers in their decision-making for around sixty years now (Peters 2016). Previously, these computers used simple rule-based algorithms, with the input being processed in discrete steps that could easily be replicated by some bureaucratic clerk with enough time and could be explained to the persons affected by the decision. With the advent of machine learning1, a new type of algorithm has been added to the toolbox of public institutions. These algorithms, also known as models or trained classifiers, are statistics-based, ‘trained’ on historical data. They can apply the generalisations they’ve learned from historical circumstances to new circumstances they haven’t seen before.
One particular use case of this type of machine learning algorithm is to make predictions and then use these predictions to make decisions about individuals.2 Wang et al. (2024) call this form of profiling ‘predictive optimisation’. Examples include deciding whether to release a defendant pre-trial based on a pre-trial risk assessment or deciding whether to hire somebody based on their expected job performance.
These predictions can be wrong in two ways: they can falsely add somebody to a predictive category (“this welfare applicant is likely to commit fraud”, and then the person turns out not to commit fraud), or they can fail to add somebody to a category (“this welfare applicant is not likely to commit fraud”, and then they do commit fraud). Whether false positives or false negatives are more important to avoid depends on the domain and who you ask (Barocas, Hardt, and Narayanan 2023, chap. 4).
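To make the two error types concrete, the following minimal sketch (in Python, with invented labels and predictions) counts false positives and false negatives for a batch of hypothetical fraud predictions.

```python
# Minimal sketch: counting the two error types for a batch of fraud predictions.
# The data below is invented for illustration; 1 means "fraud", 0 means "no fraud".

actual    = [0, 0, 1, 0, 1, 0, 0, 1]   # what really happened
predicted = [1, 0, 1, 0, 0, 0, 1, 1]   # what the model flagged

false_positives = sum(1 for a, p in zip(actual, predicted) if p == 1 and a == 0)
false_negatives = sum(1 for a, p in zip(actual, predicted) if p == 0 and a == 1)

print(f"False positives (flagged, but no fraud): {false_positives}")   # 2
print(f"False negatives (not flagged, but fraud): {false_negatives}")  # 1
```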
Trying to avoid these types of mistakes, especially the discriminatory bias that can be inherent in them, has spawned a new field in computer science. Over the last few years, a whole apparatus of new concepts has been created to quantify the mistakes in these machine learning-based automated decisions. The main value this discourse tries to optimise for is fairness, approached from an egalitarian perspective on distributing harm, with Rawls sometimes described as its favourite political philosopher (Procaccia 2019). For example, some researchers are trying to operationalise Rawls’s “maximin” principle3 into code (Barsotti and Koçer 2024), others are examining whether it makes sense to apply his difference principle4 to individual situations (Franke 2024), and his ideas on distributive justice are used to give criteria for algorithmic fairness a form of normative substance (Barocas, Hardt, and Narayanan 2023, chap. 3).
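To give an impression of what such an operationalisation can look like, here is a minimal sketch of a maximin choice rule in Python; the candidate policies and utility numbers are hypothetical and are not taken from the works cited above.

```python
# Minimal sketch of a "maximin" choice rule: pick the option whose
# worst-off group fares best. The policies and utilities are hypothetical.

candidate_policies = {
    "policy_a": [5, 9, 12],   # utilities for three groups under policy A
    "policy_b": [7, 7, 8],
    "policy_c": [2, 15, 20],
}

# maximin: maximise the minimum utility across groups
best = max(candidate_policies, key=lambda name: min(candidate_policies[name]))
print(best)  # "policy_b": its worst-off group (7) fares better than A's (5) or C's (2)
```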
The major limitation of these related approaches to fairness is their focus on how decision-making affects the individual (albeit in relation to decisions about other individuals). The issue at stake in the fairness approach is finding out whether there is an unwarranted discriminatory bias against individuals belonging to particular (protected) groups.5 Put differently, and as shown in Amsterdam’s example in the introduction, a lot of thought is put into making sure the decisions themselves are right, and less thought is given to whether this is the right approach to decision-making in the first place. Due to this individual focus, the collective effects of the automated decisions, and the related political problems, get less attention. This article aims to shift focus by applying a contemporary republican lens to predictive optimisation by public institutions.
The classic liberal conception of freedom is freedom from interference (Mill 2015; Berlin 2002). Simply put, you are free if nobody prevents you from doing what you want to do. Republican philosophers consider this conception of freedom to be lacking something. For them, it isn’t just actual interference with your choices or actions that can lead to you not being free; the potential for interference can already be freedom-limiting.6 For a republican, true freedom requires an absence of the possibility of arbitrary or, more precisely, uncontrolled interference. In short, an absence of domination (Pettit 1997; Lovett 2022).7
Lovett describes domination and suitable control as follows (2022, 120):
Republicans hold that people are dominated to the extent that other persons or groups have an uncontrolled ability to frustrate their choices. Such abilities are suitably controlled, and thus not dominating, on the republican view, when those types of persons or groups that would exercise such abilities in spite of all standing constraints of public law, policy, and custom are ignorable.
A republican point of view instantly shifts the perspective from a focus on the contingency of free choice (can I do what I want to do?) to a structural relationship (is there somebody who can limit what I want to do, am I dependent on the will of another?). Freedom, in this sense, is less about free choice than about being a free person with equal status (Gädeke 2024).
Recently, contemporary republicanism has been used as a fruitful lens in political philosophy to address some of the consequences of our current digital predicament (Graf 2017; Hoye and Monaghan 2018; Susskind 2022; Sager 2023; Hoeksema 2023). Through the lens of freedom as non-domination, it is much easier to show why our relationship with big tech–with its uncontrolled power over our digital lives–limits our freedom than it is through the lens of freedom as non-interference. The current European focus on increasing ‘digital sovereignty’ is spurred on by our wish not to be dependent on the unreliable will of others. This article aims to extend this work by using contemporary republicanism as a lens to look at predictive optimisation and to resolve some of the practical doubt that comes with this type of technology.8
To be able to do that, we need to take a closer look at what constitutes uncontrolled power from a contemporary republican perspective. A public institution making decisions about people will always exercise a form of potentially dominating power. Ensuring that this exercise of power is controlled can roughly be done in two ways: through procedural controls (rule of law) and through democratic controls (popular control).
The rule of law protects us against the coercive force of others. Only with the help of law (and its enforcement) can we gain the expectation that others won’t be able to arbitrarily frustrate our choices. For this to be a well-functioning control on power, republicans think it is important that the laws are clear, consistent, published in advance, and don’t change too often.
Even under the rule of law, public officials can ignore the law or fail to enforce it. Just assuming that public officials will do the right thing offers insufficient protection against their potential for domination. There has to be a form of popular control, the ability to dismiss public officials and replace them with others. Most republicans would argue for a democratic form of control.
Together, these controls are constitutive of freedom. There is no freedom from domination without them.
The increased use of predictive optimisation has engendered much critique of algorithmic profiling in general. The many examples of predictive optimisation’s disparate impact on people, especially at the margins, are dire (see, for example, Eubanks 2018; Noble 2018; O’Neil 2016). Wrongful discrimination, or the making of spurious inferences, also existed before the algorithmic turn. Algorithms mainly automate and thus scale up the problems to the full population that is subject to them.
However, if a public institution has to make distributive or allocation decisions about people, then it is practically impossible to avoid a form of profiling. As Schauer (2006) makes very clear, if you need to make a decision about somebody (decide if they need the benefit or not, or if they are a danger to society or not), you cannot avoid making generalisations; you have to profile in some way, looking at particular characteristics of people to put them in a category. This profiling is unavoidable as long as there are fewer categories to put people into than there are people. Often, empirical data from the past is used to create these ‘profiles’, grouping people who are similar in some way.
An important rule for the legitimacy and fairness of public institutions is that they treat like cases alike in their decision-making. Schauer (2006) argues that in the real world, cases are never alike. For him, a bulwark against arbitrariness in decision-making is actually treating unlike cases alike. He understands that a simple rule applied to groups of people can feel very arbitrary. Imagine only getting benefits if you earn less than a certain amount and then earning just slightly more than this amount, effectively disqualifying you. This is, of course, unfortunate. Yet, a moment of thought makes you realise that you can’t avoid drawing a line somewhere and that it will always be difficult for people to accept the decision when they are on the wrong side but close to the line. At the same time, the advantage of an explicit line that is the same for everybody, so in the form of a clear rule, is that it can provide much clarity about the decision, especially if it is based on some salient empirical facts. Another advantage is that an explicit line allows for contestability.
Predictive profiling by public institutions is unavoidable, but it is also inherently domineering. Firstly, prediction takes away options, and secondly, predictive profiling creates an asymmetry of power. I will expand on both of these in turn.
Anybody who creates predictions about others and has the ability to act on these predictions in relation to these others effectively limits their options. Prediction looks at a subset of possible futures and limits the options for acting to that particular subset. Being put in a category by a prediction forecloses options. Because many predictions lead to actions that influence the outcomes, you could even argue that predictions occasionally create (in the self-fulfilling sense) or eliminate (through preventative measures) the futures they are predicting (Khosrowi, Ahlers, and Basshuysen 2025). Prediction interferes with the future.
Predictive profiling creates a profiler and a profiled, with a power imbalance between them. The profiler is in a position to dominate. Hong (2023, 1) argues that prediction is “not primarily a technological means for knowing future outcomes, but a social model for extracting and concentrating discretionary power.” Prediction, in Hong’s sense, governs the allocation of who has the ability to define and decide how things go. Making the future predictable from the perspective of a public authority can often create a sense of unpredictability for those whose future is being predicted. Amsterdam’s predictive profiling pilot did exactly that. The city had all the data and access to the predictive model, while from the perspective of the citizens of Amsterdam, their welfare applications turned into a tombola.
It is now clear that predictive profiling is unavoidable when public institutions make decisions about people, that it inevitably involves discretionary choices, and that it is inherently domineering. It is, therefore, important to control that power, both procedurally and democratically.
Looking back at the contemporary republican demands to ensure freedom in section 2, we can briefly outline some minimal conditions for controlling the domineering power inherent in the predictive profiling of public institutions.
Firstly, from the perspective of a rule of law, it is important that the rules used to do the profiling are clear, that there is common knowledge about the rules and their application, and thus that the rules don’t change too often.
Secondly, there should be a form of popular control over the profiling rules by the people to whom the rules are applied. We can assume a reasonable level of pluralism in society (Rawls 2005, 54–57), and so important decisions about profiling, for example which individual characteristics it is permissible to profile on, need to be settled in the political arena, not through fully technocratic means. The optimisation logic from the technology domain can’t just be applied to the socio-political domain without a proper democratic justification.
Finally, if all else fails, there needs to be a form of responsive control on the profiling. It should be possible to meaningfully contest a profiling decision by a public institution if it seems that the profiling is spurious or incorrect.
Now that we have broadly outlined the contemporary republican conditions for controlled power in the context of predictive profiling, we can check whether predictive optimisation, a form of machine learning-based decision-making, fulfils these conditions. It doesn’t. Three problems with predictive optimisation undermine political freedom: it is impossible for common knowledge about the reasoning behind the decisions to exist, it is a technocratic form of governance that depoliticises public decision-making, and it doesn’t allow for a meaningful form of contestation.
It is important to realise that the machine learning models used for predictive optimisation differ in at least one important respect from regular rule-based algorithmic models: how the former work is fundamentally unknowable.
This opacity of machine learning models is not due to organisational secrecy or a lack of technical understanding by the people looking at the models. According to Burrell (2016), it is an opacity that arises from “the characteristics of machine learning algorithms and the scale required to apply them usefully.” Calls or even regulations for more transparency won’t help solve the problem in the case of these particular types of algorithmic systems. Burrell (2016) writes: “When a computer learns and consequently builds its own representation of a classification decision, it does so without regard for human comprehension.”
An analysis by Lighthouse Reports of a machine learning system that CNAF, the agency responsible for France’s social security system, uses to try to predict fraud makes this incomprehensibility starkly clear. One of the system’s input variables is the number of months since an applicant last sent an email to CNAF. Romain et al. (2023) found that this variable affects the risk score in three different ways:
If it has been less than two months since the beneficiary has sent an email to the CNAF in the last 18 months, their risk score moves down. If the last email they sent was between 3 and 4 months ago, their score moves up. The minute it has been 5 months, instead of 4 months, since they last sent an email their score again decreases.
We cannot understand these apparent statistical regularities with our human capacities. We apply machine learning to predictive profiling precisely so that the model can recognise patterns (or statistical regularities) in the historical data that we as humans are not able to see ourselves. According to Amoore (2023, 26), machine learning models exist “to generate outputs that are in excess of the formulation of rules, something not determinable in advance.” This also means that a model can’t give reasons for its decisions.
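To make the strangeness of the reported pattern concrete, the sketch below writes the score adjustments described in the quotation out as an explicit function. The numerical values are invented placeholders, since the reporting only describes the direction in which the score moves.

```python
# Sketch: the non-monotonic effect of "months since last email" described above,
# written out as an explicit rule. The adjustment values are invented; the
# reporting only reveals the direction in which the risk score moves.

def email_recency_effect(months_since_last_email: int) -> float:
    if months_since_last_email < 2:
        return -0.1   # recent contact: risk score moves down
    elif 3 <= months_since_last_email <= 4:
        return +0.1   # contact 3-4 months ago: risk score moves up
    elif months_since_last_email >= 5:
        return -0.1   # contact 5+ months ago: risk score moves down again
    return 0.0        # months == 2: direction not described in the reporting

for months in range(0, 7):
    print(months, email_recency_effect(months))
```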
The current academic work on the ‘explainability’ or ‘interpretability’ of machine learning models (Saeed and Omlin 2023) does not change this fact. These methods try to gain an understanding of how the models work in different ways (for example, through generating counterfactuals that lead to different outcomes or through showing which of the input variables had the most influence on the outcomes). However, in the end, these methods are all a form of reverse engineering a black box, slapping human interpretations on the oracular nature of these machines.
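As an indication of what this reverse engineering looks like in practice, here is a minimal sketch of one common post-hoc technique, permutation importance: shuffle one input column and measure how much the model’s accuracy drops. The model and data are synthetic stand-ins, and the method probes the trained model from the outside without revealing its internal reasoning.

```python
# Minimal sketch of post-hoc "explainability" by reverse engineering:
# permutation importance applied to a black-box model on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = accuracy_score(y, model.predict(X))

rng = np.random.default_rng(0)
for feature in range(X.shape[1]):
    X_perturbed = X.copy()
    # destroy this feature's information by permuting its column
    X_perturbed[:, feature] = X[rng.permutation(X.shape[0]), feature]
    drop = baseline - accuracy_score(y, model.predict(X_perturbed))
    print(f"feature {feature}: accuracy drop {drop:.3f}")

# The output ranks features by influence, but says nothing about *why*
# the model combines them the way it does.
```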
Arbitrariness can ‘hide’ inside these machine learning models. One way of showing this is by focusing on their so-called prompt specificity. Very small changes to a model’s input can lead to significant changes in the output (Schneier and Sanders 2025). In translation, for example, we know that just adding a full stop to the original text (a change of syntax) can lead to big changes to the content of the translation (a change in semantics) (Jwalapuram 2023).
In the case of predictive optimisation, input specificity can also be a problem. The specific input relating to an individual may trigger something in the model that it ‘learned’ from some contingent and essentially spurious artefact in the historical data on which the model was trained. Tooling for debiasing will likely never be able to find these problems. In a very fundamental way, we can’t know whether these individual problems exist, and there is no real way to recognise them when they happen.
Finally, we should realise that these machine learning models are not static. Because the world keeps changing, the judicious use of models requires them to keep changing, too (Campolo and Schwerzmann 2023). One of the ways in which people can be protected against the domineering powers of the state is the fact that the rules in a society don’t change too often. Imagine what it would do to your freedom if the traffic rules for right-of-way were changed frequently and at unexpected, arbitrary times. That is similar to the type of authority exercised by predictive optimisation algorithms.
From a contemporary republican perspective, all of this is deeply problematic. One of the ways in which power can be controlled is through a common knowledge of the reasoning behind decision-making and a trust that public authorities will stick to these reasons. As we have seen, neither is the case for machine learning-based predictive optimisation. It is, therefore, not surprising that citizens feel completely subjected to and at the mercy of these systems whenever they encounter them.
The logic of optimisation inherent in much machine learning-based decision-making hides many of the elements of the decision that would normally be part of a political discussion. Campolo and Schwerzmann (2023) write that with rules, we tend to understand their “constructedness”, whereas models that come out of machine learning on historical data are often seen as referring to near-natural regularities. Through what they call ‘artificial naturalism’, machine learning-based decision-making invites depoliticisation.
A very clear example of this can be found in the literature around the COMPAS algorithm, used in certain parts of the United States to try to predict the chance of recidivism of people found guilty of a crime and to help judges in their sentencing decisions. COMPAS first became prominent after investigative journalists at ProPublica found racial bias in how it functioned (Angwin et al. 2016). The software makers disagreed and shared their own analysis, which found no racial discrimination in the algorithm (Dieterich, Mendoza, and Brennan 2016). Academics were then quick to point out that both parties used different definitions of fairness (Chouldechova 2017; Barocas, Hardt, and Narayanan 2023, chap. 3).
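A small numerical illustration shows why both sides could claim to be right. The counts below are invented and chosen only to mimic the structure of the disagreement: the positive predictive value (roughly the vendor’s notion of fairness) is equal across the two groups, while the false positive rate (roughly ProPublica’s notion) is not.

```python
# Invented counts for two groups, chosen to show that two fairness criteria
# can disagree on the same data: equal positive predictive value can coexist
# with unequal false positive rates when the groups have different base rates.

groups = {
    #            true pos, false pos, false neg, true neg
    "group_a": dict(tp=40, fp=10, fn=10, tn=40),
    "group_b": dict(tp=8,  fp=2,  fn=12, tn=78),
}

for name, c in groups.items():
    ppv = c["tp"] / (c["tp"] + c["fp"])  # of those flagged, how many reoffended?
    fpr = c["fp"] / (c["fp"] + c["tn"])  # of the non-reoffenders, how many were flagged?
    print(f"{name}: predictive value {ppv:.2f}, false positive rate {fpr:.3f}")

# group_a: predictive value 0.80, false positive rate 0.200
# group_b: predictive value 0.80, false positive rate 0.025
```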
For outsiders, COMPAS functions as a black box. All of ProPublica’s analysis was done by comparing the predicted risk of recidivism (the output of the algorithm) with the actual recidivism (as it happened in the real world), and augmenting this data with the race classifications used by the Broward County Sheriff’s Office (Larson et al. 2016). Even the algorithm’s creators likely have no real understanding of the correlational and statistical space that the model inhabits.
Rudin (2019, fig. 3) has used the same COMPAS data to create a much simpler, rule-based prediction model for recidivism with predictive accuracy similar to that of COMPAS. According to her model, you are only likely to re-offend if:
Your age is between 18-20, and your sex is male; if your age is between 21-23 and you have 2-3 priors; or if you have more than three priors.
Note that this implies that if you are not registered as male, the model only expects you to re-offend if you have more than three priors or are aged between 21 and 23 and have more than one prior. This clarity allows for a political discussion (and probably necessitates a legal one) about whether, as a society, we think it is legitimate and justifiable to make these assumptions.
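Written out as code, the rule set is short enough to read in full. The sketch below encodes the rule as paraphrased above; it is not Rudin’s own implementation.

```python
# Sketch of the simple rule-based recidivism model paraphrased above
# (after Rudin 2019, fig. 3); this is not Rudin's own code.

def predicted_to_reoffend(age: int, is_male: bool, priors: int) -> bool:
    if 18 <= age <= 20 and is_male:
        return True
    if 21 <= age <= 23 and 2 <= priors <= 3:
        return True
    if priors > 3:
        return True
    return False

print(predicted_to_reoffend(age=19, is_male=True,  priors=0))  # True
print(predicted_to_reoffend(age=19, is_male=False, priors=0))  # False
print(predicted_to_reoffend(age=40, is_male=False, priors=5))  # True
```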
To be clear, the COMPAS model makes similar assumptions to the one above, but in endlessly more dimensions, and it hides them completely. The only political discussion that can be had about COMPAS is whether certain variables can be used as its input.9 However, nothing can be said about how that input should be valued.
This effectively takes predictive optimisation algorithms like COMPAS out of the realm of political deliberation, replacing a potentially democratic form of governance with a technocratic one that only tries to optimise accuracy while (at its most ambitious) minimising bias for a limited set of protected categories. This can be seen as a “sphere transgression”, with the values of technology companies being applied in the political domain (Stevens, Kraaijeveld, and Sharon 2024).
The people who are subjected to decisions made or informed by these algorithms have no means to be part of a discussion about how they (should) work in a meaningful way. Amoore (2023) calls this a machine learning political order, which no longer allows for a plurality of solutions to difficult political questions.
There is, therefore, no meaningful form of democratic control over the power these machine learning models exercise. This lack of democratic control remains a problem, even if the models’ predictions are highly accurate.
Contemporary republicans consider ‘responsive control’ an important means for controlling domineering power, especially in its discretionary form. Responsive control is the ability to respond in cases where there is unwanted interference with a decision (Lovett 2022). In the context of the design of AI systems, this type of responsive control is now often called ‘contestation’ (for example, in Alfrink et al. (2024) and Henin and Le Métayer (2021)).
Campolo and Schwerzmann (2023, 8) explain why contestation is difficult when it comes to machine learning-based decision-making (which they call “example-based reasoning”):
The bureaucratic rules associated with the rational type of authority […] apply to every individual in the same way […] so that individuals can orient themselves in relation to the rules and potentially contest them. Bureaucratic and computational rules point back to a foundational moment of prescriptive specification or encoding, which can be contested. By contrast, being ruled by examples means never accessing the implicit, experimental norms elicited from examples and constantly updated through the optimisation of the model.
In the case of predictive optimisation, it is still possible to object to and correct certain mistakes, for example when the data that serves as input to the machine learning model is incorrect. And it is, of course, always possible to object to a decision of the model, but, in a very fundamental way, you would lack the arguments to do so.
It is important to realise that this lack of motivated contestability is intrinsic to predictive optimisation. Rather than applying the same rule to all individuals, predictive optimisation essentially creates a set of personalised rules for each individual. Individualised rules are, in that sense, uncontestable by definition.
We have seen how predictive optimisation does not fulfil the basic conditions that contemporary republicans set for predictive profiling: the rules implicit in the profiling are not clear, there is no popular or democratic control over these rules, and meaningful contestation is not possible.
We can conclude that machine learning-based predictive optimisation is a problematic form of decision-making from a contemporary republican perspective. If we think freedom in the republican sense of nondomination is important, we should be very reluctant to apply predictive optimisation inside public institutions. The higher the stakes of the decision, the more important it is to avoid machine learning-based decision-making. It is probably prudent not to make too many assumptions about the stakes of the decision, as Munch, Bjerring, and Mainz (2024) have convincingly shown that low-stakes decisions can easily become part of a pattern or aggregate of decisions that together are high stakes.
The three republican problems with machine learning-based predictive optimisation identified in this article do not exist for rule-based decision-making, even when these rules are implemented in algorithms. Rule-based algorithms have explicit rules, which allows for both democratic control and meaningful contestation. A contemporary republican will, therefore, strongly prefer rule-based over machine learning-based automated decision-making.
One interesting approach that might merit more awareness from a republican point of view is the work done on the ‘Rashomon Effect’10, for example in Rudin et al. (2024). Often, it is algorithmically possible to simplify a machine learning-based predictive algorithm into a dedicated rule-based one that is interpretable, in the sense that humans can easily understand exactly how it works. In many cases, this doesn’t negatively affect the model’s predictive accuracy. I would argue that, given the republican problems with machine learning-based reasoning, it is imperative for public institutions to find out whether they can also solve their problems with these “simple-yet-accurate models”, as Rudin et al. (2024) call them.
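As an indication of what this looks like in practice, the sketch below trains a black-box model and a deliberately small decision tree on the same synthetic tabular data and compares their accuracy; on many tabular problems the gap is small. The dataset and models are generic illustrations, not the method of Rudin et al. (2024).

```python
# Sketch: comparing a black-box model with a small, fully readable tree
# on the same synthetic tabular data. Illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)
simple = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

print("black box accuracy:", black_box.score(X_test, y_test))
print("simple tree accuracy:", simple.score(X_test, y_test))
print(export_text(simple))  # the whole model fits on a screen and can be debated
```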
A switch to rule-based reasoning allows for common knowledge regarding decision rationales, avoids the technocratic depoliticisation of governance, and makes meaningful forms of contestation possible.
However, we shouldn’t forget that even though a rule-based algorithm is probably necessary for a freedom-respecting automated decision, in practice, a rule-based algorithm will often not be sufficient to guarantee political freedom in the contemporary republican sense. Rules can, of course, also be used to dominate others in the worst possible way (Mbembe 2019). And monitoring the efficacy of any rule-based algorithm continues to be paramount. In the context of predictive profiling, it behoves us never to forget that predicting the future is hard, if not nigh impossible (Narayanan and Kapoor 2024, chap. 3).
The writing of this article was helped (at different stages of its maturity) by the conversations I’ve had with Guilel Treiber, Marcel Becker, Marjolein Lanzing, Pascal Wiggers, Donald Karabotsos, and Bart Jacobs.
I had the opportunity of presenting and discussing an earlier draft of the article at the ERATO Metamathematics for Systems Design Project led by Ichiro Hasuo (National Institute of Informatics, Tokyo), which helped to sharpen its argument.
Anthropic’s Claude AI was used to write the abstract of this article based on its full text. For the main text, no AI was used except for Grammarly to check the spelling and grammar.
This publication is part of the project A Neorepublican Perspective on Automated Decision‐making, which is (partly) funded by the Dutch Research Council (NWO) as part of their Doctoral Grant for Teachers.
Hans de Zwart is the sole author.
There are no competing interests to declare.
Now, often just called ‘AI’ in the parlance of our times.↩︎
There is a debate, mostly in the legal domain, about what constitutes an automated ‘decision’, because only decisions can be contested in court. I stay away from that debate; by an automated decision I mean anything about people that is produced by a computer and then used in some way.↩︎
A decision-making strategy where you “maximise the minimum” in order to make the worst outcome as good as possible (Rawls 1971, 150–61).↩︎
The idea that you should only allow an unequal distribution if it makes things better for the ones who are worst off (Rawls 1971, 78).↩︎
This approach has to assume that there is an unbiased ‘ground truth’ that is accessible to test the predictions against. This assumption might not always be warranted.↩︎
The classic example to illustrate why freedom from interference doesn’t suffice is the non-interfering master and the enslaved person. I intentionally avoid that example as I am uncomfortable writing about our current political context using these terms. Pettit has created a solid alternative in Nora and her doting and non-interfering husband, Torvald, from Ibsen’s play A Doll’s House (2016). However, his example assumes too much knowledge about the play to work for all readers. In the academic context, we should maybe use the example of a young scholar being dependent on the whims of a senior professor (Viroli 2002, 36). The freedom of this scholar is hampered by the senior professor’s uncontrolled power, whether the professor decides to use this power or not.↩︎
A note on terminology: ‘republicanism’ doesn’t have anything to do with the American Republican party. Different authors use different names for the current republican thinking in political philosophy, e.g. ‘neorepublicanism’ or ‘civic republicanism’. I will follow Lovett (2022) and use the term ‘contemporary republicanism’. With that, I refer to the philosophical field of study that follows from what Lovett calls the ‘central writings’: Pettit (1997, 2012, 2014), Skinner (1986, 1998), and Viroli (2002). When I refer to a commitment I presume any republican to hold, I sometimes use the term ‘republican’.↩︎
As Lovett writes (2022, 20): “Political theories that fail to provide practical guidance, either by being too utopian or too acquiescent, are pointless theories.”↩︎
In a limited way, as nearly all the variables that are used as input to the model will in some way correlate with variables that are excluded from the input, thus serving as potential proxies in the machine learning model.↩︎
In the computer science literature, this is also referred to as the ‘multiplicity’ of models.↩︎