‘Fuck the Algorithm’: How A-Level Students Have Shown the Future of Protest

by James Meadway

17 August 2020

Jason Cairnduff, Reuters Connect

Protesting students and a national outcry have forced a U-turn over exam grading in Scotland, and subsequently in Wales, England and Northern Ireland in quick succession, despite the UK government’s public belligerence at the prospect of revising students’ algorithmically moderated grades. The modelling that had been applied will now be scrapped in favour of teachers’ assessments for both A-level and GCSE students. It’s a major victory for the thousands of students who have protested.

The anger has been completely justified. The model that had been applied appears to have been designed to produce results that, on the headline measures, looked ‘fair’ (in that the averages could look about right) whilst forgetting that the distribution of overall marks was itself the product of many thousands of individual results. Throw in the fact that searching for an average based on historic performance inevitably tends to favour those who have historically done well – which, in the British case, has privileged the private sector – and the stage was set for a spectacular government failure. It’s now clear that education secretary Gavin Williamson, “promoted beyond his competence”, needs to go.
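To make that mechanism concrete, here is a deliberately simplified sketch in Python – an illustration of distribution-based moderation in general, not a reconstruction of Ofqual’s actual model. In this toy version, each student’s final grade is set purely by their rank within their school, mapped onto the school’s historical grade distribution; the names and numbers are invented.

```python
# A deliberately simplified sketch of distribution-based moderation
# (illustrative only; NOT the actual Ofqual model). Each student's
# final grade is determined by their rank within their school, mapped
# onto the school's historical grade distribution.

def moderate(students, historical_grades):
    """students: list of (name, teacher_assessed_grade), best first.
    historical_grades: the school's past grades, best first,
    one per expected student (e.g. ['A', 'B', 'B', 'C', 'D'])."""
    moderated = {}
    for rank, (name, teacher_grade) in enumerate(students):
        # The individual's own performance is ignored: only their rank
        # and the school's history determine the result.
        moderated[name] = historical_grades[min(rank, len(historical_grades) - 1)]
    return moderated

# A historically weak school with one outstanding student this year:
students = [("Asha", "A*"), ("Ben", "B"), ("Cal", "C"), ("Dee", "C"), ("Eli", "D")]
history  = ["B", "C", "C", "D", "D"]   # the school has never awarded an A

print(moderate(students, history))
# {'Asha': 'B', 'Ben': 'C', 'Cal': 'C', 'Dee': 'D', 'Eli': 'D'}
```

The cohort’s results now track the school’s past performance, so the headline averages look plausible – which is precisely why the outstanding individual at a historically weak school gets pulled down.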

Beyond the specific government failures here, however, such as the lack of oversight and review – including attempting to silence competent professionals – the fiasco and the protests indicate a trend for the future.

Since the Covid-19 pandemic erupted, fundamentally (and permanently) disrupting how we live and work, the presence of data and statistical modelling in all our lives has accelerated markedly. From working from home on one side to increased biosecurity surveillance on the other (itself ranging from contact-tracing apps to temperature monitoring to “pandemic drones”) – the weight of data in our everyday lives has dramatically increased.

We’ve all become very familiar with this ghostly digital presence in the decade since the Great Financial Crisis, as (mainly US) Big Data companies have exercised their extraordinary capacities to gather, store and analyse our data, resulting in immense gains for them and an increasingly data-saturated world for the rest of us.

One way or another, the fact of algorithmic prediction has become an accepted part of how we live, most obviously online in the form of recommendation algorithms. One set of consequences – surveillance capitalism’s insatiable greed for the data we produce – is becoming better-known. The hunger is driven by the raw economics of the digital economy: each dataset that can be obtained is worth more if it can be compared with another dataset, so the value of a data company is always maximised by grabbing as much data as it can.

Usually, we don’t notice the algorithms that are used to do this. The entire purpose of those used for behavioural analysis is to forecast, as far as possible, the actions of individuals on the basis of past data. Increasingly, they are also intended to shape the behaviour of individuals in particular ways – to guide us to specific YouTube videos or Facebook advertisers or whatever. They might also be used to shape our political beliefs and preferences, as we have seen.

The critical issue with exam results is that this method of algorithmic working was bound to fail. On average, the algorithm may be more or less correct, but the ‘average student’ does not really exist – it’s a statistical fiction, generated from data describing many thousands of individuals, none of whom are the ‘average’. For exams specifically, moreover, there are individual people whom we expect to be judged against both their own performance and some objective standard – not against the performance of the average, either today or historically. In situations like marking exams we have specific expectations of individual autonomy and the recognition of individual merit which statistical techniques tend to override.
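A toy numerical illustration of that fiction, using invented marks: take a class split between very strong and very weak results, and the mean lands on a mark that nobody in the room actually achieved.

```python
# Hypothetical marks for a small class: half score highly, half poorly.
marks = [92, 88, 85, 41, 38, 36]

mean = sum(marks) / len(marks)
print(mean)           # 63.33... -- the 'typical' mark
print(mean in marks)  # False -- no individual student actually scored it
```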

So what we can tolerate for, say, targeted advertising, we find intolerable for exams, which have precisely the worst possible combination – from the point of view of algorithmic processing – of four factors: being applied en masse, in public, where specific individualised results are required, and where the results for an individual are supposed to say something about their merit or worth in a particular dimension with meaningful consequences.

(Obviously, exam results aren’t the only dimension we might judge people on, and mercifully, we tend not to filter any of the others through any sort of marking scheme – although, inevitably, one UK government advisor is at least interested in China-style social credit systems.)

Environmental instability.

The combination of the first three factors – mass processing, public display, individualised results – makes the case for protests clear, and the last – that the results should bear some relationship to true merit – makes it desirable. Most algorithmic processing will have only one or two of these four factors: the fact that Facebook is choosing to display some adverts to you on the basis of the statistical assumptions it makes about you is certainly individualised, but it is also not something intended for wider social comparison. Most algorithmic processing will not result in protests; as it becomes ubiquitous, we may be largely unaware it is even happening, and we may not care too much.

But one thing we have learned from this pandemic is that environmental instability – of which Covid-19 is a profound example – brings with it a deeper and deeper dependency on data. The semblance of accounting and control that Big Data provides, built into the business models of our data economy and increasingly factored into the functioning of government, becomes more – not less – appealing in unstable conditions.

The social structures we currently use to manage our glut of digital information – principally the giant tech companies – have every incentive to maximise their reach across social life, given the blunt economics of data: more data means more value, so grab more data. And governments, confronted by conditions over which they have increasingly little control or sway, facing populations whose cynicism about government itself remains at historically high levels, have every incentive to try to utilise mass data techniques themselves. The (relatively unsophisticated) A-level results modelling was introduced, for instance, precisely because the pandemic had resulted in the cancellation of the actual exams: it was an attempt to cope with contingency on the basis of a forecasting model.

Put these two elements together – the raw economics of Big Data that drive its expansion, and the desire of governments to try to assert some control in situations of instability – and the likelihood is that algorithmic management becomes more common as part of how government operates, not less. And of course to the extent that government data is immensely valuable, like the treasure trove of NHS data, Big Data will be more than happy to assist governments in making use of it. Michael Gove’s recent speech on the future of government, for example, explicitly highlighted the need to “open up” government data in this fashion. The direction of travel is clear, and – under current circumstances – instability will accelerate us along it.

Politicisation.

But as the techniques of modelling and forecasting become a more significant part of government, they become politicised. Alongside our belief in the autonomy of individuals and the belief that they ought to be assessed according to their own merits is a belief that government should be fair and transparent.

We have built entire systems of governance and rule around roughly those ideas: the legal system depends on this principle – that the assessments it produces are fair because they are delivered on the basis of evidence that is seen, and made with reference to the individual standing trial or settling a dispute. We elect governments in a process that hides our personal choice, but which is intended to provide scrutiny and transparency of whatever government then emerges. Both systems may fail, but they fail relative to that approximate (and widely-held) ideal.

Statistical modelling, particularly as it becomes more sophisticated, does not work like this.

It is hard for us to understand even a relatively simple model, such as that used for the A-level results. (The Royal Statistical Society’s letter to the Office for Statistics Regulation is a good guide to the problems, however.) By the time very large datasets are being used, particularly in machine learning, the results that are produced may become literally indecipherable – they are, in the jargon, not ‘interpretable’. It is not possible to see why a large statistical model produces a result, and nor is it possible for the computer – unlike a judge or a politician, say – to explain why it reached a particular conclusion. Increasingly sophisticated machine learning means that algorithms are getting better at accounting for individual nuance. But if what they are going on is past behaviour, they still start to hem in future choices and can produce radically unfair outcomes.

Again, we might tolerate this in much of our online life. The fact a particular shop is being advertised to you in particular probably doesn’t matter too much. But if the decision-making process starts to intrude on questions of underlying value, or where the outcome has profound consequences on your life, it matters a great deal. And if it is the government making those decisions, the clash between our expectations of fairness and the actual results produced by government may become profound. This is the moment of politicisation: once a procedure is moved from the realm of the mundane, or from where a market can be blamed for an outcome, and into the realm of what we think of as government, it is open to political protest.

(I’m reminded somewhat of the politicisation of the labour market that took place in the West during the post-war boom. Once governments broke with liberal capitalism and accepted some responsibility for the management of labour, it politicised the question of how labour was managed. The early years of neoliberalism in the West were, in part, an attempt to break out of this problem by having states refuse to accept this responsibility. For the regimes in the East, the problem was chronic: the attempt by government to set the conditions in every market, including labour, meant everything was always the government’s problem. Every strike suddenly took on a political character.)

We have already seen multiplying protests during the pandemic, from Black Lives Matter to an uptick in strikes. We have also seen legal objections being raised, successfully, to the use of automated facial recognition, and we should expect further legal challenges to the encroachment of algorithmic methods in future. But what the A-level protests point towards are the opening rounds of a new form of protest, against a new style of government: one that appeals directly to our faith in fair, transparent, and human-centred processes, on one side, and against the opacity and unfairness of statistically-determined outcomes on the other.

Or, as the protestors put it, more succinctly: “fuck the algorithm”.

James Meadway is an economist and Novara Media columnist.
