The Chilling Future of Artificial Intelligence in the Workplace

by RJ Quinn

25 May 2018

In his 1911 book The Principles of Scientific Management, Frederick Winslow Taylor made a ‘scientific’ case for a tightly controlled hierarchy in the workplace. Workers, he believed, were too unpredictable – too prone to come together to fight against the interests of management and shareholders – to be left to their own devices.

Although the technologies of workplace power – by which I mean the methodologies bosses use to exert control over workers – are changing constantly, workplace power itself has barely changed at all. Steam whistles and draconian foremen have largely been replaced by climbing walls, snacks, and free lunches, but the screws of control remain just as tight.

Case in point: 33% of major companies in the developed world now use some form of artificial intelligence (AI) in their applicant screening process. HireVue is one provider of such software. Its system analyses 250,000 different data points from a single video interview and produces a score recruiters can use to filter out applicants without ever watching their video – and the company even claims it can accurately predict future job performance.

AI systems are particularly useful for dealing with unstructured data. This means the software is not merely parsing the interviewee’s words – “I love working in teams,” or “sometimes I just push myself too hard” – and comparing the response to a predetermined right answer; it’s also analysing the more random and disorganised elements of their performance – breathing, tone of voice, facial movements. Using this information, the machine looks for patterns in existing datasets, examining how these signals have correlated with previous job performance, and updating its parameters in real time as new information is fed in.
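To make the mechanics concrete, here is a minimal sketch – not HireVue’s actual system, and with entirely made-up feature names and numbers – of how such a scoring model might combine transcript features with ‘unstructured’ signals and nudge its parameters each time new outcome data arrives:

```python
# A toy sketch of an interview-scoring model. All feature names and
# figures are illustrative assumptions, not anything drawn from HireVue.
import numpy as np
from sklearn.linear_model import SGDClassifier

# Hypothetical features: a keyword-match score from the transcript, plus
# signals extracted from the audio/video stream.
feature_names = ["keyword_match", "speech_rate", "pitch_variance", "smile_intensity"]

model = SGDClassifier(loss="log_loss", random_state=0)

# First batch: historical interviews labelled with later job performance
# (1 = later rated a 'good' hire, 0 = not).
X_history = np.array([
    [0.8, 1.2, 0.5, 0.7],
    [0.3, 0.9, 0.2, 0.4],
    [0.9, 1.1, 0.6, 0.8],
    [0.2, 1.4, 0.1, 0.3],
])
y_history = np.array([1, 0, 1, 0])
model.partial_fit(X_history, y_history, classes=[0, 1])

# A new candidate is scored the moment their video is processed...
new_candidate = np.array([[0.7, 1.0, 0.4, 0.6]])
print(model.predict_proba(new_candidate))  # the score recruiters filter on

# ...and once their real-world performance is known, the parameters are
# nudged again – the 'real-time' update described above.
model.partial_fit(new_candidate, np.array([1]))
```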

In theory, an argument could be made for services like this stamping out bias in the recruiting process. But this is only true in a very limited sense. While a senior executive is more likely than a machine learning algorithm to feel undue fellowship for a fellow Etonian, this only holds up if the machine itself is unbiased.

In this context, AI is basically a large inferential engine. It knows what results certain combinations of factors have produced in the past, and it learns those patterns in order to drive outcomes in the same direction in the future. If a system decided, for example, that a specific race of people were worse workers, it would systematically rule them out. The parameters of what counts as ‘good’ are still set by humans, and the datasets in use are still based on historical performance evaluations. The machine and the humans who programme it – and who thereby determine the choices it will make – are inseparable.
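A toy example makes the point. In the sketch below – simulated, assumed data, not drawn from any real system – historical ‘good worker’ ratings were systematically marked down for one group regardless of ability, and a model trained on those ratings duly learns to penalise that group going forward:

```python
# A minimal illustration of an inferential engine inheriting bias baked
# into historical evaluations. All data here is simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# 'group' stands in for a protected attribute; 'skill' is genuine ability.
group = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)

# Historical 'good worker' labels: driven by skill, but past evaluators
# systematically marked down group 1 regardless of skill.
past_rating = ((skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0).astype(int)

# Train on features that include the group attribute (or any proxy for it).
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, past_rating)

print(model.coef_)
# The coefficient on 'group' comes out strongly negative: the machine has
# 'learned' that group 1 makes worse workers, and will rule them out.
```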

This human impact can manifest itself in strikingly different ways. For example, one company using HireVue reported having a more diverse talent pool than in previous years. In another case, however, an AI programme designed to remove human bias from sentencing decisions in the US ended up recommending harsher sentences for black men at a rate of almost 2:1.

In Taylor’s wildest dreams, he could conceive only of perfectly imposed external discipline. The best he could hope for was quiescent employees who bought into the idea that the prosperity of their workplace was also their own. But it’s in the gaps between control – where we have the option to do things we shouldn’t – that we can carve out something for ourselves.

The information we choose to present is structured: we talk, emote, and move in ways we decide. Sometimes we may have two thoughts we want to express, and we choose to express one over the other – perhaps in the service of some goal, like getting a job so as not to starve. AI, on the other hand, looks at unstructured data – the presentation of which is largely out of our control. Maybe a smile is not so broad, or a tone not so bright. As interviewees, we do not know the 250,000 criteria by which we are being judged. Even if we did, we could not possibly manage all of them.

Smiley-faced AI Taylorism offers a much more chilling vision than Taylor ever could have imagined, where applicants are forced not only to discipline their actions, but to police themselves from within.

Build people-powered media.

We’re up against huge power and influence. Our supporters keep us entirely free to access. We don’t have any ad partnerships or sponsored content.

Donate one hour’s wage per month—or whatever you can afford—today.
