AI Border Guards are Being Tested at the Edge of Fortress Europe, Away From Public Scrutiny

by Robbie Warin

4 December 2019

A series of trials funded by the EU to the tune of €4.5 million ended this summer without most people ever having heard of them. At three points on the EU’s external border, a new form of border protection using lie-detecting artificial intelligence has been tested. The European Commission is being sued for allowing the company behind the trials to keep its methods and results away from public scrutiny.

For nine months, travellers passing through the Hungarian-Serbian border, the Latvian-Russian border and the Greek-North Macedonian border were given the option to participate in a video call with a virtual border guard prior to their crossing.

The avatar would ask them a series of questions, from confirming their identity to describing the contents of their luggage, whilst recording and analysing their facial movements to determine the likelihood that they were lying.

This software, called iBorderCtrl and funded by the EU’s Horizon 2020 fund, is the latest in a stream of technologies being spearheaded by companies and governments around the world counting on the promise of AI – in this case Machine Learning – to solve social and political problems.

The funding and testing of this technology comes amid an ongoing humanitarian crisis at the EU border, with thousands of migrants in transit arriving in Greece and Italy, or via the EU’s land border in the Balkans.

This crisis, often framed as an existential threat to the EU, has contributed to the rise of far-right parties across Europe, from Germany to the UK, which have capitalised on rhetoric about incumbents’ failure to deal with illegal migration. This has made the development of technologies to reduce illegal migration and defend fortress Europe a politically saleable way forward.

How much evidence is there on the performance of these technologies? And what are the implications of extending them into the most sensitive areas of our political landscape?

Lie-detecting AI border guard avatars.

iBorderCtrl utilises a technology developed by the UK firm Silent Talker, which claims to be able to detect the emotional state that accompanies lying by looking at a person’s unconscious facial gestures.

The idea that you can determine emotional states from the analysis of the face stretches back to the 1880s, when thinkers including Charles Darwin – who theorised that emotions were universal within a species – made the first attempts to create taxonomies of emotion.

In its modern incarnation, emotional AI found its forefather in the American psychologist Paul Ekman. Ekman developed the thesis that, through empirical research, facial expressions could be reliably linked with emotions, and created a catalogue of 3,000 facial expressions and their emotional counterparts. Through his work, Ekman identified six basic emotions, which he argued were universal across cultures: anger, disgust, fear, happiness, sadness and surprise.

The real step change came with the development of Machine Learning and its application to the detection and analysis of facial expressions. Rosalind Picard, a professor at MIT, first coined the term ‘Affective Computing’, and it was her work that set the stage for the now-booming ‘Emotional AI’ industry, which is premised on the idea that computer algorithms are able to detect relationships between facial gestures and emotional states in a way that humans never could.

iBorderCtrl analyses 40 different facial gestures, pulling these together into a score out of 100 for the likelihood that an individual is being deceitful – a score that is presented via a QR code to a human border official.
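In rough outline, the scoring pipeline described above might look like the sketch below. Silent Talker’s actual model is proprietary and unpublished, so everything here – the placeholder gesture signals, the plain-average weighting – is an illustrative assumption, not the real system.

```python
import random

NUM_GESTURES = 40  # per the article: 40 facial gestures are analysed

def gesture_signals():
    """Stand-in for the computer-vision stage: one 0-1 'deception'
    signal per tracked facial gesture (random placeholders here)."""
    return [random.random() for _ in range(NUM_GESTURES)]

def deception_score(signals):
    """Aggregate per-gesture signals into a 0-100 risk score.
    A plain average is assumed; the real weighting is not public."""
    return round(100 * sum(signals) / len(signals))

# Per the article, the resulting score is what a border official
# would retrieve via a QR code.
print(f"Deception risk score: {deception_score(gesture_signals())}/100")
```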

In a published report, the iBorderCtrl project claimed it was able to detect when a person was lying at a rate of 74%, and when someone was telling the truth at a rate of 76%.
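To get a feel for what those headline rates could mean in practice, here is a back-of-envelope sketch. Only the 74% and 76% figures come from the project’s report; the number of crossings and the share of travellers actually lying are invented assumptions purely for illustration.

```python
true_positive_rate = 0.74   # liars correctly flagged (reported)
true_negative_rate = 0.76   # truth-tellers correctly cleared (reported)

travellers = 10_000         # hypothetical number of crossings
liar_share = 0.01           # hypothetical share actually lying

liars = travellers * liar_share                         # 100
truthful = travellers - liars                           # 9,900

flagged_liars = liars * true_positive_rate              # 74
flagged_truthful = truthful * (1 - true_negative_rate)  # 2,376 false alarms

share = flagged_truthful / (flagged_liars + flagged_truthful)
print(f"Share of flags that are honest travellers: {share:.0%}")  # ~97%
```

Under these assumed numbers, the overwhelming majority of people the system flags would be telling the truth – one reason the undisclosed pilot results matter.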

Bias and public scrutiny.

The project, however, has not been without contention and has received a range of criticisms, with many claiming that the ability to determine deceit from facial movements is a myth.

Andrew McStay, author of ‘Emotional AI: The Rise of Empathic Media’, says: “A lot of people will tell you this is pseudoscience and I would push back against that. It’s not that emotional AI is fundamentally flawed but, in a highly charged environment like a border, the idea that you can analyse facial gestures from a single camera to determine the likelihood that someone is lying is deeply controversial.”

How we display emotions is highly individual, depending on factors like age, gender and where we’re from. The ability of a system to accurately detect emotions across entire populations depends on its training data reflecting the diversity of people within that population.

“Bias can emerge in the process of hand-coding emotion expressions, which then feeds into the algorithms used to automatically categorise expressions,” says McStay. “This includes whose expressions are being hand-coded, but also the people who are hand-coding that data – so both the analysed and the analysis.”

iBorderCtrl and several members of its staff were contacted for this article, but none responded, so the diversity of its team – and any steps taken to counter this internal bias – remain unknown.

However, the project did release the make-up of its training data: 32 individuals, of whom 10 were Asian/Arabic and 22 were White Europeans – excluding a large proportion of the world, and a sample size far below what would typically be used. There is therefore a risk that the system may not work accurately, and may work differently depending on where you’re from.
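One standard way to probe that risk is to report accuracy per group rather than a single headline figure. The records below are entirely fabricated for illustration – they are not iBorderCtrl’s data – but they show how an overall accuracy can hide a much lower accuracy for a smaller group.

```python
from collections import defaultdict

# (group, was_lying, flagged_as_lying) - fabricated evaluation records
records = [
    ("white_european", True, True), ("white_european", False, False),
    ("white_european", False, False), ("white_european", True, True),
    ("asian_arabic", True, False), ("asian_arabic", False, True),
    ("asian_arabic", False, False), ("asian_arabic", True, True),
]

correct = defaultdict(int)
totals = defaultdict(int)
for group, lying, flagged in records:
    totals[group] += 1
    correct[group] += (lying == flagged)  # count correct classifications

for group in totals:
    print(f"{group}: {correct[group] / totals[group]:.0%} accurate")
# white_european: 100% accurate
# asian_arabic: 50% accurate

overall = sum(correct.values()) / sum(totals.values())
print(f"overall: {overall:.0%} accurate")  # 75% - masks the 50% group
```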

Questions surrounding the accuracy of these algorithms are, at present, largely left to speculation, because despite these EU-funded projects having concluded, the public has not been given access to the ethics reports, legal assessments or pilot results, on the basis that releasing them would undermine the company’s profitability.

Patrick Breyer, MEP for the German Pirate Party, is currently challenging this in the courts, suing the EU over its decision not to release the documents. He says:

“This is a matter of democracy. This is a project that is funded by taxpayer money yet we are not able to access any of the data.

“Even if this technology isn’t implemented in the EU there’s a risk that the company could sell this technology on to authoritarian regimes or the private sector.”

Lina Dencik, head of the Data Justice Lab and author of a new report on iBorderCtrl, says that we need to focus on why these technologies are being developed in the first place. “To take the performance of these systems as the key issue is a distraction,” she says. “Rather, we need to look at why new technical systems are being advanced as the solution to our social and political problems.”

It is within Europe’s highly securitised political atmosphere that iBorderCtrl has emerged, offering new forms of technological innovation that promise to solve the complex and intractable socio-political issues facing the European project. The result, however, is the extension of largely untested technologies into highly contentious and sensitive areas, such as border control.

Robbie Warin is a journalist writing about technology and social justice.
