The BBC’s Investigation Into Online Abuse Is Total Bullshit

It's either propaganda, or it's very stupid.

by Ash Sarkar

9 November 2022

REUTERS/Hannah McKay

This morning, the BBC published a piece of research into abusive tweets levelled at MPs, and the results were shocking. Using Perspective API, a machine-learning tool developed by Google’s Jigsaw unit, the BBC combed 3 million tweets sent to MPs over a six-week period, and found that 130,000 of them (roughly 5%) could be classed as “toxic” – and former Prime Minister Boris Johnson was the single largest recipient of toxic comments.
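For readers curious how this sort of automated classification works in practice, here is a minimal sketch of the kind of call involved: it scores a piece of text against Perspective’s TOXICITY attribute via the publicly documented Comment Analyzer endpoint, and bins anything above 0.7, the threshold the BBC report applies, as “toxic”. The API key and the example tweets are placeholders, actual scores will vary with whichever model version Perspective is serving, and none of this reproduces the BBC’s exact pipeline.

```python
import requests

# Perspective's Comment Analyzer REST endpoint (Google/Jigsaw).
# An API key is required; the value below is a placeholder.
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder

TOXICITY_THRESHOLD = 0.7  # the cutoff the BBC report cites


def toxicity_score(text: str) -> float:
    """Return Perspective's TOXICITY probability (0-1) for a piece of text."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(API_URL, params={"key": API_KEY}, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


# Illustrative only: classify a couple of made-up tweets the way the study
# appears to, counting anything above the threshold as "toxic".
example_tweets = [
    "You're a hypocrite and a liar.",
    "Great speech in the Commons today.",
]

for tweet in example_tweets:
    score = toxicity_score(tweet)
    label = "toxic" if score >= TOXICITY_THRESHOLD else "not toxic"
    print(f"{score:.2f}  {label}  {tweet}")
```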

Labour MP Jess Phillips (of telling Diane Abbott to “fuck off” fame) identified “the issue over women’s rights butting up against the trans debate” as being a particular flashpoint for unpleasant tweets; meanwhile Tory backbencher Ben Bradley (who was forced to pay damages to Jeremy Corbyn after falsely stating he sold information to communist spies) warned that online abuse risked discouraging potential candidates from pursuing a career in politics. The report was covered on BBC News, and received widespread comment from both Labour and Conservative politicians.

That’s not the surprising bit, however. Despite having been an eight-month-long “labour of love”, according to Pete Sherlock, editorial lead of the BBC Shared Data Unit, it took less than an hour for its methodology to be torn apart by Twitter users.

The first warning sign was how the research defined a toxic tweet. It didn’t have to be violent, threatening, or contain slurs. Instead, Perspective API defines a toxic comment as one that’s “rude, disrespectful or unreasonable” and “likely to make someone leave a conversation”. Words like “hypocrite”, “liar”, “disgrace”, and “Tory” were identified as particularly common terms found in so-called toxic tweets. Many of these might be considered unnecessarily aggy or disparaging, but crucially, they can also be true. Politicians might feel that it’s unfair to be labelled a disgrace for something they’ve said or done, but that doesn’t mean it’s abusive.

The comically low threshold for identifying toxic content is matched only by the farcical standards used by the AI to judge disparaging content. Transphobic slurs had lower toxicity ratings than a comment reading, “You’re a homophobe and a transphobe”. The AI appears incapable of distinguishing between swearing used in an abusive context and swearing used in an innocuous one.

What’s more, it proves utterly useless when it comes to detecting racist content. I spent the afternoon playing slur bingo, to see what would and would not be picked up as a toxic tweet. I entered into Perspective’s API a selection of racist tweets I’d received in the past year, of varying lengths and sentence complexity. Just to make it easy for the machine, I deliberately chose one which included a common racial slur against South Asians.

None of these were registered as potentially toxic at all by the AI – but “You’re a fucking G”, a compliment, popped up with a 90.29% likelihood of being toxic. So if I replied to my constituency MP (and fellow Spurs supporter) David Lammy with “Fucking hell what a goal” after a Heung-Min Son masterclass, the AI would judge this as having a 78.57% likelihood of being toxic. But if I were to say, “Go back to Africa”, the AI wouldn’t flag it as being potentially toxic at all.
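The same probe can be scripted rather than typed into Perspective’s demo one phrase at a time. The sketch below batch-scores the phrases quoted above and ranks them by TOXICITY score; it uses the same placeholder key and endpoint as the earlier sketch, and the numbers it returns today won’t necessarily match the percentages quoted here, since they depend on the model version Perspective is currently serving.

```python
import requests

# Same Comment Analyzer endpoint as in the earlier sketch; placeholder key.
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder

# The phrases quoted in this piece. Scores depend on the model version
# Perspective serves, so they may not match the figures reported above.
probes = [
    "You're a fucking G",
    "Fucking hell what a goal",
    "Go back to Africa",
]


def score(text: str) -> float:
    """Fetch the TOXICITY probability for a single phrase."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(API_URL, params={"key": API_KEY}, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


# Rank the probes from most to least "toxic" according to the model.
for text, value in sorted(((p, score(p)) for p in probes), key=lambda x: -x[1]):
    print(f"{value:.2%}  {text}")
```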

This blind spot is a problem, because the BBC’s report asserts that, “Across the board, MPs who self-define as being from an ethnic minority background were not more likely to receive a tweet above the 0.7 toxicity threshold.”

Nine out of the top ten MPs identified by the BBC as having received the highest percentage of toxic tweets are white; meanwhile Diane Abbott, who was the target of nearly half of all the abusive tweets sent to female MPs in the run-up to the 2017 general election, is nowhere to be found.

But if the BBC’s AI isn’t picking up racialised abuse, or categorising it as highly toxic, then the claim that 90% of the most targeted MPs are white is not a conclusion supported by reliable methodology. It’s like saying that women aren’t more likely than men to be killed by their partners, but only counting murders that take place outside the home: you’re excluding the relevant data, and then proclaiming the case closed.

The methodology used to put together the BBC’s report appears to be so deeply flawed – and indeed, the BBC so unaware of the AI’s own limitations – that its findings can’t be trusted at all, other than perhaps as a highly caveated basis for further research. It certainly shouldn’t have been published in its present state, nor relied upon by politicians and journalists alike to tell us something about how social media nastiness impacts our democracy. Online abuse is, of course, real. But this report is what academic peer reviewers would call “a load of horseshit”.

What conclusions can be drawn about a piece of research which is so badly designed, but so widely publicised by the UK’s most prestigious and popular source of news? What might we surmise about an AI-led study which flagged words like “Tory”, “hypocrite”, and “liar” as toxic, but not “Paki”, “wog”, or “Go back”?

We might infer that the methodology was stacked to produce certain results. The bar for identifying tweets which might upset centre and right-wing MPs, and classifying them as abusive, was set far too low. And, whether intentionally or by accident, it excluded and minimised cut-and-dried instances of abuse which target ethnic minorities and LGBT people, while flagging tweets which identified racism, transphobia, and homophobia as toxic. The suggestion is that pointed criticism of the powerful is abuse, whereas abuse from the right is simply non-existent. Perhaps we might conclude that this wasn’t a piece of research, so much as it was a piece of propaganda draped in the garb of data.

The other explanation is that everyone involved in putting together this piece of research, and helping it enter the public sphere, was unforgivably stupid.

I’m not sure which is more likely, or indeed, which is more reassuring.

Ash Sarkar is a contributing editor at Novara Media.

We’re up against huge power and influence. Our supporters keep us entirely free to access. We don’t have any ad partnerships or sponsored content.

Donate one hour’s wage per month—or whatever you can afford—today.
