Last month, TikTok’s founder Zhang Yiming was confirmed as China’s richest person, with a fortune of almost £38 billion.
But as the 41-year-old found himself atop the rich list for the first time, things weren’t looking so great for employees of the video platform in London. Halfway across the world, the people who work to scrub the site clean of suicides, child pornography, and other toxic and graphic posts were facing redundancy.
In October, TikTok announced plans to axe 106 jobs in its UK content moderation and safety team, based in a WeWork in Paddington, as part of a global wave of layoffs, hitting workers in Germany, the Netherlands, Ireland, and Malaysia. The platform has refused to say whether further cuts to the team are imminent.
Among those facing redundancy are single mothers, women on maternity leave, and workers whose visas are tied to their employment status.
They are now being made redundant by an employer whose attitude seems at best avoidant and at worst callous, according to several workers Novara Media spoke to. The workers asked to remain anonymous for fear of repercussions during the redundancy process.
The company’s approach to staff disquiet has been to “ignore it and wait for things to pass”, another worker told Novara Media.
“The company’s communication approach is non-existent, and just very detached from everything.”
“They keep sending us [emails about] stretching in your chair workshops, and wellness things that nobody really has time to do.”
One employee facing redundancy said that when workers raised the prospect of having to leave the country after losing their visas, TikTok’s response was: “We didn’t ask anyone to leave their country or come here.”
According to TikTok, the company has had an extensive dialogue with its employees and the union. However, workers say their attempts to negotiate over the terms of the redundancy have been frustrating.
One moderator with knowledge of the consultation said that it had been like “negotiating with robots” and that there was “no rationale behind this other than going to a cheaper market to pay less and save money.”
“It’s like an order came from China, and management is just applying the order.”
Another worker facing redundancy said: “TikTok talk about how they want to invest in people, to grow and build the empire of TikTok together. But reality has shown us that we are not part of their project.”
TikTok’s London staff appear to be getting a particularly raw deal. Workers earning as little as £24,000 – not much in London – are now being offered a redundancy package below industry standard, amid rumours that colleagues in Amsterdam are being offered more favourable terms.
TikTok tops up its salaries with sizeable performance-based bonuses to incentivise its workforce. The company is refusing to include bonuses as part of its redundancy packages – meaning workers are losing out on pay for work they have already done.
The timing is rubbing salt in the wounds, too. One worker said: “This is happening just before Christmas, when everyone was getting ready to buy presents and spend some time with their families. Some workers are immigrants who spend the holidays in other countries. That can’t happen now.”
Workers also fear for their future job prospects. “106 people are going to be released on the same day to the job market, with the same skills, just before Christmas,” said one worker.
AI don’t believe it
As the UK moderators fight for fair redundancy terms, they are also sounding the alarm about the dangers of letting one of the world’s largest social media companies put content moderation in the hands of AI.
Dave Ward, general secretary of the Communication Workers Union (CWU), has called on Ofcom, which regulates social media companies, to investigate TikTok, labelling the video platform’s plans to axe its UK content moderation team a “serious threat to the safety of online users”.
Ward said: “Many of the platforms are already totally out of control and breeding grounds for bullying and spreading misinformation. For 106 workers to face redundancy on poor terms this close to Christmas whilst the founder of TikTok sits on fortunes is an absolute disgrace.”
Workers have expressed fears that the redundancies will compound the existing problems with TikTok’s moderation capacities, at the exact moment the platform is set to face greater regulatory pressure from Ofcom, following the rollout of the Online Safety Act early next year.
The act gives Ofcom stronger powers to hold social media platforms to account, while making tech companies responsible for protecting their users from harmful content. Ofcom fined TikTok £1.875 million in July, after the company failed to respond to a formal request for information about its parental control safety feature.
Scrutiny of the short-video platform is mounting globally, following increasing concerns about its algorithm, and exposure of young children to harmful content on the app. In October, the UK communications regulator concluded that there was a “clear connection” between the violent disorder that broke out in the aftermath of the attack in Southport and online activity.
Workers pointed out that under Elon Musk’s ownership, X/Twitter’s content moderation teams had been gutted across the world. Speaking on the day of Trump’s election, one worker said “and now we see what’s happening with the news.”
Old-school problems
But while fears about AI moderation mount, workers also believe that talk of new, unproven technology is something of a ruse, offering a futuristic sheen to an old-school case of finding cheaper labour.
The CWU’s tech branch, United Tech and Allied Workers (UTAW), says that the technology cannot yet be relied upon to moderate images and speech. John Chadfield, technology officer at the CWU, said: “TikTok have said they are investing heavily into automating content moderation, but the ability of AI to detect the most harmful of content is still unproven, and will never match the capabilities of skilled workers.”
AI moderation can at times be too severe. In one example, AI moderation flagged Sean Kingston’s song “Beautiful Girls” as “self-harm”, because the playful, upbeat song has the lyric “you’ll have me suicidal”. It can also be lax; AI struggles with age verification, flagging young children wearing “beauty” filters as adults.
The upshot is that UK-based, in-house moderators are likely to be replaced by contractors in the Global South, on worse pay and in more insecure conditions. The job losses were confirmed to staff via an email, which said that the cuts would allow the company to “further leverage advanced technology for greater accuracy, consistency, and scalability”. But the email also said that the cuts would allow it to “enhance our collaboration with our third-party partners” – likely meaning cheap outsourced contractors.
One worker facing redundancy told Novara Media: “AI technology is not ready to take over. TikTok says it is eliminating human beings, but really it wants to go to a cheaper market with cheaper people.”
Conditions for outsourced content moderators, who receive none of the benefits enjoyed by direct employees of social media firms like TikTok and Facebook, are notoriously poor.
A 2022 investigation into TikTok contractor Teleperformance, a multinational outsourcing giant, found that its Colombian employees faced gruelling performance targets, low pay and frequent salary deductions, and widespread psychological trauma as a result of exposure to child sexual abuse and cannibalism.
In Kenya, a contractor has brought a legal case against both TikTok and outsourcing company Majorel. A moderator named Mojez alleged that he was unfairly dismissed after he began advocating for better working conditions, and that he developed post-traumatic stress disorder in response to his job.
Martha Dark, co-executive director of Foxglove, a legal nonprofit organisation supporting tech workers, said: “What this move from TikTok really means is they are likely to follow the failed Meta model of outsourcing this work to workers abroad, who historically, are paid peanuts and forced to work in brutal, dangerous conditions.”
As they fight to lose their jobs on more favourable terms, TikTok’s UK employees are sceptical that their jobs will be done as well by outsourced workers. One said: “TikTok has assured us that [the outsourced workers] will continue to do everything that has been done before. We asked them to show us the metrics, to show us that these companies can actually do as good a job as the in-house moderation teams, and they didn’t.”
And as for AI? It’s “absolutely incapable of spotting hate speech, dog whistles, or coded language”, another worker said. “It’s absolutely dog shit at that. It definitely needs a human touch.”
A TikTok spokesperson said: “These claims are highly inaccurate and misleading. We will continue to have a substantial workforce in the UK, and we’re proposing to make these changes as part of our ongoing efforts to further strengthen our global operating model for content moderation. We expect to invest $2bn globally in trust and safety in 2024 alone and are continuing to improve the efficacy of our efforts, with 80% of violative content now removed by automated technologies.”
Polly Smythe is Novara Media’s labour movement correspondent.