Effective Altruism Is Infiltrating UK Politics and It’s Time We Woke Up
The last six months have been a rollercoaster for effective altruism (EA). Back in September, the movement had a book on the bestseller lists and glowing profiles across the anglophone press, and was spending millions of dollars to become a leading funder of the United States Democratic party. Then one of its biggest funders and most famous advocates, the crypto billionaire Sam Bankman-Fried, suddenly lost everything and things started to unravel.
On 8 November, it was revealed that most of the money in Bankman-Fried’s trading firm, Alameda, wasn’t real money at all, but rather “tokens” printed by his other company, FTX. By early December, he had been arrested in the Bahamas and extradited to New York to face charges of fraud and money laundering, with ongoing investigations into campaign finance violations. Bankman-Fried subsequently admitted to Vox that “the ethics stuff” he promoted alongside his work in the finance industry was “mostly a front”; Time magazine also reported that several key figures in EA, including the movement’s co-founder, the Oxford philosopher William MacAskill, had been repeatedly warned of Bankman-Fried’s “unethical, duplicitous, and negligent” behaviour, though not of any specific criminal activity.
For a movement which defines itself by its relentless commitment to doing the most good possible, this was potentially catastrophic. But it’s too early for obituaries. Over the past five years, EA has become deeply rooted in politics on both sides of the Atlantic and, as resurgent centrists cast around for ideas in the midst of endless crises, its influence is only likely to grow.
Longtermism.
Emerging in the early 2010s, EA combines a kind of data-driven utilitarianism – using evidence to do the greatest good for the greatest number of people – with the idea that we have a moral duty not only to people living thousands of miles away but also to people living thousands of years into the future. At first, this ethos led EA to call for people in the global north to hand over a substantial portion of their income – 10% for most (full disclosure: I’ve been doing this since 2014), though others donate everything they earn over £17,000 a year – to global health initiatives, or as no-questions-asked cash to people living in developing countries.
This bold proposition quickly took off, with the funds donated by the movement growing by more than 35% a year. It also brought MacAskill and fellow Oxford academic and EA cofounder Toby Ord to the attention of major funders: people like Holden Karnofsky, the cofounder of charity evaluator GiveWell, and Dustin Moskovitz, a Facebook cofounder.
But in the last five years or so, EA has turned away from the billions of people currently alive and towards the trillions who might exist in the future. Longtermism, as this philosophy is known, has meant shifting attention from malaria nets to existential risks to the survival of our species: nuclear war, pandemics, takeover by super-intelligent AI, meteors and more. And this newfound responsibility for the future of humanity has led the movement into politics. At first, this involved promoting criminal justice reform and jobs-first economic policies. But EAs soon started funding election campaigns and even running for office. Last year in Oregon, Carrick Flynn, an effective altruist with no political experience, spent $12m on the most expensive House primary ever, on a platform almost entirely devoted to pandemic prevention. He lost by 13,000 votes to Andrea Salinas, a well-known local politician.
Flynn’s campaign was a perfect synecdoche for the EA movement: the Silicon Valley millions, the endless optimism, the narrow focus on discrete issues, and the gut feelings dressed up as mathematics (during the race, EAs guessed that every $1m in donations “would free up over 250 hours of [campaign] work […] which would increase the chances of winning the election by more than 2%”).
Yet while most of EA’s energy has been focused on the Democratic party, the movement is quietly laying down roots in the UK, too.
A new frontier.
Last year, shadow health secretary Wes Streeting was given £30,000 for a policy advisor by a brand-new organisation called Labour for the Long Term (LLT). Despite the rather incriminating name, LLT told me that they “do not campaign for or align ourselves with ‘longtermism’ […] nor the movement and set of organisations known as ‘effective altruism’”. Instead, the organisation quoted Keir Starmer’s recent article in the New Statesman, in which he wrote “there is no greater cause for our generation, than to make our world safer for the next”. LLT clearly has far more money than equivalent groups like Labour for a Green New Deal could ever dream of and, when asked who funds the organisation, would only say that “donors are all Labour Party supporters who are donating to LLT in a personal capacity”.
Though LLT might be secretive about its donors, the all-party parliamentary group (APPG) for Future Generations, chaired by the Labour MP Bambos Charalambous and one of the twenty richest APPGs, is not: it has received £242,000 in donations from prominent EA enthusiasts including Moskovitz and Skype cofounder Jaan Tallinn.
As well as increasing its influence in parliament, EA has made major inroads into Britain’s elite academic institutions; there are now EA chapters in virtually all Russell Group universities. Bolstering these is a cluster of EA-affiliated research institutions, including Cambridge University’s Centre for the Study of Existential Risk, which Tallinn founded in 2012 with funds from Elon Musk and Moskovitz, and Oxford’s Future of Humanity Institute, which recently received $13m from Moskovitz.
A source in the Cabinet Office told me that these research centres have built a “revolving door” between themselves and the civil service. This is visible in organisations like Impactful Government Careers – a mentoring service funded by the Centre for Effective Altruism and run by a civil servant on a career break, which has incubated a number of informal networks of civil servants interested in longtermism – and the Centre for Long Term Resilience, a thinktank set up by two former civil servants which receives over £2m a year in funding, a substantial proportion of it from Tallinn and American pop psychologist Sam Harris. (This is only marginally less than the £2.7m raised in 2021 by the leftwing thinktank IPPR.)
No surprise, then, that EA’s ideas have percolated upwards to our political leaders: former Number 10 advisor Dominic Cummings described himself as an “advocate” of EA in 2016; there are rumours that Boris Johnson read a key EA text while prime minister; and a number of Labour frontbenchers have publicly endorsed LLT.
Yet is EA’s growing influence necessarily a bad thing?
Critical friendship?
Most critiques of this new philosophy have focused either on abstract academic questions or on the extreme fringes of the movement and its links to race science, eugenics and Nazism. But none of this gets at the core of EA, which on the face of it appears much more anodyne. Around three-quarters of EA adherents consider themselves on the left or centre-left, unsurprising for a movement that’s disproportionately young and university educated. Most of the money also goes to broadly centre-left causes. Moskovitz’s Open Philanthropy – by far the largest and most established funding body in EA – has directed nearly half of its $2.3bn in grants to global health and development initiatives, with the next largest sums being the $280m spent on AI ethics and the $210m spent on ending mass incarceration.
These are all problems worth addressing. But what’s troubling is that EA seems to be consistently drawn to neoliberal policy solutions.
One example of this is the housing crisis, a priority issue for EA. LLT lists “street votes” – whereby streets would be given powers to vote on local development initiatives – as one of its six key campaigns. This policy is associated with an organisation called London YIMBY (yes in my backyard), founded by former hedge fund analyst John Myers and acclaimed by EAs as the most “effective” organisation working on the housing crisis.
London YIMBY’s analysis is simple: we haven’t built enough homes, and decentralisation and deregulation will help us to build more. This approach has often led it to side with developers: over the last few years, it has campaigned alongside Priced Out UK, an organisation that argues that post-Grenfell safety requirements are too burdensome for developers and which has been associated with fake community campaigns backing unpopular regeneration schemes. But most importantly, London YIMBY’s analysis seems to be basically wrong: the evidence suggests that the housing crisis has been caused not by a shortage of general construction but by a shortage of low-cost social housing.
So, could we persuade EA of the merits of old-fashioned council housing and other leftwing ideas?
Some think so. Aveek Bhattacharya, research director at the Social Market Foundation and one of the most thoughtful commentators on EA, told me that there would be a willing audience for leftwing ideas in EA circles; indeed, many Labour leftists who dabble in EA have suggested that the only reason effective altruists have found themselves in bed with neoliberals is that neoliberals are the only ones who’ve bothered to take the movement seriously.
Still, there is reason to be sceptical. One of the reasons why EA has appealed to so many Silicon Valley billionaires is that it flatters their self-image as rational engineers who can solve the world’s problems through technocratic means. This doesn’t mean that the left should reject EA. But neither should we necessarily support its methods. As Friedrich Engels famously said of the English social reformer Jeremy Bentham: “Though he has a school within the radical bourgeoisie, it is only the proletariat and the socialists who have succeeded in carrying his teachings a step forward.”
Correction 20/04/2023: A previous version of this article claimed that several of Bankman-Fried’s fellow leaders at EA were aware of his fraud for years. In fact, Time reported that they were made aware of his unethical, duplicitous and negligent behaviour, but not of his fraud specifically. That version also said Bankman-Fried had told Vox that EA was simply a cover for his fraud. In fact, Bankman-Fried admitted to a Vox journalist that “the ethics stuff” – presumably including EA – was “mostly a front”. The article has been corrected accordingly.
Matteo Tiratelli teaches sociology at University College London.