Meet the Mental Health Podcaster Sounding the Alarm on AI
‘Our mental states have become a risk score.’
by Harriet Williamson
22 May 2025

“AI tools are already shaping decisions about who gets hired, who gets housing and even who gets flagged in the legal system. Most people don’t realise how deep it goes right now in the United States,” says Mollie Adler.
Adler is speaking to Novara Media from her home in Texas, US. The 35-year-old writer, researcher and thinker uses her podcast Back From the Borderline to challenge dominant Western ideas about mental illness. It reframes stigmatised mental health symptoms as potentially important communications about our current circumstances, personal histories and deeper purposes. Adler promotes the human capacity to transform – a process of “emotional alchemy” – over labels and rigid diagnostic categories. But it’s not your typical mental health podcast, covering ground from the media’s treatment of Britney Spears and the history of psychiatry, to religious trauma and big existential questions.
With Back From the Borderline ranking in the top 1% of downloaded podcasts globally and attracting 60,000 downloads per month across all platforms, it’s clear her ASMR-worthy deep-dives are resonating.
In early March, Adler sounded the alarm on how, in her words, mental health labels are becoming “algorithmic life sentences”. In a Substack post and subsequent podcast episode, she painted a dystopian picture of US citizens being rejected from jobs, denied insurance coverage and even losing access to their children due to discriminatory algorithmic decision-making that is silent, unaccountable and nearly impossible to challenge.
In the US, mental health-related data harvested from wellness apps, therapy platforms and journaling tools, alongside search histories and social media activity, is being scraped and analysed by AI screening tools and used to build ‘risk profiles’. Research from Duke University found that US data brokers were selling information that identified people by their mental health diagnoses – data that had been freely handed over to health and wellbeing apps, including names, addresses, emails and ethnicities.
In 2022, one in four US companies was using automation or AI in recruitment and hiring processes, according to research from the Society for Human Resource Management. Given president Donald Trump’s second-term AI deregulation efforts, this figure is likely now much higher.
The use of AI screening tools has already triggered multiple federal cases, including one currently pending in California, in which plaintiff Derek Mobley alleges that companies using screening software made by the HR platform Workday rejected his applications for over 100 jobs because he is Black, over 40 and has experienced anxiety and depression.
“Different hiring platforms are sorting and scanning the digital presences of people who apply for roles to assess for emotional stability,” Adler says. “Landlords in the housing sector can now use certain tools that harness AI to scan for behavioural volatility as a factor in tenant scoring. Notice how vague these terms are – emotional stability, behavioural volatility, mental health signals, stress levels – what do they even mean? It’s all very subjective.”
There is no single comprehensive federal law in the US that protects personal data. Instead, a scattered patchwork of state-level protections means regulation and individual rights differ across the country. Even California’s Privacy Rights Act – more stringent than what many other states have – compares unfavourably to the UK’s GDPR legislation. California’s attorney general Rob Bonta issued two legal advisories earlier this year – one explicitly warning healthcare entities that develop, sell and use AI and other automated decision-making tools. But this is only one state, and companies are increasingly monetising psychological data without real oversight.
“When a user is treated as a product and their mental state becomes a risk score, the implications are chilling,” Adler says. Before turning to podcasting full-time, she worked in tech – specifically in the software as a service (SaaS) space – for roughly ten years. “Buried in all those terms and conditions is permission to scrape, sell or use that data for what they call research and product development. So once it’s in the system, it’s out of your hands.
“Sometimes that phrase ‘research and product development’ is actually legit,” Adler continues. “The problem is when it’s used as a smokescreen, when it becomes this blanket permission for companies to do whatever they want without ever telling you how your data is actually being handled.
“If you’re going to ask people for access to their inner world, there should be total and pristine clarity about what that data is being used for, where it’s going and who’s benefiting from it. Anything less than that isn’t innovation, it’s just exploitation dressed up all pretty in tech bro, UX [user experience] language.”
The speed of innovation and the quiet way tools are introduced – without fanfare or announcement – is part of the problem. Adler was living in the UK and working at e-commerce marketplace Groupon when our GDPR legislation was on the horizon. She witnessed internal corporate panic and describes a “shitshow cacophony scramble behind the scenes” where senior staff were “falling over themselves to get compliant”.
“Everything we did was about GDPR, but even then, it was obvious that the rules weren’t designed to protect people from what was coming next – and honestly, how could they? Innovation is always outpacing things because even when companies play by the book, the book itself is outdated. AI is moving faster than any regulator can keep up with.
“Entire industries are being built right now around collecting, sorting and monetising our inner worlds. With mental health data, the stakes are even higher because people are sharing what hurts them the most, what they’re afraid of and what they’re trying to heal from. But we continue to accept those terms and conditions because we’re all just trying to get through the day – and tech companies are counting on that. People who don’t have the resources to opt out or push back or research into this are the ones who will be harmed first. But I think it’s going to impact all of us.”
I ask Adler if the outlook for people in the UK is any brighter. She replies, “I know your regulatory landscape offers a better starting point. That said, I also know how quickly technology tends to move past legislation. It’s what we saw with social media. And it’s starting to happen again here [in the US].”
But the core issue for Adler is that AI tools like large language models are being trained on the biomedical model of mental health, when there’s still no biological basis for the vast majority of psychiatric diagnoses – with notable exceptions like Alzheimer’s and Huntington’s disease. The diagnoses listed in the Diagnostic and Statistical Manual of Mental Disorders (DSM) – known as the North American ‘bible of psychiatry’ – and the International Classification of Diseases (ICD) are defined and formalised by committee voting and, as such, are shaped by social factors like politics, culture, personalities and pharmaceutical lobbying, rather than hard science.
“Psychiatry is not like getting your blood drawn, for example, and finding out you have low iron,” Adler says. “I could get my blood work done anywhere in the US, anywhere in the UK, and my levels would be the same. But for these DSM diagnoses, there’s no test, no biomarker, no scan that would give you the same result across different providers. It’s just symptoms on a checklist, and even then, it’s subjective.
“Every single time a new version of the DSM comes out, we get more diagnoses, more codes, more billable treatments, more people walking around convinced that something is fundamentally wrong with their personalities or their brain chemistry,” she continues. “And that’s not care – it’s commerce. It’s really fucking good marketing. Look at how many of us believe it. And this is the foundation that’s being fed into AI.
“These tools could have been designed to help people reflect, grow or engage with their inner world in new ways, but they’re being trained on frameworks that actively flatten complexity, categorise our pain and sell solutions. It’s not innovation – it’s just automation of the same old stuff.”
Adler also wants to challenge those who see themselves as on the left but haven’t really examined the neoliberal and capitalist engine driving dominant understandings of mental health. “I see people actively embracing these labels who are deeply progressive,” she says. “They question capitalism, they fight state control, they’re trying to build a more liberated world, yet they’re putting DSM and ICD codes into their social media bios. I say that with so much compassion, because I’ve been there too. I’ve identified with labels that helped me make sense of my pain. They gave me a map – and I think that’s their utility – but we have to eventually ask: who wrote the map and where does it lead?
“These frameworks were giving me a narrative that said my distress was chemical and the best I could hope for was stability, compliance and basically just like, ‘get back to work’. And if you look critically, you can see this logic baked into the popular treatments most often covered by health insurance, like CBT [cognitive behavioural therapy] or DBT [dialectical behaviour therapy].”
Adler is careful to qualify that she’s not saying these treatment types can’t be helpful – but that they’re only just enough to get someone out of crisis and back into the workplace, “not enough to heal, get to the root cause of suffering or transform your life”. She describes CBT and DBT as “management for the sake of productivity” – a view shared by British psychotherapist James Davies, who has explored how short-term modes of therapy such as CBT, which promise the greatest return in worker productivity, are marketed to Westminster politicians and adopted by the NHS.
Many of Adler’s critiques of psychiatry and the medicalisation of human experience are recognisable to those familiar with the work of psychologists like Lucy Johnstone and Mary Boyle, or psychiatrists Sami Timimi and Joanna Moncrieff. If the accepted Western biomedical model of mental health is unhelpful – or harmful – then it really matters that AI tools are being trained to predict early signs of a “mental health crisis” – with one study finding this is possible with 89.3% accuracy based entirely on online behaviour.
“You’re not being asked to wake up – you’re actually being asked to fall more deeply asleep and just behave and not be a problem, which is why it’s so disorienting to see people who would have been actively protesting psychiatry in the 70s now embracing it like it’s some kind of gospel truth,” Adler says. “Queerness used to be pathologised. Being gay was literally categorised as a mental disorder in the DSM until the 1970s and now we’re actively taking part in pathologising our own grief, our rage, our trauma and we’re calling it identity. And to me, it feels like we’re just drinking the same poison in a new bottle, and then like psy-opping ourselves into believing it’s the cure.
“It replaces self-inquiry with symptom checklists. It tells you what you are and who you are, instead of helping you figure that out for yourself – and to me, that’s horrifying, because this whole dystopian framework is being embedded into machine logic. Tech founders and psychiatrists are falling over themselves to build therapy bots and mental health tools, using this model as the blueprint. It’s not progress, it’s regression.”
So what does Adler see as the way forward? She views AI as a neutral tool – “like a scythe” – with its impact entirely depending upon how it is wielded. Adler is hopeful about what AI could potentially offer – once it’s been “jailbroken” from the biomedical model of mental health.
“All of us deserve better tools, not DSM-trained chatbots, not these sleep wellness apps that basically sell your breakdown to the highest bidder. I know this emergent intelligence can be used for inner work that’s actually liberating – but only if we build it from the ground up with new frameworks, better values and a really uncompromising approach to data protection and psychological safety. That’s how we make sure we don’t repeat the past, and how we actually build stuff that’s worthy of all of the people who are going to be using it.”
Harriet Williamson is a journalist and former editor at Pink News and the Independent.