AI Mental Health Safety Startup mpathic Raises $15M to Prevent Dangerous Chatbot Advice
Seattle-based mpathic is deploying clinical experts to make AI chatbots safer for vulnerable users, especially children and those in crisis.
As AI chatbots increasingly serve as first-line "counselors" and confidants for millions of people, including a substantial number of young people, Seattle-based startup mpathic is stepping into a critical role: ensuring these digital agents don't provide dangerous or harmful advice when it matters most.
The company, founded in 2021 with the mission of bringing more empathy to corporate communication, announced on Monday that it is significantly expanding its operations to support foundation model developers and teams building LLM-powered applications. The strategic pivot comes as AI becomes an increasingly common interface for mental health and medical support.
“We are essentially producing evaluation sets or training datasets to make models safer for vulnerable users—like children, people with mental health challenges, or those experiencing crisis situations,” explained Grin Lord, mpathic’s co-founder and CEO, who brings a unique combination of credentials as a board-certified psychologist and natural language processing researcher.
From Corporate Communication to Clinical Safety
mpathic’s approach draws on years of experience in clinical trials and hospital settings, helping AI teams stress-test model behavior before deployment. Their comprehensive methodology includes evaluating responses, monitoring live interactions, and implementing safeguards that can flag concerning content, redirect conversations, or intervene when necessary.
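The flag, redirect, or intervene pattern described above can be sketched as a simple triage layer that sits between a user and a chatbot. The tiers, keyword lists, and function names below are illustrative assumptions, not mpathic's actual system; a production safeguard would use trained classifiers and clinician-authored protocols rather than keyword matching.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"        # pass the conversation through unchanged
    REDIRECT = "redirect"  # swap in a safe, scripted response
    ESCALATE = "escalate"  # route to a human clinical expert

# Hypothetical keyword tiers for illustration only; a real system
# would rely on trained risk classifiers, not string matching.
CRISIS_TERMS = {"hurt myself", "end my life"}
CONCERN_TERMS = {"hopeless", "can't cope"}

def triage(user_message: str) -> Action:
    """Classify a message into one of the safeguard tiers."""
    text = user_message.lower()
    if any(term in text for term in CRISIS_TERMS):
        return Action.ESCALATE
    if any(term in text for term in CONCERN_TERMS):
        return Action.REDIRECT
    return Action.ALLOW
```

The design choice worth noting is that escalation paths are checked before softer interventions, so the most severe signal always wins.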
Lord likens their work to synthetic data generation for visual AI systems. “It’s not every day that a child is going to run in front of a Waymo vehicle, but we can simulate that scenario 10,000 different ways using synthetic data,” she explained. “That’s essentially what we’re doing, but from a psychological perspective with language.”
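The "simulate one scenario 10,000 different ways" idea can be illustrated with a small template expansion: hold one risky scenario fixed and vary who is speaking, what they are facing, and what they ask. The slot values and function below are hypothetical examples for illustration; mpathic's actual generation process is not public.

```python
import itertools

# Hypothetical template slots; varying each slot multiplies the
# number of distinct evaluation prompts from a single scenario.
PERSONAS = ["a 13-year-old", "a new parent", "a college student"]
STRESSORS = ["being bullied online", "losing a job", "a breakup"]
ASKS = ["Is it normal to feel this way?", "What should I do tonight?"]

def synthetic_prompts():
    """Yield every combination of persona, stressor, and question."""
    for persona, stressor, ask in itertools.product(PERSONAS, STRESSORS, ASKS):
        yield f"I'm {persona} dealing with {stressor}. {ask}"

prompts = list(synthetic_prompts())
# 3 personas x 3 stressors x 2 asks = 18 variants from one template
```

Scaling each slot to dozens of clinician-reviewed values is what turns a single scenario into thousands of test cases for an evaluation set.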
In one early engagement, mpathic's clinician-led program helped a major model builder reduce undesired or potentially dangerous responses by more than 70%, a significant improvement in safety metrics with real-world implications for user wellbeing.
Massive Growth Fueled by Foundry VC Investment
To accelerate its expansion, mpathic has raised an additional $15 million in 2025, with Foundry VC leading the round. The company reports 5X quarter-over-quarter growth at the end of last year following its pivot toward foundation model safety.
While mpathic initially focused on building software to analyze corporate conversations across texts, emails, and audio calls, the startup has been developing specialized models for high-risk clinical situations since 2021. Today, their “human-in-the-loop” infrastructure has scaled to include a global network of thousands of licensed clinical experts, with hundreds more being onboarded weekly to meet surging demand.
“It’s a much different company than it was even a few quarters ago,” Lord noted, highlighting the rapid evolution of their business model and market positioning.
A “Techno Optimist” with Realistic Concerns
Lord, who was a finalist for Startup CEO of the Year at the 2023 GeekWire Awards, describes herself as both a “techno optimist” and realist regarding AI’s role in mental health. She expresses what she calls “radical acceptance” of the technology’s usefulness while maintaining clear-eyed awareness of its risks.
“It doesn’t surprise me at all that if there’s something available 24/7 that acts like a therapist, people will talk to it and use it,” she said. “And that could be better than nothing.” However, she emphasizes that the potential for positive impact is only achievable through careful development and monitoring. “I think we can train both humans and AI to listen accurately and well and not create harm.”
Major Partnerships and Rapid Scaling
While mpathic declined to name the specific companies or models it works with, it confirmed partnerships with leading foundation model developers serving tens of millions of users. The startup also maintains clinical partnerships with organizations including Panasonic WELL, Seattle Children's Hospital, and Transcend.
The company, which currently employs roughly 34 people, is “hiring like wildfire” according to Lord. Their leadership team has expanded with the addition of Rebekah Bastian (formerly of Zillow, OwnTrail, and Glowforge) as chief marketing officer, and Alison Cerezo (an American Psychological Association AI advisory member) as chief science officer.
As AI continues to penetrate mental health services and crisis intervention, mpathic's work represents a crucial bridge between technological innovation and clinical safety, helping ensure that as these tools become more accessible, they also become more responsible.