Co-founders behind Reface and Prisma join hands to improve on-device model inference with Mirai

London’s Mirai Aims to Revolutionize On-Device AI with $10 Million Seed Funding

In an era where artificial intelligence is dominated by massive cloud infrastructures and sprawling data centers, a small but ambitious London-based startup is betting on a different future—one where AI runs seamlessly on your phone or laptop without constant cloud dependency.

Mirai, a 14-person technical team founded in 2024 by Dima Shvets and Alexey Moiseenkov, has just secured $10 million in seed funding led by Uncork Capital. Their mission? To fundamentally change how AI models operate on consumer devices by making them faster, more efficient, and dramatically more accessible to developers.

From Viral Apps to AI Infrastructure

The founding duo brings serious pedigree to the table. Shvets co-founded Reface, the face-swapping app that went viral and attracted investment from Andreessen Horowitz. Moiseenkov previously led Prisma, the AI-powered photo filter app that captivated millions before the current generative AI boom. Both entrepreneurs cut their teeth building consumer applications that pushed the boundaries of what mobile devices could do.

Their journey to Mirai began with a simple realization: while the tech world obsesses over cloud computing, AGI, and massive server farms, the potential of on-device AI remains largely untapped. “When we met together in London, we started to chat about technology, and we realized that within the hype of GenAI and more AI adoption, everybody speaks about cloud, about servers, about AGI coming. But the missing piece is on-device [AI] for consumer hardware,” Shvets told TechCrunch.

This insight came from their experience as consumer app developers who had been thinking about AI and machine learning on devices long before generative AI became mainstream. They saw a critical gap in the market: while cloud-based AI services proliferate, the infrastructure for efficient, powerful on-device AI remains surprisingly primitive.

The Problem They’re Solving

The challenges are multifaceted. Developers building AI-powered apps face skyrocketing cloud costs, latency issues, privacy concerns, and the fundamental limitation that not all users have reliable internet connections. And as models grow more capable, their computational demands climb steeply, making cloud-only solutions increasingly expensive and impractical.

Mirai’s founders recognized that many developers want lower inference costs and better per-token margins. They envisioned a world where complex AI tasks could run locally on devices without sacrificing performance or quality—a vision that led directly to Mirai’s creation.

Building the Future of Edge AI

Mirai is developing a comprehensive framework that allows AI models to perform significantly better on devices. At the core of their technology is an inference engine specifically optimized for Apple Silicon, designed to maximize on-device throughput while minimizing resource consumption.

The company claims their engine, built in the performance-oriented Rust programming language, can boost a model’s generation speed by up to 37%. Crucially, they achieve this optimization without modifying model weights, ensuring that output quality remains uncompromised—a critical consideration for developers who can’t afford to sacrifice accuracy for speed.
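To put the claimed 37% figure in perspective, here is a rough back-of-envelope calculation. The 20 tokens-per-second baseline is an assumed number for a mid-sized model on consumer hardware, not a figure from Mirai:

```python
# Rough illustration of what a 37% generation-speed boost means in practice.
# The baseline throughput below is an assumption for illustration only.

baseline_tps = 20.0                # assumed baseline throughput (tokens/sec)
boosted_tps = baseline_tps * 1.37  # +37%, per Mirai's claim

n_tokens = 500                     # e.g. a one-page summary
baseline_latency = n_tokens / baseline_tps  # 25.0 seconds
boosted_latency = n_tokens / boosted_tps    # ~18.2 seconds

print(f"{boosted_tps:.1f} tok/s; {baseline_latency:.1f}s -> {boosted_latency:.1f}s")
```

In other words, under these assumed numbers, a task that took 25 seconds would finish in roughly 18—a noticeable difference for interactive use.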

Their upcoming SDK promises to make integration remarkably simple. “One of the visions why we started the company was that we wanted to give developers, like this Stripe-like, eight lines of code [integration] experience…you basically go to our platform, integrate the key, and start working with summarization, classification, or whatever your use case is,” Shvets explained.

This developer-friendly approach could be transformative. By abstracting away the complexity of on-device optimization, Mirai enables app developers to focus on creating compelling user experiences rather than wrestling with low-level performance tuning.
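Shvets’ “eight lines of code” analogy suggests an interface along these lines. Everything below—the `MiraiClient` class, the `summarize` method, the key format—is hypothetical, invented purely to illustrate the developer experience being described; a local stub stands in for the unreleased SDK:

```python
# Hypothetical sketch of a Stripe-like on-device SDK integration.
# MiraiClient is a stub invented for illustration; Mirai's actual SDK,
# class names, and key format have not been published.

class MiraiClient:
    """Stand-in for an on-device inference client."""

    def __init__(self, api_key: str):
        self.api_key = api_key  # platform key, per the article's description

    def summarize(self, text: str, max_sentences: int = 1) -> str:
        # A real SDK would run a local model; this stub just truncates.
        sentences = [s.strip() for s in text.split(".") if s.strip()]
        return ". ".join(sentences[:max_sentences]) + "."

# The "eight lines" experience: create a client, call a task.
client = MiraiClient(api_key="mk_test_123")  # hypothetical key format
print(client.summarize("On-device AI cuts latency. It also cuts cost."))
```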

Current Capabilities and Future Vision

Currently, Mirai’s stack focuses on optimizing text and voice modalities, with vision support planned for the future. The team is already collaborating with frontier model providers to tune their models for edge deployment and engaging with chipmakers to ensure broad hardware compatibility.

Looking ahead, Mirai plans to expand support to Android devices, bringing their optimization technology to the broader mobile ecosystem. They’re also developing on-device benchmarks to help model makers evaluate performance in real-world conditions—a critical tool as the industry shifts toward hybrid cloud-edge architectures.

Perhaps most significantly, Mirai is building an orchestration layer that enables mixed-mode operation. This means that when a task can’t be completed on-device due to complexity or resource constraints, the system automatically routes it to the cloud. This hybrid approach ensures that developers don’t have to choose between pure on-device or pure cloud solutions—they get the best of both worlds.
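The routing decision described above can be sketched as a simple policy: run on-device when the task fits within local resources, otherwise fall back to the cloud. The thresholds, parameters, and function names here are invented for illustration and are not Mirai’s actual design:

```python
# Minimal sketch of a cloud/edge orchestration policy, assuming the router
# can estimate a task's memory need and knows the device's budget.
# All names and thresholds are illustrative.

def route(task_mem_gb: float, device_budget_gb: float,
          requires_large_model: bool) -> str:
    """Return 'device' when the task fits locally, else 'cloud'."""
    if requires_large_model:
        return "cloud"    # frontier-scale model: no local option
    if task_mem_gb <= device_budget_gb:
        return "device"   # fits within the on-device memory budget
    return "cloud"        # too heavy for this device

# Example: short summarization fits on a phone; long-context analysis may not.
print(route(1.5, 6.0, False))  # -> device
print(route(9.0, 6.0, False))  # -> cloud
print(route(0.5, 6.0, True))   # -> cloud
```

A production router would weigh more signals—battery state, thermal headroom, network quality—but the fallback structure would likely look similar.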

The Market Opportunity

The timing for Mirai’s technology appears opportune. As Andy McLoughlin, managing partner at Uncork Capital, notes, the economics of cloud-based AI inference are becoming increasingly unsustainable. “Given the cost of cloud inference, something has to change… For now, VCs are happy to continue funding the rocket ship companies, spending inordinate sums on cloud inference. But that won’t last—at some point, people will focus on the underlying economics of these businesses and realize that something has to change.”

McLoughlin draws from experience, having invested in an edge machine learning company in the previous decade. While that venture was “early” and eventually sold to Spotify, he believes the current landscape is fundamentally different. The combination of more sophisticated models, improved hardware capabilities, and the urgent need for cost optimization creates a perfect storm for on-device AI solutions.

“Every model maker will want to run part of their inference workloads at the edge, and Mirai feels very well-positioned to capture this demand,” McLoughlin predicts.

Who’s Behind the Investment

Mirai’s seed round attracted an impressive roster of individual investors alongside Uncork Capital. Participants include David Singleton (CEO of Dreamer), Francois Chaubard (YC Partner), Marcin Żukowski (Snowflake co-founder), Mati Staniszewski (ElevenLabs co-founder), Gokul Rajaram (former Google AdSense product manager and Coinbase board member), Scooter Braun (Groq investor), Vijay Krishnan (Turing.com CTO), Ben Parr and Matt Schlicht (Theory Forge Ventures), and Aditya Jami (ex-Netflix technical leader).

This diverse group of backers brings expertise spanning AI infrastructure, consumer applications, and enterprise technology—suggesting broad confidence in Mirai’s approach and market potential.

The Road Ahead

While Mirai isn’t yet working directly with production apps, their technology could power a wide range of on-device AI applications. Potential use cases include on-device assistants that work offline, real-time transcription and translation services, privacy-preserving chat applications, and sophisticated image and video processing tools.

The company’s success could have far-reaching implications for the AI industry. By making on-device AI more practical and accessible, Mirai could help address critical concerns around data privacy, reduce dependency on cloud infrastructure, enable new categories of offline applications, and fundamentally alter the economics of AI deployment.

As the AI landscape continues to evolve, Mirai represents a compelling bet on a future where the most powerful AI doesn’t live in distant data centers but in the devices we carry every day. In a world increasingly concerned with privacy, latency, and cost, that future might arrive sooner than many expect.

Tags

On-device AI, Edge computing, AI infrastructure, Mobile AI, Apple Silicon optimization, Rust programming, AI inference, Cloud cost optimization, Privacy-preserving AI, Consumer AI applications, Generative AI, Machine learning deployment, AI developer tools, Mixed cloud-edge architecture, AI performance optimization
