InfiniMind: The AI Startup Turning Dark Data Into Gold, One Petabyte at a Time

Ex-Googlers are building infrastructure to help companies understand their video data

In a world drowning in video content, most of it is collecting digital dust. From decades of broadcast archives to thousands of store cameras and endless hours of production footage, companies are generating more video than ever, yet the vast majority of it sits untouched, unanalyzed, and unused. This is what the tech world calls “dark data”: a massive, untapped resource that companies collect automatically but almost never use in any meaningful way.

But two former Googlers are betting they can change that.

Meet Aza Kai (CEO) and Hiraku Yanagita (COO), the dynamic duo behind InfiniMind, a Tokyo-based startup that’s building the infrastructure to convert petabytes of unviewed video and audio into structured, queryable business data. Think of it as AI-powered archaeology for your company’s forgotten footage.
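
InfiniMind hasn’t published its data model, but the core idea is easy to sketch: every scene in an archive becomes a row you can query. Here is a minimal, entirely hypothetical illustration in Python (the schema and the “AcmeCook” brand are invented for this example):

```python
import sqlite3

# Hypothetical illustration only: InfiniMind has not published its schema.
# The idea is that every scene in an archive becomes a row you can query,
# instead of an opaque video file nobody ever watches.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE scenes (
        video_id   TEXT,   -- source file in the archive
        start_sec  REAL,   -- scene start within the video
        end_sec    REAL,   -- scene end
        speakers   TEXT,   -- people identified on screen
        summary    TEXT,   -- model-generated description of the scene
        brands     TEXT    -- products/logos detected
    )
""")
conn.execute(
    "INSERT INTO scenes VALUES (?, ?, ?, ?, ?, ?)",
    ("morning_show_2019_04_02.mp4", 312.0, 347.5,
     "host_a;guest_chef", "Guest chef demonstrates a rice cooker", "AcmeCook"),
)

# Once footage is rows, "find every scene featuring brand X" is one query.
for row in conn.execute(
    "SELECT video_id, start_sec, end_sec FROM scenes WHERE brands LIKE ?",
    ("%AcmeCook%",),
):
    print(row)
```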

From Google to Global Disruption

Kai and Yanagita spent nearly a decade working together at Google Japan, where they saw the future of video data unfolding before their eyes. Kai’s background spans cloud computing, machine learning, ad systems, and video recommendation models, while Yanagita led brand and data solutions. Together, they realized that the technology had finally matured to solve a problem that had plagued businesses for years.

“We saw this inflection point coming while we were still at Google,” Kai told TechCrunch. “By 2024, the market demand had become clear enough that we felt compelled to build the company ourselves.”

The secret sauce? Vision-language models that evolved dramatically between 2021 and 2023. Earlier AI could label objects in individual frames, but it couldn’t track narratives, understand causality, or answer complex questions about video content. Now, thanks to falling GPU costs and model performance that improves 15-20% a year, the technology can finally do the job.
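
To see the gap the founders describe, consider a toy contrast between the two eras. Both “models” below are stubs invented for illustration, not anyone’s real API:

```python
# Illustrative only: both "models" here are stubs, not a real API from
# InfiniMind or anyone else. They contrast the two eras described above.

def detect_objects(frame):
    """Stand-in for a circa-2020 per-frame object detector."""
    return ["person", "car"]  # canned labels for illustration

def label_frames(frames):
    # Old-style pipeline: every frame gets labels, but nothing links one
    # frame to the next -- no narrative, no causality, no "why".
    return {i: detect_objects(f) for i, f in enumerate(frames)}

def ask_video(clip, question):
    """Stand-in for a modern vision-language model over a whole clip."""
    # A real VLM reasons across the full temporal sequence before answering.
    return "The car stopped because a pedestrian entered the road at 00:41."

frames = ["frame_0", "frame_1"]  # placeholder frame data
print(label_frames(frames))      # {0: ['person', 'car'], 1: ['person', 'car']}
print(ask_video(frames, "Why did the car stop?"))
```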

$5.8 Million Seed Round to Fuel the AI Revolution

InfiniMind just secured $5.8 million in seed funding, led by UTEC and joined by CX2, Headline Asia, Chiba Dojo, and an AI researcher from the a16z Scout program. The company is relocating its headquarters to the U.S. while maintaining operations in Japan, a structure that lets it keep drawing on Japan’s strong hardware industry, deep pool of engineers, and supportive startup ecosystem.

Japan served as the perfect testbed, allowing the team to fine-tune its technology with demanding customers before going global. Their first product, TV Pulse, launched in Japan in April 2025, and it’s already making waves. The AI-powered platform analyzes television content in real time, helping media and retail companies track product exposure, brand presence, customer sentiment, and PR impact.
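
TV Pulse’s internals aren’t public, but the kind of metric it reports, such as seconds of on-screen brand exposure, can be sketched in a few lines. The detection stream here is invented for illustration:

```python
from collections import Counter

# Hypothetical sketch of a TV Pulse-style metric; the detection stream
# below is fabricated for illustration, not real product output.

def brand_exposure_seconds(detections):
    """detections: iterable of (timestamp_sec, brands_on_screen) pairs."""
    exposure = Counter()
    for _ts, brands in detections:
        for brand in brands:
            exposure[brand] += 1   # one sampled second of on-screen presence
    return exposure

# One detection per sampled second of a broadcast.
stream = [
    (0, {"AcmeCola"}),
    (1, {"AcmeCola", "ZenWater"}),
    (2, {"ZenWater"}),
]
print(brand_exposure_seconds(stream))  # Counter({'AcmeCola': 2, 'ZenWater': 2})
```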

After pilot programs with major broadcasters and agencies, TV Pulse already has paying customers, including wholesalers and media companies. But this is just the beginning.

DeepFrame: The Future of Video Intelligence

Now, InfiniMind is ready to take on the world with DeepFrame, its flagship long-form video intelligence platform. Scheduled for a beta release in March 2026 and a full launch that April, DeepFrame can process 200 hours of footage to pinpoint specific scenes, speakers, or events with unprecedented accuracy.
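
How do you pinpoint one scene in 200 hours? InfiniMind hasn’t said how DeepFrame does it, but a common approach in video search is embedding-based retrieval: encode each scene and the query into a shared vector space, then rank by similarity. A toy version with hand-made vectors:

```python
import math

# A common video-search technique, shown with hand-made toy vectors.
# Whether DeepFrame works this way is not public; a real system would
# compute these embeddings with a video encoder over the chunked footage.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# (scene_id, embedding) pairs standing in for ~200 hours of indexed scenes.
scene_index = [
    ("ep12 00:14:05 press conference", [0.9, 0.1, 0.0]),
    ("ep12 01:02:40 factory b-roll",   [0.1, 0.8, 0.2]),
    ("ep13 00:33:12 CEO interview",    [0.7, 0.2, 0.4]),
]

query_embedding = [0.8, 0.1, 0.3]  # stand-in for an encoded text query

best = max(scene_index, key=lambda s: cosine(query_embedding, s[1]))
print(best[0])  # ep13 00:33:12 CEO interview
```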

Here’s the kicker: no code required. Clients bring their data, and InfiniMind’s system processes it and returns actionable insights. The platform understands sound and speech as well as visuals, and it can handle video of unlimited length. Most importantly, it solves the cost challenges that have plagued existing solutions.

The Battle for Video AI Supremacy

The video analysis space is highly fragmented, with companies like TwelveLabs offering general-purpose video understanding APIs to a broad range of users. InfiniMind, by contrast, is laser-focused on enterprise use cases: monitoring, safety, and security, along with mining video content for deeper insights.

“Most existing solutions prioritize accuracy or specific use cases but don’t solve cost challenges,” Kai explained. “We’re different. We’re building the infrastructure that makes dark data actionable.”

The Ultimate Goal: Understanding Reality Itself

This isn’t just about making businesses more efficient. Kai sees the work as “one of the paths toward AGI,” or artificial general intelligence.

“Understanding general video intelligence is about understanding reality,” he said. “Industrial applications are important, but our ultimate goal is to push the boundaries of technology to better understand reality and help humans make better decisions.”

With the seed funding, InfiniMind plans to continue developing the DeepFrame model, expand its engineering infrastructure, hire more engineers, and reach additional customers across Japan and the U.S. The future of video data is here, and it’s brighter than ever.


Tags: #InfiniMind #AI #DarkData #VideoAnalytics #MachineLearning #TechCrunch #Startup #VentureCapital #DeepFrame #TVPulse #AGI #FutureOfTech #DataScience #Innovation #BusinessIntelligence

