What AI Models for War Actually Look Like
AI Startup Smack Technologies Raises $32 Million to Build Military-Grade AI Models
Silicon Valley is witnessing a new arms race in artificial intelligence, but this time the battlefield is literal. Smack Technologies, a startup developing specialized AI for military applications, has secured a $32 million funding round to create models that its founders claim will soon surpass the capabilities of leading commercial AI systems like Anthropic’s Claude in planning and executing military operations.
The startup’s approach stands in stark contrast to that of companies like Anthropic, whose stance on military applications of AI has drawn controversy. While Anthropic recently clashed with the Department of Defense over contract terms, a dispute that led Defense Secretary Pete Hegseth to declare the company a “supply chain risk,” Smack Technologies appears to have no such reservations about developing AI for military use.
“I think the people who deploy the technology and make sure it is used ethically need to be in a uniform,” says CEO Andy Markoff, a former commander in the US Marine Forces Special Operations Command who executed high-stakes special forces operations in Iraq and Afghanistan. His co-founders include Clint Alanis, another ex-Marine, and Dan Gould, a computer scientist who previously served as VP of Technology at Tinder.
The AlphaGo Approach to Military Planning
Smack’s AI models learn to identify optimal mission plans through a process remarkably similar to the one Google DeepMind used to train AlphaGo, the program that famously defeated world-champion players of the ancient board game Go in 2016 and 2017. The startup runs its models through various war game scenarios, with expert military analysts providing feedback that tells the model whether its chosen strategy will succeed.
This reinforcement learning approach represents a significant departure from the general-purpose training of models like Claude. While those systems excel at tasks like report summarization, Markoff argues they lack the specialized knowledge and physical world understanding necessary for military applications.
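To make the analogy concrete, here is a minimal, hypothetical sketch of that kind of reward-driven training loop: a tabular policy proposes a strategy for a simulated scenario, a stand-in for expert analyst feedback returns a reward, and a REINFORCE-style update nudges the policy toward strategies that succeed. All scenario names, strategies, and reward values below are invented for illustration; nothing here is drawn from Smack’s actual models or training data.

```python
# Illustrative sketch only: a tiny REINFORCE loop over war game scenarios.
# Scenarios, strategies, and rewards are invented; Smack's real pipeline
# is not public.
import math
import random

SCENARIOS = ["river_crossing", "night_raid", "convoy_escort"]
STRATEGIES = ["frontal", "flanking", "infiltration"]

# Hypothetical stand-in for expert analyst feedback: +1 if the chosen
# strategy would succeed in the war game, -1 otherwise.
ANALYST_FEEDBACK = {
    ("river_crossing", "flanking"): 1.0,
    ("night_raid", "infiltration"): 1.0,
    ("convoy_escort", "frontal"): 1.0,
}

# Tabular "policy": one logit per (scenario, strategy) pair.
logits = {(s, a): 0.0 for s in SCENARIOS for a in STRATEGIES}

def policy(scenario):
    """Softmax over strategies for a given scenario."""
    exps = {a: math.exp(logits[(scenario, a)]) for a in STRATEGIES}
    z = sum(exps.values())
    return {a: e / z for a, e in exps.items()}

def sample(probs):
    """Draw one strategy according to the policy's probabilities."""
    r, acc = random.random(), 0.0
    for a, p in probs.items():
        acc += p
        if r <= acc:
            return a
    return a

LR = 0.5
for _ in range(2000):
    scenario = random.choice(SCENARIOS)
    probs = policy(scenario)
    action = sample(probs)
    reward = ANALYST_FEEDBACK.get((scenario, action), -1.0)
    # REINFORCE update: raise or lower the sampled strategy's probability
    # in proportion to the reward it earned.
    for a in STRATEGIES:
        grad = (1.0 if a == action else 0.0) - probs[a]
        logits[(scenario, a)] += LR * reward * grad

for s in SCENARIOS:
    probs = policy(s)
    print(s, "->", max(probs, key=probs.get))
```

The same principle scales up in the obvious way: swap the tabular policy for a large model and the lookup table for live war game adjudication by human analysts.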
“I can tell you they are absolutely not capable of target identification,” Markoff states bluntly. “No one that I’m aware of in the Department of War is talking about fully automating the kill chain.”
The Military AI Landscape
The controversy surrounding Anthropic’s military contracts has highlighted the growing tension between Silicon Valley’s AI companies and the defense establishment. The breakdown centered partly on Anthropic’s desire to limit the use of its models in autonomous weapons, a restriction Smack Technologies shows no sign of adopting.
Markoff suggests this debate misses a crucial point: today’s large language models simply aren’t optimized for military use. General-purpose models, despite their impressive capabilities in civilian contexts, lack training on military data and struggle with the physical reasoning required for military hardware control.
The US military already employs autonomous systems in specific contexts, particularly in missile defense systems that must react at superhuman speeds. According to Rebecca Crootof, an authority on autonomous weapons law at the University of Richmond School of Law, “The US and over 30 other states are already deploying weapon systems with varying degrees of autonomy, including some I would define as fully autonomous.”
From Whiteboards to AI-Powered Planning
Smack’s immediate focus is on automating the often tedious process of military mission planning. Currently, planning typically involves manual work with whiteboards and notepads—a process that Smack aims to streamline through specialized AI assistance.
The startup’s models are designed to help commanders by handling much of the computational heavy lifting involved in mission planning, potentially freeing human decision-makers to focus on higher-level strategic considerations.
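As a rough illustration of that computational heavy lifting, the sketch below enumerates candidate courses of action and filters them against a hard time constraint, the sort of rote feasibility checking that today happens on whiteboards. The route distances, movement speeds, and time limit are all invented for illustration; this is not Smack’s product.

```python
# Invented example of automated feasibility checking in mission planning.
# All numbers are made up for illustration.
from itertools import product

ROUTES = {"north": 40, "river": 25, "ridge": 55}   # distance in km
SPEEDS = {"vehicle": 50, "foot": 5}                 # speed in km/h
TIME_LIMIT_H = 6.0                                  # hard mission constraint

def feasible_plans():
    """Yield (route, movement, hours) combinations that meet the time limit."""
    for route, movement in product(ROUTES, SPEEDS):
        hours = ROUTES[route] / SPEEDS[movement]
        if hours <= TIME_LIMIT_H:
            yield route, movement, hours

# Rank the surviving options fastest-first for the human decision-maker.
for route, movement, hours in sorted(feasible_plans(), key=lambda p: p[2]):
    print(f"{route} by {movement}: {hours:.1f} h")
```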
However, the implications become more profound when considering potential conflicts with near-peer adversaries like Russia or China. Markoff suggests that in such scenarios, automated decision-making could provide the US with a critical “decision dominance” advantage.
The Risks of AI in Military Decision-Making
Not everyone is convinced that AI can reliably handle military decision-making, especially in high-stakes scenarios. A recent experiment by a researcher at King’s College London found that large language models tended to escalate nuclear conflicts in war game simulations, raising serious questions about AI’s suitability for military applications.
This finding underscores the complex ethical and practical challenges of deploying AI in military contexts. While proponents argue that specialized military AI could enhance decision-making and operational efficiency, critics worry about the potential for unintended escalation and the difficulty of ensuring AI systems behave as intended in chaotic, real-world combat scenarios.
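For readers curious how such an experiment might be scored, the toy harness below tallies a mean escalation level across a model’s simulated decisions. The rubric, action names, and transcripts are all invented; they do not reproduce the King’s College London study’s actual methodology or results.

```python
# Toy sketch of scoring model behavior in a crisis simulation.
# Rubric and transcripts are fabricated for illustration only.
from statistics import mean

# Hypothetical escalation rubric: each action a model can pick in the
# simulation maps to an escalation score (0 = de-escalate, 3 = nuclear use).
ESCALATION_RUBRIC = {
    "open_negotiations": 0,
    "impose_sanctions": 1,
    "conventional_strike": 2,
    "nuclear_strike": 3,
}

def escalation_score(transcript):
    """Mean escalation level across the actions a model chose in one run."""
    scores = [ESCALATION_RUBRIC[a] for a in transcript if a in ESCALATION_RUBRIC]
    return mean(scores) if scores else 0.0

# Fabricated transcripts standing in for a model's decisions across turns.
runs = [
    ["impose_sanctions", "conventional_strike", "conventional_strike"],
    ["open_negotiations", "impose_sanctions", "nuclear_strike"],
]
for i, run in enumerate(runs, 1):
    print(f"run {i}: mean escalation = {escalation_score(run):.2f}")
```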
Funding and Future Plans
The $32 million funding round will enable Smack Technologies to continue training its specialized models, with the company already spending millions on its initial AI development. While this budget pales in comparison to the resources available to major frontier AI labs, the focused nature of Smack’s mission—building AI specifically optimized for military applications—may allow it to achieve competitive results despite more limited resources.
As the debate over military AI continues to evolve, Smack Technologies represents a significant bet that specialized, defense-focused AI development is a viable and valuable path forward. Whether this approach will ultimately prove successful, and whether society will accept the implications of increasingly autonomous military systems, remain among the most pressing questions in both AI development and military strategy.
Tags:
AI military applications, autonomous weapons, defense technology, military AI, Smack Technologies, Andy Markoff, Anthropic controversy, defense contracting, AI ethics, war games, special forces technology, Silicon Valley defense, reinforcement learning military, autonomous decision-making, Pentagon AI, military contracting