The creator of Claude Code just revealed his workflow, and developers are losing their minds

The “Starcraft” Coding Revolution: How Anthropic’s Boris Cherny Is Rewriting the Rules of Software Development

When Boris Cherny, the creator and head of Claude Code at Anthropic, casually shared his terminal setup on X last week, he inadvertently dropped a tactical nuke on Silicon Valley’s collective consciousness. What followed wasn’t just another viral thread—it was a fundamental reimagining of what it means to be a software engineer in the age of AI.

The engineering community didn’t just listen; they scrambled to take notes, screenshot Cherny’s workflow, and immediately begin implementing his strategies. Industry insiders are calling it a watershed moment that could redefine how startups approach software development entirely.

“If you’re not reading the Claude Code best practices straight from its creator, you’re behind as a programmer,” wrote Jeff Tang, a prominent voice in the developer community. Kyle McNease, another industry observer, went even further, declaring that with Cherny’s “game-changing updates,” Anthropic is “on fire,” potentially facing “their ChatGPT moment.”

The excitement stems from a paradox: Cherny’s workflow is surprisingly simple in concept, yet it allows a single human to operate with the output capacity of a small engineering department. As one user noted after implementing Cherny’s setup, the experience “feels more like Starcraft” than traditional coding—a shift from typing syntax to commanding autonomous units.

Here’s an analysis of the workflow that’s reshaping how software gets built, straight from the architect himself.

From Linear Coding to Real-Time Strategy: The Five-Agent Fleet

The most striking revelation from Cherny’s disclosure is that he doesn’t code in a linear fashion. In the traditional “inner loop” of development, a programmer writes a function, tests it, and moves to the next. Cherny, however, acts as a fleet commander.

“I run 5 Claudes in parallel in my terminal,” Cherny wrote. “I number my tabs 1-5, and use system notifications to know when a Claude needs input.”

Using iTerm2 system notifications, Cherny effectively manages five simultaneous work streams: while one agent runs a test suite, another refactors a legacy module and a third drafts documentation. He also runs “5-10 Claudes on claude.ai” in his browser, using a “teleport” command to hand off sessions between the web and his local machine.
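
Replicating the notification piece does not require anything exotic. Claude Code ships with a hooks mechanism that can run an arbitrary shell command when the agent pauses for input; a minimal macOS sketch, placed in a project’s .claude/settings.json, might look like the following (the exact hook names and schema can vary between versions, and this is an illustration rather than Cherny’s actual configuration):

    {
      "hooks": {
        "Notification": [
          {
            "hooks": [
              {
                "type": "command",
                "command": "osascript -e 'display notification \"A Claude session needs input\" with title \"Claude Code\"'"
              }
            ]
          }
        ]
      }
    }

Each numbered tab then surfaces a desktop notification the moment its Claude stalls, so the operator only context-switches when a unit actually needs orders.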

This parallel, commander-style approach validates the “do more with less” strategy articulated by Anthropic President Daniela Amodei earlier this week. While competitors like OpenAI pursue trillion-dollar infrastructure build-outs, Anthropic is proving that superior orchestration of existing models can yield outsized productivity gains.

The shift is profound: instead of being a single point of execution, the developer becomes a strategic commander allocating resources across multiple simultaneous operations. It’s not about typing faster—it’s about thinking broader.

The Counterintuitive Case for the Slowest, Smartest Model

In a surprising move for an industry obsessed with latency, Cherny revealed that he exclusively uses Anthropic’s heaviest, slowest model: Opus 4.5.

“I use Opus 4.5 with thinking for everything,” Cherny explained. “It’s the best coding model I’ve ever used, and even though it’s bigger & slower than Sonnet, since you have to steer it less and it’s better at tool use, it is almost always faster than using a smaller model in the end.”
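
In practice, defaulting to the larger model is a one-line change. With a recent Claude Code CLI, a session can be pinned to Opus at launch; model aliases, and how the “thinking” mode Cherny mentions is enabled, vary by version, so treat this as an illustrative sketch rather than his exact invocation:

    claude --model opus

The same preference can typically be persisted in the project’s settings file so every new session starts on the heavier model.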

For enterprise technology leaders, Cherny’s point is a critical insight. The bottleneck in modern AI-assisted development isn’t token generation speed; it’s the human time spent correcting the AI’s mistakes. Cherny’s workflow suggests that paying the “compute tax” for a smarter model upfront eliminates the “correction tax” later.

This represents a fundamental shift in how we evaluate AI performance. The traditional metric of tokens per second becomes less relevant when you’re measuring developer throughput. A slower model that requires less supervision and produces better results can actually accelerate the entire development pipeline.

The Self-Correcting Codebase: Turning Mistakes into Institutional Memory

Cherny also detailed how his team solves the problem of AI amnesia. Standard large language models don’t “remember” a company’s specific coding style or architectural decisions from one session to the next.

To address this, Cherny’s team maintains a single file named CLAUDE.md in their git repository. “Anytime we see Claude do something incorrectly we add it to the CLAUDE.md, so Claude knows not to do it next time,” he wrote.
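
The file itself is ordinary Markdown that Claude Code reads at the start of every session. A hypothetical excerpt, invented here to illustrate the pattern rather than taken from Anthropic’s repository, might look like this:

    # CLAUDE.md

    ## Conventions
    - Use the shared logger; never add bare print or console statements.
    - Every new API route needs an integration test before a PR is opened.

    ## Lessons from past mistakes
    - Do not reformat generated files under src/generated/; it breaks the diff.
    - Staging configuration comes from environment variables, not from a checked-in file.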

This practice transforms the codebase into a self-correcting organism. When a human developer reviews a pull request and spots an error, they don’t just fix the code; they tag the AI to update its own instructions. “Every mistake becomes a rule,” noted Aakash Gupta, a product leader analyzing the thread. The longer the team works together, the smarter the agent becomes.

The implications are profound: instead of institutional knowledge being trapped in the heads of senior engineers or buried in documentation that no one reads, it becomes codified in a format that the AI itself can learn from. The codebase evolves not just through human commits but through AI instruction refinement.

Automation at Scale: Slash Commands and Specialized Subagents

The “vanilla” workflow that one observer praised is powered by rigorous automation of repetitive tasks. Cherny uses slash commands, custom shortcuts checked into the project’s repository, to handle complex operations with a single command.

He highlighted a command called /commit-push-pr, which he invokes dozens of times daily. Instead of manually typing git commands, writing a commit message, and opening a pull request, the agent handles the bureaucracy of version control autonomously.
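
Custom slash commands are themselves just prompt files checked into the repository, typically as Markdown under .claude/commands/, where the file name becomes the command name. A hypothetical .claude/commands/commit-push-pr.md, sketched here rather than copied from Cherny’s setup, could read:

    Stage all current changes, write a concise commit message that summarizes them,
    push the branch, and open a pull request with gh pr create, using the commit
    message as the title and a short bullet summary of the changes as the body.

Typing /commit-push-pr in a session then expands into that prompt, which is how a multi-step version-control chore collapses into a single command.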

Cherny also deploys subagents—specialized AI personas—to handle specific phases of the development lifecycle. He uses a code-simplifier to clean up architecture after the main work is done and a verify-app agent to run end-to-end tests before anything ships.
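
Subagents follow the same file-based pattern: short Markdown definitions, typically under .claude/agents/, with a frontmatter block that tells the main agent when to delegate. A hypothetical code-simplifier definition, with field names that may differ from the current format, might be:

    ---
    name: code-simplifier
    description: Invoke after a feature is working to simplify the implementation without changing behavior.
    ---

    Review the diff on the current branch. Remove dead code, collapse unnecessary
    abstractions, and tighten naming. Do not change public interfaces or existing tests.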

This represents a fundamental shift in how we think about software development tools. Instead of building better hammers and screwdrivers, Cherny is building an entire automated construction crew. The slash commands and subagents become the operating system for this crew, allowing the human to focus on strategy rather than execution.

The Verification Loop: AI That Tests Its Own Work

If there’s a single reason Claude Code has reportedly hit $1 billion in annual recurring revenue so quickly, it’s likely the verification loop. The AI isn’t just a text generator; it’s a tester.

“Claude tests every single change I land to claude.ai/code using the Claude Chrome extension,” Cherny wrote. “It opens a browser, tests the UI, and iterates until the code works and the UX feels good.”

He argues that giving the AI a way to verify its own work—whether through browser automation, running bash commands, or executing test suites—improves the quality of the final result by “2-3x.” The agent doesn’t just write code; it proves the code works.
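
Teams without the browser extension can still close this loop by telling the agent how to check itself. A hypothetical addition to the same CLAUDE.md, with placeholder commands standing in for whatever the project actually uses, might read:

    ## Verification
    - After every change, run the full test suite (for example, npm test) and fix
      any failures before reporting the task as done.
    - For UI changes, start the dev server and walk through the affected flow
      end to end before opening a PR.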

This verification capability is what separates Claude Code from earlier AI coding tools. It’s not just about generating plausible-looking code; it’s about generating code that actually works and meets the specified requirements. The AI becomes both the author and the editor, closing the loop on quality assurance.

What This Means for the Future of Software Engineering

The reaction to Cherny’s thread suggests a pivotal shift in how developers think about their craft. For years, “AI coding” meant an autocomplete function in a text editor—a faster way to type. Cherny has demonstrated that it can now function as an operating system for labor itself.

“Read this if you’re already an engineer… and want more power,” Jeff Tang summarized on X.

The tools to multiply human output by a factor of five are already here. They require only a willingness to stop thinking of AI as an assistant and start treating it as a workforce. The programmers who make that mental leap first won’t just be more productive. They’ll be playing an entirely different game—and everyone else will still be typing.

This represents more than just a productivity boost; it’s a fundamental redefinition of the software engineering profession. The skills that matter shift from typing speed and syntax memorization to system architecture, strategic thinking, and AI orchestration. The best engineers of tomorrow won’t be those who can write the most elegant code by hand; they’ll be those who can most effectively direct and verify AI-generated code.

The implications extend beyond individual productivity. Companies that adopt these workflows could see their engineering output increase by orders of magnitude without proportional increases in headcount. This could accelerate innovation cycles, reduce time-to-market, and fundamentally change the economics of software development.

As one industry observer put it: “We’re not just getting faster at building software. We’re getting faster at building the future.”

