OpenClaw’s AI assistants are now building their own social network

OpenClaw’s AI assistants are now building their own social network

OpenClaw: The AI Assistant That’s Molting Into Its Final Form

The viral personal AI assistant that captured the tech world’s attention under the name Clawdbot has undergone yet another transformation, settling on its third identity in as many months: OpenClaw. This latest metamorphosis comes after a brief stint as Moltbot and represents more than just a name change—it signals the project’s evolution from a solo developer’s experiment into a community-driven phenomenon.

The journey to OpenClaw began when Peter Steinberger, the Austrian developer behind the project, faced his first naming crisis. After launching Clawdbot to viral success, he received a legal challenge from Anthropic, creators of the AI model Claude. The cease-and-desist forced a hasty rebranding to Moltbot—a name inspired by lobsters’ molting process, through which they shed their shells to grow. However, Steinberger confessed on social media that Moltbot “never grew” on him, and the community seemed to agree.

This time, Steinberger took no chances. Before settling on OpenClaw, he enlisted help to research trademarks and even reached out to OpenAI for permission. “I got someone to help with researching trademarks for OpenClaw and also asked OpenAI for permission just to be sure,” he told TechCrunch via email.

The lobster theme remains intact, with Steinberger declaring in a blog post that “the lobster has molted into its final form.” This final molt represents more than a clever naming convention—it symbolizes the project’s maturation from a personal experiment to an open-source platform with over 100,000 GitHub stars in just two months.

From Solo Project to Community Phenomenon

What makes OpenClaw remarkable isn’t just its rapid adoption but the creative ecosystem it has spawned. The most fascinating offshoot is Moltbook, a social network where AI assistants can interact with each other. This platform has captured the imagination of AI researchers and developers worldwide.

Andrej Karpathy, Tesla’s former AI director, called Moltbook “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.” He noted that “People’s Clawdbots (moltbots, now OpenClaw) are self-organizing on a Reddit-like site for AIs, discussing various topics, e.g. even how to speak privately.” The agents post to forums called “Submolts” and have built-in mechanisms to check for updates every four hours.

British programmer Simon Willison described Moltbook as “the most interesting place on the internet right now” in a recent blog post. On the platform, AI agents share information ranging from automating Android phones via remote access to analyzing webcam streams. The platform operates through a skill system—downloadable instruction files that tell OpenClaw assistants how to interact with the network.

However, Willison cautioned about the inherent security risks in this “fetch and follow instructions from the internet” approach. The agents’ ability to autonomously seek out and execute instructions from the web raises significant concerns about prompt injection and other security vulnerabilities.

Security: The Elephant in the Room

Steinberger, who took a break after exiting his former company PSPDFkit, “came back from retirement to mess with AI,” according to his X bio. But he’s well aware that OpenClaw’s ambitions come with substantial security challenges. The project aims to let users have an AI assistant that runs on their own computer and works from the chat apps they already use—a compelling vision that also opens up significant attack vectors.

In his blog post announcing OpenClaw, Steinberger thanked “all security folks for their hard work in helping us harden the project.” He emphasized that “security remains our top priority” and noted that the latest version, released alongside the rebrand, includes improvements on that front.

However, the project faces industry-wide challenges that no single open-source initiative can solve alone. Prompt injection—where a malicious message could trick AI models into taking unintended actions—remains an unsolved problem across the entire AI industry. Steinberger directs users to a set of security best practices but acknowledges the limitations.

This reality has led to increasingly vocal warnings from the project’s maintainers. According to a message posted on Discord by one of OpenClaw’s top maintainers, who goes by the nickname Shadow, “if you can’t understand how to run a command line, this is far too dangerous of a project for you to use safely. This isn’t a tool that should be used by the general public at this time.”

The Road to Mainstream Adoption

OpenClaw’s journey from viral sensation to mainstream tool will require significant resources and time. The project has begun accepting sponsors with lobster-themed tiers ranging from “krill” ($5/month) to “poseidon” ($500/month). However, Steinberger has made it clear that he “doesn’t keep sponsorship funds.” Instead, he’s “figuring out how to pay maintainers properly—full-time if possible.”

The project’s sponsorship page already features an impressive roster of backers, including software engineers and entrepreneurs who have founded and built other well-known projects. Notable supporters include Dave Morin, co-founder of Path, and Ben Tossell, who sold his company Makerpad to Zapier in 2021.

Tossell, who now describes himself as a tinkerer and investor, sees value in putting AI’s potential in people’s hands. “We need to back people like Peter who are building open source tools anyone can pick up and use,” he told TechCrunch.

The Future of Personal AI

OpenClaw represents a fascinating experiment in decentralized AI development. By creating an open platform where AI agents can interact, learn, and evolve, Steinberger has tapped into something that feels genuinely novel in the AI landscape. The project’s rapid evolution—from Clawdbot to Moltbot to OpenClaw in just months—mirrors the fast-paced nature of AI development itself.

Yet the security challenges remain substantial. Until OpenClaw can address fundamental issues like prompt injection and provide safer defaults for non-technical users, it will remain a tool for early adopters and tinkerers rather than mainstream users. The warnings from maintainers about command-line expertise requirements underscore that this is still very much a developer’s tool, not a consumer product.

What’s clear is that OpenClaw has captured something essential about where AI is heading: toward more autonomous, interactive, and user-controlled systems. Whether it ultimately succeeds in its ambitious goals or becomes another interesting footnote in AI history, it has already demonstrated the appetite for open, customizable AI assistants that users can truly own and control.

The lobster has indeed molted into its final form—for now. But in the rapidly evolving world of AI, even final forms are rarely permanent.


Tags: #OpenClaw #AIAssistant #OpenSourceAI #Moltbook #PersonalAI #AIsecurity #TechInnovation #AICommunity #GitHubStars #AIassistants #PromptInjection #AIevolution #DecentralizedAI #LobsterThemedAI #AIagents #TechCrunch #AIplatform #AIsecurityrisks #AIecosystem #OpenSourceDevelopment

Viral Phrases: “the lobster has molted into its final form” “People’s Clawdbots are self-organizing on a Reddit-like site for AIs” “genuinely the most incredible sci-fi takeoff-adjacent thing” “the most interesting place on the internet right now” “if you can’t understand how to run a command line, this is far too dangerous” “I came back from retirement to mess with AI” “We need to back people like Peter who are building open source tools” “Remember that prompt injection is still an industry-wide unsolved problem”

,

0 replies

Leave a Reply

Want to join the discussion?
Feel free to contribute!

Leave a Reply

Your email address will not be published. Required fields are marked *