The latest wave of AI excitement has brought us an unexpected mascot: a lobster. Clawdbot, a personal AI assistant, went viral within weeks of its launch and will retain its crustacean theme despite having to change its name to Moltbot following a legal challenge from Anthropic. But before you jump on the bandwagon, here’s what you need to know.
According to its tagline, Moltbot (formerly Clawdbot) is “the AI ​​that actually does things”—whether it’s managing your calendar, sending messages through your favorite apps, or checking you in for flights. This promise has attracted thousands of users willing to tackle the necessary technical setup, even though it started as a scrappy personal project built by a developer for his own use.
That man is Peter Steinberger, an Austrian developer and founder who is known online as @steipete and actively blogs about his work. After stepping away from his previous project, PSPDFkit, Steinberger felt empty and barely touched his computer for three years, he explained on his blog. But eventually he found his spark again – which led to Moltbot.
Although Moltbot is now much more than a solo project, the publicly available version still originates from Clawd, “Peter’s crusted assistant,” now called Molty, a tool he built to help him “manage his digital life” and “explore what human-AI collaboration can be.”
For Steinberger, that meant delving deeper into the momentum around AI that had reignited his architectural spark. A self-confessed “Claudoholic,” he originally named his project after Anthropic’s flagship AI product, Claude. He revealed on X that Anthropic subsequently forced him to change the branding for copyright reasons. TechCrunch has reached out to Anthropic for comment. But the “lobster soul” of the project remains unchanged.
For its early adopters, Moltbot represents the vanguard of how helpful AI assistants can be. Those who were already excited by the prospect of using AI to quickly generate websites and apps are even more keen to have their personal AI assistant do the work for them. And like Steinberger, they are eager to tinker with it.
This explains how Moltbot so quickly amassed more than 44,200 stars on GitHub. So much viral attention has been paid to Moltbot that it has even moved markets. Cloudflare’s stock jumped 14% in premarket trading on Tuesday as social media buzz around the AI ​​agent reignited investor enthusiasm for Cloudflare’s infrastructure, which developers use to run Moltbot locally on their devices.
Techcrunch event
San Francisco
|
13.-15. October 2026
Still, it’s far from breaking out of early adopter territory, and that might be for the best. Installing Moltbot requires being tech savvy and that also includes awareness of the inherent security risks that come with it.
On the one hand, Moltbot is built with security in mind: it’s open source, meaning anyone can inspect its code for vulnerabilities, and it runs on your computer or server, not in the cloud. But on the other hand, its very premise is inherently risky. As entrepreneur and investor Rahul Sood pointed out at X, “‘actually doing things’ means ‘can execute arbitrary commands on your computer’.”
What keeps Sood up at night is “quick injection through content” – where a malicious person can send you a WhatsApp message that can cause Moltbot to perform unintended actions on your computer without your intervention or knowledge.
This risk can be partially mitigated by careful setup. Since Moltbot supports different AI models, users may want to make setup choices based on their resistance to these types of attacks. But the only way to prevent it completely is to run Moltbot in a silo.
This may be obvious to seasoned developers messing around with a weeks-old project, but some of them have become more vocal in warning users drawn in by the hype: things can get ugly quickly if they approach it as carelessly as ChatGPT.
Steinberger himself was served with a reminder that malicious actors exist when he “messed up” the renaming of his project. He complained on X that “crypto scammers” were hijacking his GitHub username and creating fake cryptocurrency projects in his name, and he warned followers that “any project that shows [him] as coin owner is a FIDEL.” He then wrote that the GitHub issue had been fixed, but warned that the legitimate X account is @moltbot, “not any of the 20 scam variants of it.”
This doesn’t necessarily mean you should stay away from Moltbot at this point if you’re curious about testing it out. But if you’ve never heard of a VPS—a virtual private server, which is essentially a remote computer you rent to run software—you might want to wait your turn. (This is where you might want to run Moltbot for now. “Not the laptop with your SSH keys, API credentials, and password manager,” Sood warned.)
Right now, running Moltbot securely means running it on a separate computer with discarded accounts, which defeats the purpose of having a useful AI assistant. And solving this trade-off between safety and utility may require solutions beyond Steinberger’s control.
Still, by building a tool to solve his own problem, Steinberger showed the developer community what AI agents could actually accomplish, and how autonomous AI could finally become truly useful instead of just impressive.
