Clawdbot went viral as an AI assistant and then had to change its name to Moltbot

A lobster themed AI assistant exploded onto the tech scene in January 2026 and became one of the most talked about projects on the internet. Clawdbot, as it was originally called, promised to be the AI assistant that actually does things rather than just answering questions. Within weeks of launch, it accumulated over 44,000 stars on GitHub and generated so much buzz that it moved stock markets. Then Anthropic stepped in with lawyers and forced the project to change its name to Moltbot.

The rebrand happened, but the lobster mascot stayed, and so did all the excitement and controversy around what this tool represents.

How Moltbot was built by one person

The story behind Moltbot starts with Peter Steinberger, an Austrian developer who is known online as steipete. Steinberger had previously founded PSPDFkit, a successful document collaboration company, but after stepping away from that project he felt empty and barely touched his computer for three years. When he finally found his motivation again, he dove deep into AI and specifically became obsessed with Anthropic’s Claude AI assistant. He jokingly called himself a “Claudoholic” while building a personal tool to help manage his digital life.

That personal project was Clawd, which Steinberger built to explore what human and AI collaboration could actually look like in practice. He wanted an assistant that could manage his calendar, send messages through his favorite apps, check him in for flights, and handle other digital tasks without him needing to manually do everything. The tool was open source from the beginning, and Steinberger actively blogged about his work and shared updates on social media.

The project resonated with other developers who were equally frustrated with AI assistants that could talk but not actually do much. Clawdbot represented something different. It was not just generating text or answering questions. It could execute commands on your computer, interact with websites and apps, and automate workflows in ways that felt genuinely useful rather than just impressive demos.

 

 

So, why did the name change to Moltbot?

Steinberger originally named his project after Claude, Anthropic’s flagship AI product, because he was such a fan and because Claude powered much of what his assistant could do. But Anthropic was not happy about this. The company sent a legal challenge forcing Steinberger to rebrand the project due to copyright concerns. He revealed on social media that Anthropic made him change everything, though he kept the lobster theme that had become associated with the project.

The rebrand from Clawdbot to Moltbot happened quickly. Steinberger announced the change on January 26, explaining that while the name was different, the “lobster soul” of the project remained unchanged. Molt is what lobsters do when they shed their old shell to grow a new one, so the name actually fits the crustacean theme perfectly. The GitHub repository got renamed, the website moved to a new domain, and thousands of early adopters had to update their installations.

The name change also created an unexpected problem. When Steinberger messed up part of the renaming process, crypto scammers immediately grabbed his old GitHub username and created fake cryptocurrency projects in his name. He had to warn followers on social media that any project listing him as a coin owner was a scam. GitHub eventually fixed the issue, but it highlighted how much attention the project was getting, including from malicious actors looking to exploit the hype.

What made this bot go viral?

Moltbot went from unknown personal project to viral sensation in a matter of weeks. The GitHub repository crossed 44,000 stars faster than almost any open source project in recent memory. Tech Twitter was flooded with posts from developers showing off what they had gotten Moltbot to do. The excitement reached such heights that it actually moved the stock market. Cloudflare’s stock surged 14 percent in premarket trading on January 27 because social media buzz around Moltbot reminded investors that Cloudflare’s infrastructure is what developers use to run the AI agent locally on their devices.

The appeal is obvious once you understand what Moltbot does. Unlike ChatGPT or other conversational AI assistants that mostly just chat with you, Moltbot can actually perform actions on your behalf. It can read your emails, send messages through WhatsApp or Telegram, manage your calendar, book flights, fill out forms on websites, and execute arbitrary commands on your computer. For people who spend their days managing dozens of small digital tasks, having an AI agent that can handle that busywork is incredibly attractive.

The fact that Moltbot is open source added to the appeal. Anyone can inspect the code to see exactly what it does and how it works. You can modify it to fit your specific needs. You own and control it rather than relying on a cloud service that could change its terms or shut down at any time. For the developer community, this combination of usefulness, transparency, and control was exactly what many people had been waiting for in an AI assistant.

 

Moltbot

 

How to use this bot safely?

Security experts who have analyzed Moltbot say there are ways to use it more safely, but they all involve trade offs that reduce how useful the tool actually is. The safest approach is to run Moltbot on a completely separate computer or virtual private server with throwaway accounts that are not connected to any of your real accounts or important data. This creates a sandbox where the AI agent can experiment and potentially make mistakes without causing real damage.

The problem with this approach is that it defeats the entire purpose of having a personal AI assistant. If Moltbot is running on a separate machine with fake accounts, it cannot actually manage your real calendar, send messages from your actual WhatsApp, or check you in for your real flights. The utility disappears when you isolate it for safety.

Some users are making setup choices based on which AI models are more resistant to prompt injection attacks. Moltbot supports various AI models including Claude, GPT-4, and others. The models have different levels of resistance to these kinds of attacks, though none are completely immune. Choosing a more robust model helps reduce risk but does not eliminate it entirely.

The uncomfortable reality is that running Moltbot safely right now requires being an experienced developer who understands these risks and knows how to mitigate them. If you have never set up a virtual private server, do not understand what prompt injection means, or approach this tool as casually as you would install ChatGPT, you could end up in a very bad situation.

Right now Moltbot is clearly a tool for early adopters and developers who enjoy tinkering with new technology. The installation process requires technical knowledge. You need to understand how to run code locally, configure API keys, set up integrations with various services, and troubleshoot when things inevitably break. This is not a polished consumer product with a simple installer and helpful error messages.

Share this post on

Leave a Reply

Your email address will not be published. Required fields are marked *