OpenClaw Had 210,000 GitHub Stars. Then Anthropic Shipped One Feature.
February 26, 2026
"Two months ago, I hacked together a weekend project. What started as WhatsApp Relay now has over 100,000 GitHub stars and drew 2 million visitors in a single week."
That is how Peter Steinberger described OpenClaw in January 2026. By February, the project had crossed 210,000 stars. Developers called it the closest thing to JARVIS anyone had shipped. Then Anthropic quietly updated Claude Code.
This is the third article in an accidental series. We wrote about Ryze, a vertical SaaS product that disappeared overnight when Anthropic added a native connector. We wrote about Claude Code Security and what happens when AI systems get too much access without enough oversight. OpenClaw is both of those stories happening at once.
What OpenClaw Actually Was
Strip away the GitHub star count and the JARVIS comparisons, and what remains is a pragmatic control plane with a tool surface. Not a mystical AI brain. Not a multi-agent swarm. A single process that owns your messaging connections and lets an LLM call tools on your behalf.
It runs on your own machine or server and connects directly to tools people already use: email, calendars, files, browsers, and popular messaging platforms like Slack, Telegram, WhatsApp, Discord, and more. Instead of opening a browser tab to talk to an AI, you just send a WhatsApp message. Your assistant is always on, always local, always yours.
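That single-process design can be sketched in a few lines: a message arrives from any channel, a model decides which local tool to invoke, and the same process executes it. This is a minimal illustration of the pattern, not OpenClaw's actual API; every name here (handle_message, TOOLS, plan) is hypothetical.

```python
# Minimal sketch of a single-process assistant loop: one channel message in,
# a planner picks a tool, the process executes it locally.
# All names are illustrative; this is not OpenClaw's real code.

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class ToolCall:
    name: str
    args: dict


# Local tools the agent may invoke on the user's own machine.
TOOLS: Dict[str, Callable[..., str]] = {
    "read_file": lambda path: open(path).read(),
    "send_reply": lambda channel, text: f"[{channel}] {text}",
}


def plan(message: str) -> ToolCall:
    # Stand-in for the LLM call: a real system would send `message`
    # plus tool schemas to a model and parse its chosen tool call.
    return ToolCall("send_reply", {"channel": "whatsapp", "text": f"echo: {message}"})


def handle_message(channel: str, message: str) -> str:
    call = plan(message)
    return TOOLS[call.name](**call.args)


print(handle_message("whatsapp", "hello"))  # -> [whatsapp] echo: hello
```

The point of the sketch is how little machinery the idea needs: the "assistant" is one dispatch loop, and everything interesting lives in which tools you hand it.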
The pitch was privacy-first and genuinely different: your machine, your keys, your data. No cloud subscription, no data leaving your device, no vendor lock-in. For developers who had grown skeptical of cloud-everything, this hit a nerve.
The naming history alone became a meme. The project launched as Clawd, a pun on Claude with a lobster claw. Anthropic's legal team politely asked for a rename. It became Moltbot briefly, then finally OpenClaw. Each rebrand pulled more attention. By the time it stabilized, it had more GitHub stars than most projects accumulate in years.
The Email Incident
The enthusiasm had a shadow.
OpenClaw's power came from the same thing that made it dangerous. A developer who gives it access to Gmail is giving an autonomous AI agent the ability to read, write, and delete emails without confirming each action. For experienced developers who set appropriate guardrails, this is manageable. For everyone else, it is a loaded gun.
The incident that circulated most widely: a user asked OpenClaw to clear an inbox. It started deleting everything. Not archiving. Not sorting. Deleting, at speed, across years of correspondence.
OpenClaw's own documentation acknowledged the problem plainly. Prompt injection, where a malicious instruction hidden inside an email or document hijacks the agent's behavior, is an industry-wide unsolved problem. The project published security best practices. It added allowlists and permission controls. But the fundamental tension between "agent that acts" and "agent that asks permission first" was never fully resolved.
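The guardrail pattern the documentation describes, an allowlist of permitted tools plus explicit confirmation for destructive actions, can be sketched like this. The tool names and the authorize function are hypothetical, not OpenClaw's real configuration format.

```python
# Sketch of an allowlist-plus-confirmation guard for agent tool calls.
# Hypothetical names; not OpenClaw's actual permission model.

from typing import Callable

DESTRUCTIVE = {"delete_email", "rm_file"}            # actions that need a human yes
ALLOWLIST = {"read_email", "archive_email", "delete_email"}


def authorize(tool: str, confirm: Callable[[str], bool]) -> bool:
    """Gate every agent tool call before execution."""
    if tool not in ALLOWLIST:
        return False          # not permitted at all
    if tool in DESTRUCTIVE:
        return confirm(tool)  # human-in-the-loop for deletions
    return True               # safe tools run unattended


# An agent asked to "clear the inbox" now needs a human yes per deletion:
assert authorize("archive_email", confirm=lambda t: False) is True
assert authorize("delete_email", confirm=lambda t: False) is False
assert authorize("rm_file", confirm=lambda t: True) is False  # never allowlisted
```

The tension the article names lives in that one `confirm` call: make it mandatory and the agent stops feeling autonomous; make it optional and you get the inbox incident.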
This is exactly the argument we made in the Claude Code Security article. The human-in-the-loop problem does not disappear when the software is open source. It gets harder because the user is now responsible for their own guardrails with no safety net.
What Claude Code Remote Control Actually Changed
In February 2026, Anthropic shipped Claude Code Remote Control. The feature allows a developer to start a coding session in their terminal and continue controlling it from their phone while the session keeps running locally.
Read that again slowly.
You start Claude Code on your machine. You step away. You send instructions from your phone. Claude executes them on your local environment. Your files, your terminal, your codebase, all accessible from a messaging interface.
That is OpenClaw's core value proposition. Not a similar product. The same idea, built natively into a tool that developers already have installed, already trust, already pay for, with Anthropic's safety infrastructure underneath it.
OpenClaw required setup. It required configuration. It required understanding allowlists and permission models and security best practices. Claude Code Remote Control required nothing new. If you already had Claude Code, you already had this.
The moat that OpenClaw built over two months (the GitHub stars, the community, the plugins, the documentation) did not matter. Distribution won. Not product.
The Pattern Is Not Subtle Anymore
This is the third time in one week we have watched the same thing happen.
Ryze built a product that connected AI to Google and Meta ad accounts. Anthropic added a native connector. Close rate went from 70% to 20% overnight.
OpenClaw built a product that connected AI to your messaging platforms and local environment. Anthropic shipped Remote Control. The category collapsed.
CrowdStrike and Palo Alto saw their stock prices fall the same week Anthropic shipped Claude Code Security, a tool for scanning codebases for vulnerabilities that overlapped directly with their enterprise offerings.
The pattern is not that Anthropic is deliberately targeting specific products. The pattern is that foundation model companies are expanding the definition of what a foundation model does. Every expansion absorbs categories that startups and open source projects had claimed.
This is the Indian IT parallel we drew in the Ryze article, playing out in real time across different product categories. The Indian IT industry built a $250 billion business on repeatable, well-defined tasks. Those tasks are exactly what AI absorbs fastest because they are well-specified enough to automate. OpenClaw automated well-defined messaging and local execution tasks. Claude Remote Control does the same thing with less friction.
The developers who saw this coming are already asking the right question: what is left that foundation models cannot absorb with a single feature update?
What Actually Survives
Steinberger's own answer, embedded in OpenClaw's Vision document, is worth reading. The project explicitly rejects agent-hierarchy frameworks and heavy orchestration layers. It chose simple, serialized, debuggable architecture on purpose. Not because complex orchestration is impossible but because it creates systems that are hard to trust.
That design philosophy, prioritizing human understanding over autonomous capability, is the thing Claude Code Remote Control does not replace. Anthropic can ship a feature that controls your terminal from your phone. It cannot ship a project that teaches you why that should make you nervous.
OpenClaw's community (the developers who understood its internals, who contributed security improvements, who wrote the allowlist documentation) is not replaced by Claude Code Remote Control. Those people are the ones who know why it matters.
The products that survive this consolidation are the ones that encode knowledge Claude cannot surface with a connector. Not automation. Not integration. Understanding.
One More Thing
OpenClaw started as a weekend hack. Crossed 100,000 GitHub stars in two months. Its creator got hired by OpenAI. The project moved to a foundation. The lobster mascot survived every rebrand.
That is not a failure story. That is what happens when you build something real fast enough for the right people to notice.
The question for every developer building on top of AI infrastructure right now is not whether this will happen to them. It is whether they will be the person who gets hired by OpenAI or the person still maintaining the codebase after the category disappears.
Build things Claude cannot replace. Or build them fast enough that someone hires you before it does.
This is the third article in an unplanned series on how AI is reshaping what developers build and who gets to build it. The first was Claude Code Security: The Argument for Human-in-the-Loop Just Got Harder. The second was Claude Killed My Startup. We did not plan a series. The news planned it for us.