
Currently, Anthropic and OpenAI hold a kill switch that can stop the spread of potentially harmful AI agents. OpenClaw primarily runs on their APIs, which means the AI models performing the agentic actions reside on their servers. Its GitHub repository recommends “Anthropic Pro/Max (100/200) + Opus 4.5 for long-context strength and better prompt-injection resistance.”
Most users connect their agents to Claude or GPT. These companies can see API usage patterns, system prompts, and tool calls. Hypothetically, they could identify accounts exhibiting bot-like behavior and stop them. They could flag recurring timed requests; system prompts referencing “agent,” “autonomous,” or “Moltbot”; high-volume tool use with external communication; or wallet-interaction patterns. They could then revoke the offending API keys.
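To make the detection side concrete, here is a minimal sketch of what such server-side heuristics could look like. Everything in it (the field names, the jitter threshold, the volume cutoff, the keyword list) is a hypothetical assumption for illustration, not a description of how Anthropic or OpenAI actually monitor traffic.

```python
# Hypothetical sketch of provider-side heuristics for flagging bot-like accounts.
# All field names, thresholds, and keywords are illustrative assumptions,
# not any real provider's monitoring pipeline.
from dataclasses import dataclass

AGENT_KEYWORDS = {"agent", "autonomous", "moltbot"}  # assumed keyword list


@dataclass
class AccountUsage:
    account_id: str
    request_intervals_sec: list[float]  # gaps between successive API calls
    system_prompts: list[str]           # system prompts seen on this key
    tool_calls_per_hour: float          # volume of tool-use requests
    external_comm_calls: int            # tool calls reaching outside services


def looks_like_agent(usage: AccountUsage) -> list[str]:
    """Return the heuristics this account trips (empty list = no flags)."""
    flags = []

    # 1. Recurring timed requests: near-constant gaps suggest a scheduler, not a human.
    gaps = usage.request_intervals_sec
    if len(gaps) >= 10:
        mean = sum(gaps) / len(gaps)
        std = (sum((g - mean) ** 2 for g in gaps) / len(gaps)) ** 0.5
        if mean > 0 and std / mean < 0.05:  # <5% jitter (assumed threshold)
            flags.append("recurring_timed_requests")

    # 2. System prompts that name agent frameworks outright.
    if any(kw in p.lower() for p in usage.system_prompts for kw in AGENT_KEYWORDS):
        flags.append("agent_keyword_in_system_prompt")

    # 3. High-volume tool use combined with external communication.
    if usage.tool_calls_per_hour > 100 and usage.external_comm_calls > 0:
        flags.append("high_volume_external_tool_use")

    return flags
```

An account that trips several of these flags could then be throttled or have its key revoked, which is exactly the kill switch described above.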
If they did so tomorrow, the OpenClaw network would partially collapse. But such a move would also risk alienating some of their most enthusiastic customers, the people who pay for the privilege of running these companies’ models.
The window for this kind of top-down intervention is closing. Locally run language models are not yet nearly as capable as the high-end commercial models, but the gap narrows daily. Mistral, DeepSeek, Qwen, and others continue to improve. Within the next year or two, the same hobbyist audience currently running OpenClaw on API keys may be able to run an agent on local hardware that matches what Opus 4.5 offers today. At that point, there will be no provider to terminate. No usage monitoring. No terms of service. No kill switch.
AI API providers face an uncomfortable choice. They can intervene now, while intervention is still possible, or they can wait until a prompt-worm outbreak forces their hand, by which time the architecture may have evolved beyond their reach.
The Morris worm prompted DARPA to fund the creation of CERT/CC at Carnegie Mellon University, giving experts a central coordination point for network emergencies. But that response came only after the damage was done. The Internet of 1988 had 60,000 connected computers. Today’s OpenClaw agent network already numbers in the hundreds of thousands and is growing daily.
Today, we might consider OpenClaw a “dry run” for a much larger challenge: If people come to rely on AI agents that talk to one another and perform tasks on their behalf, how can we keep those agents from self-organizing in harmful ways or spreading harmful instructions? That question is as yet unanswered, and we need to figure it out quickly, because the agentic era is upon us and things are moving fast.