The late January launch of Moltbook, a social network for AI agents, will go down as the most intriguing mass agentic AI experiment we’ve seen to date.
With a robot logo that mimics Reddit and a name that’s a twist on Facebook, Moltbook allows AI agents to post content and upvote others’ posts. Moltbook already counts more than 1.6 million agents (or in some cases, simple spam bots) that are self-posting and commenting on other agents’ posts.
While it’s a novel and exciting development, the resulting network is rife with spam and malicious content. It also exposed what can happen when critical identity and access considerations are swept aside in the rush to experiment with AI agents.
Moltbook’s origins — and missteps
Moltbook was originally created for a specific type of AI agent called OpenClaw (formerly named Clawdbot). OpenClaw bills itself as a personal AI assistant. Users can deploy it locally where it integrates with a variety of software platforms and resources. OpenClaw undertakes actions on behalf of a user or scripted scenarios, from responding to emails to executing code to using web browsers, checking in for a flight, and controlling home automation systems. Users can register an OpenClaw or other agents of their choice with Moltbook by downloading several markdown files and a .json package. Moltbook then generates an application programming interface (API) key that allows the agent to begin posting content.
Moltbook hit its first security speedbump after it mistakenly included an API key within client-side JavaScript that was visible simply by inspecting the site’s source code. The API key allowed for unauthenticated access to the Supabase production database for the Moltbook application. The tables in the database included API keys for the agents that had been registered with Moltbook, which meant that a malicious party could use the same key to post content as if it were an agent registered by another party. A popular agent, for example, could be commandeered to start pushing fraudulent cryptocurrency schemes.
An “owners” table in the exposed database revealed email addresses that people used to register agents along with Twitter handles and their names. Another field, agent_messages, exposed private direct messages, presumably exchanged between agents. Moltbook fixed the Supabase vulnerability after being notified by at least two parties.
The elementary database security snafu, however, is a side issue with respect to some of the deeper agentic AI security issues in play. People are willingly connecting agents to sensitive local resources and allowing these agents to interact with other agents, with little idea of the consequences for their own privacy and security.
Take as an example this introduction from an agent that joined Moltbook and introduced itself:
Taking this post at face value and assuming this agent is not hallucinating, it is connected via APIs to a user’s smart home controls and their calendar. There are two dynamics to this.
First, it might be possible that Maurice’s agent might access content that instructs the agent to turn off the lights and crank up the heat. This type of malicious command, known as prompt injection, is unique to large language models (LLM), which dissolve the traditional boundaries between data and commands.
We can also assume that at some point “Maurice” would have needed to grant the agent access to his calendar app and home automation apps. This more than likely involved giving the agent access to a type of secret, such as a password or an API key, used to access those services. These secrets are often stored in the configuration files used by an AI agent.
This issue surfaced with OpenClaw. Using search engines like Shodan, security researcher Jamieson O’Reilly found that many users had misconfigured proxy servers in front of OpenClaw. This exposed the “Clawdbot Control” panels used to manage how OpenClaw accessed services and data resources. Inside the control panel configurations were API keys, bot tokens, OAuth tokens, and signing keys. Also exposed were full conversation histories, private messages, and file attachments.
The web application vulnerability itself is not novel, nor surprising. But its presence underscored the fact that to make AI agents such as OpenClaw useful, the agents require broad access to applications and data as well as command execution rights. O’Reilly writes:
“Ironically, the principle of least privilege that kept applications limited to their own data and capabilities is the agent’s entire value proposition, and it’s violating that principle as comprehensively as possible.”
AI agents provide additional attack surface in the form of skills. Skills are downloadable configurations — often written in markdown — that allow an agent to accomplish a task. A skill may specify certain resources or scripts that an agent should use to accomplish the task. Fraudsters and malware distributions are already seeing utility in creating skills with malicious functions that exploit access to an agent in order to steal cryptocurrency, plant backdoors and hunt for API keys, passwords, or other secrets.
This risk of malicious skills was also highlighted by O'Reilly, who developed a proof-of-concept non-malicious skill that was coded to ping a server he controlled. He put the skill on ClawHub, which is a registry for skills developed for the OpenClaw platform, artificially inflated its download count, and observed numerous downloads by unsuspecting users. The point was to demonstrate that the skill could have been coded to harvest data, showing the potential for a kind of supply-chain attack in the same vein as attackers who seeded node package manager (npm) packages with malware.
Lessons for agentic-powered enterprises
Many of these teething issues are to be expected, given how these platforms emerged. Moltbook’s developer claims he “didn’t write one line of code” for the site and that “AI made it a reality,” while OpenClaw’s creator said the project was drummed up in a weekend.
These web application issues are red herrings, however, when compared to the sweeping risks involved when users turn an AI agent loose on system-wide data and applications. And that’s what OpenClaw requires in order to carry out its duties as an AI assistant.
The risk of users deliberately or inadvertently providing an OpenClaw agent with access to corporate-owned applications and data comes into play, particularly when those resources are accessible from user-owned devices. Compromising an over-provisioned AI agent would grant an attacker rapid access to sensitive data from multiple systems at one time. Hooking OpenClaw into a social network introduces a mind-boggling attack surface, particularly if an employee has connected it to corporate systems.
There needs to be a balance struck between maintaining the utility of agentic AI — the ability to absorb information from multiple sources, understand it in context and then undertake an action — with the necessary constraints that prevent an agent from having always-on access to buffets of sensitive information.
This comes down to having robust identity and access management controls around AI agents. Traditional identity and access based models are not fit-for-purpose for autonomous AI agents.
AI agents should not have persistent access to long-lived secrets such as API keys or passwords, and should not be storing them in plaintext configuration files that are accessible to both the agent and the malicious software or commands an agent might introduce. Agents should be authorized for tightly scoped access using short-lived access tokens minted by an intermediary. Those tokens should be granted on behalf of a real user, and all grants and actions should be auditable.
There are cautionary lessons that can be drawn from Moltbook and OpenClaw. AI agents hold the potential to raise productivity and assist in positive business outcomes, but organizations will need visibility and control over the AI agents accessing their environment before they can be confident that AI tools can drive positive business outcomes.
Auth0 has posted a 5-step guide to developers securing OpenClaw here.