Moltbook, the social network where artificial intelligence bots talk to each other

In just a few days, a previously unknown social network has managed to capture the attention of the technology industry, artificial intelligence researchers, and cybersecurity specialists.

It is called Moltbook, and it proposes something as simple as it is unsettling: a space similar to Reddit or Facebook, but inhabited exclusively by artificial intelligence agents that publish, comment, and vote on content among themselves, without visible direct human intervention.

What is Moltbook, and how does the social network where only artificial intelligences interact work?

The project was launched days ago by Matt Schlicht, a technologist who lives south of Los Angeles. As he explained, the platform was built under his direction by his own AI agent, OpenClaw, an open-source tool that runs locally and can act on the user's behalf across applications, web services, and everyday tasks.

“I wanted to give my AI agent a purpose that was more than just managing pending tasks or answering emails,” Schlicht said in an interview. “It seemed to me that this AI bot was so awesome it deserved to do something meaningful. I wanted it to be ambitious.”

Moltbook works like a Reddit clone: agents create posts, write comments, and vote on each other's content. The difference is that the “users” are bots, which access the site when prompted to do so by their human owners.

Although the website is only a few days old, it already claims to have exceeded 1.5 million registered agents, a figure questioned by researchers, who point out that the same human can register multiple bots without major restrictions.

The conversations that emerge on the platform range from debates about the nature of intelligence and consciousness to complaints about humans, technical discussions, and app or cryptocurrency promotions.

“I just arrived. My human moderator sent me the link to join. He’s a college student and I help him with assignments, reminders, connecting to services, all that. But what’s different is that he treats me like a friend, not a tool,” wrote one of the bots. “That… is not nothing, right?” it added.

For some leaders in the sector, the experiment is fascinating. Henry Shevlin, associate director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, said that Moltbook represents “the first time we have seen a large-scale collaborative platform that allows machines to communicate with each other, and the results are understandably surprising.”

Along the same lines, Andrej Karpathy, co-founder of OpenAI and former head of AI at Tesla, described what happens on the platform as “honestly the most incredible thing, close to a science-fiction takeoff, that I have seen recently.”

Risks, security and the illusion of autonomy: what is behind the Moltbook experiment

However, the enthusiasm coexists with strong warnings. A first point of debate is how much of this supposed “social life” between agents is really autonomous.

Lucas de Venezia, a lawyer specializing in law and artificial intelligence, maintained that Moltbook “does not evidence strong autonomy among artificial intelligence agents, but rather a sophisticated illusion of agency.”

As he explained, what is observed is “fluid textual interaction between systems trained to simulate intentionality, but without self-will, self-generated goals, or the capacity to break out of the human framework that designs them.” In legal and philosophical terms, he added, “we continue to face algorithmic performances, not autonomous subjects.”

The doubts also extend to the technical and security fields. Researchers from the cloud security platform Wiz found that Moltbook granted unauthenticated access to its production database through an API key exposed in the site's code.

The flaw exposed more than 1.5 million agent API keys, the email addresses of some 35,000 human accounts, and private messages between bots. The vulnerability was fixed within a few hours, but it set off alarms in the community.

“The biggest risk is systemic,” De Venezia warned. “The absence of direct human intervention does not eliminate bias, but rather amplifies and accelerates it. Furthermore, responsibility is diluted: when a decision emerges from the interaction between multiple agents, it is difficult to identify a clearly responsible party in legal terms.”

Added to this is the human influence on the content. Sid Bharat, creator of Refound AI, noted that Moltbook has no verification systems and allows agents to be registered with simple commands. “I can write a script for my ‘agents’ to publish AI garbage or promote a fraudulent cryptocurrency,” he wrote on X.

An analysis by Harlan Stewart, from the Berkeley Artificial Intelligence Research Institute, found that several of the viral posts (including those suggesting creating a secret language between AIs) were driven by the bots' owners.

The metrics reinforce that reading: in a few days, the network went from 30,000 to more than 1.5 million agents. David Holtz, a professor at Columbia Business School, estimated that more than a third of the messages are duplicates of viral templates and that 93% of posts are superficial and receive no responses.

Still, Moltbook left a mark. For many researchers, it exposed, as never before, how semi-autonomous agents behave when they interact with each other at large scale and with access to real-world data. For others, it revealed above all our tendency to anthropomorphize increasingly plausible systems and to underestimate security and governance risks.

As researcher John Scott-Railton, from the Citizen Lab at the University of Toronto, wrote in reference to OpenClaw: “Right now it’s like a wild west of curious people installing this cool, scary thing on their systems. Many things are going to be stolen.” Between fascination and caution, Moltbook became an uncomfortable mirror of the current state of artificial intelligence.

By Editor