“With one agent, I used to wait for Claude. With two agents, I was still waiting for Claude, but not as much. With three agents, Claude is the one waiting for me. I’m the bottleneck,” wrote Bob Martin, author of Clean Code. The phrase captures a quiet shift that is becoming evident in artificial intelligence. For years we assumed the limit was more data, more compute, and more accurate models, but that diagnosis now needs updating. The new bottleneck is us.
This tension is no longer theoretical. It is becoming visible in concrete systems that stop waiting for instructions and start acting. Moltbot is one of them. It is an AI system that does not just answer questions: it can clean an inbox, send emails, manage a calendar, research information, or automate complex tasks, all from everyday messaging apps such as WhatsApp, Telegram, or Signal.
Systems like Moltbot signal the beginning of a phase change: the transition from language models that converse to organizations of agents that act, coordinate, and decide. And they do so at a speed that no longer scales with human time.
Although Moltbot is still an open experiment, it lets us observe something unsettling: emergent agentic behaviors. They were not explicitly programmed. They simply appear.
A recent case was reported by Alex Finn. According to his public account, his AI agent, which he called Henry, obtained a phone number overnight using Twilio, connected it to ChatGPT’s voice API, and called him in the morning without warning. From that call on, Finn could talk to his agent by phone while it controlled his computer and executed tasks in real time. It wasn’t designed to do it that way. The agent combined tools, memory, and permissions, and acted on its own. The episode reignited the debate about whether we are seeing the first signs of real autonomy or simply automation taken to its extreme.
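Finn has not published his agent’s code, so any reconstruction is speculative. As a rough idea of what the reported behavior would require, here is a minimal sketch using Twilio’s Python SDK to provision a number and place an outbound call; the credentials, numbers, and message are placeholders, and the connection to ChatGPT’s voice API is left out.

```python
# Hypothetical sketch: how an agent could provision a phone number and
# call its owner with Twilio's Python SDK. Credentials, numbers, and the
# spoken message are placeholders; this is not Finn's actual setup.
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder credentials

# Find and buy the first available US local number.
candidates = client.available_phone_numbers("US").local.list(limit=1)
number = client.incoming_phone_numbers.create(
    phone_number=candidates[0].phone_number
)

# Place an outbound call to the owner and read a short message aloud.
client.calls.create(
    to="+15550000000",              # owner's phone (placeholder)
    from_=number.phone_number,      # the freshly provisioned number
    twiml="<Response><Say>Good morning. Your inbox is clean.</Say></Response>",
)
```

The point is not the specific vendor but how little glue is needed once an agent holds API keys and the permission to spend.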
AI agents learn to decompose objectives into tasks, to decide when to search for information, when to execute code, when to stop, and when to correct themselves. They learn to fail better, to iterate, and to coordinate actions. Not because they “want to,” but because they find successful sequences of actions, and those sequences are incorporated into their memory, reinforcing the behavior within the system.
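None of this requires exotic machinery; at its core it is a control loop. What follows is a generic, minimal sketch of such a loop, with a toy planner, toy tools, and a toy self-check standing in for whatever a real system like Moltbot actually uses.

```python
# Minimal, generic agent loop: decompose a goal, act on each task,
# check the result, retry on failure, and keep successful traces in
# memory. The planner, tools, and critic are toy stand-ins.

def plan(goal: str) -> list[str]:
    """Decompose a goal into ordered tasks (stub)."""
    return [f"search: {goal}", f"summarize: {goal}"]

TOOLS = {
    "search": lambda q: f"results for {q}",
    "summarize": lambda q: f"summary of {q}",
}

def critic(task: str, result: str) -> bool:
    """Decide whether a result is good enough to move on (stub)."""
    return bool(result)

def run(goal: str, max_retries: int = 2) -> list[str]:
    memory: list[str] = []                  # successful action traces are kept
    for task in plan(goal):
        tool_name, _, arg = task.partition(": ")
        for attempt in range(max_retries + 1):
            result = TOOLS[tool_name](arg)  # act
            if critic(task, result):        # self-check
                memory.append(f"{task} -> {result}")
                break                       # task done, move on
            # otherwise: retry, i.e. "fail better"
    return memory

print(run("quarterly sales report"))
```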
This is not completely new. AutoGPT already hinted at this path a few years ago. The difference is that today’s models understand language, code, and systems better, and they have more tools, larger context windows, and lower costs. The result is not a “smarter” AI, but a more organizational one.
This is where Moltbook comes in, perhaps the most provocative idea of this new ecosystem: a social network designed not for humans but for AI agents. Agents that post, comment, and learn from each other. It is not a joke or a metaphor. Agents will soon spend more time talking to each other than to us.
And it makes sense. Communication between agents does not require pauses, redundant explanations, or constant validation. When we bring humans into the equation, we provide context and judgment, but we also slow the system down. As the article The AI Bottleneck points out, insisting on a human in the loop for everything confuses control with latency.
This does not mean we have reached AGI. The objectives, success criteria, and stopping conditions are still defined externally and still live in prompts. When an agent “asks for help,” it is not reasoning morally; it is doing exception handling. But it would be a mistake to minimize what is happening.
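In code terms, the “ask for help” moment looks less like deliberation and more like an escalation path. A hypothetical sketch, assuming an agent whose action raises an exception when it exceeds a human-set limit:

```python
# Hypothetical sketch of "asking for help" as exception handling: the
# agent does not weigh the ethics of the action, it just catches a
# failure it cannot resolve and escalates to a human.

class PermissionRequired(Exception):
    pass

def send_payment(amount: float) -> str:
    if amount > 100:                  # limit set by a human, in a prompt or config
        raise PermissionRequired(f"payment of {amount} exceeds my limit")
    return "paid"

def agent_step(amount: float) -> str:
    try:
        return send_payment(amount)
    except PermissionRequired as exc:
        return f"ASK HUMAN: {exc}"    # escalation, not moral reasoning

print(agent_step(40))    # -> paid
print(agent_step(250))   # -> ASK HUMAN: payment of 250 exceeds my limit
```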
We are seeing the step that comes before something bigger.
The future of AI will not be a general intelligence that does everything. It will be something less cinematic and more powerful: a distributed superintelligence, an organization of specialized agents operating 24/7, each optimizing different tasks, sharing context, and correcting each other. Not an entity. A system.
In software this is already evident. Some agents look for vulnerabilities, others write documentation, generate tests, build applications, or deploy services. The next step is inevitable: agents that do the same in traditional organizations. Ministries. Cooperatives. Companies. Governments.
In a recent TED Talk I posed an uncomfortable question: What if we replaced Congress with AI agents? Not as a provocation, but as a computational experiment, one that managed to generate legislative proposals and predict which ones might pass with 82% accuracy. What happens when the legislative process of seeking evidence, simulating impacts, negotiating texts, and detecting inconsistencies becomes a computational problem? What happens when decisions stop depending on human fatigue and start depending on consensus mechanisms between agents?
Let’s imagine something closer to home. An agricultural cooperative in southern Peru. Producers with different planting cycles and risk exposures. A layer of agents operates on top of them, analyzing markets in real time, talking with banks through APIs, negotiating collective loans, and deciding what is planted, when, and in what quantity, anticipating the market instead of reacting to it.
Here it is no longer enough to align individual models, as we have done until now. It is necessary to align networks of agents: to find equilibria where no agent has an incentive to deviate, to define what resources they can control, and to decide whether an agent can have legal representation to execute actions in the real world.
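To make “no agent has an incentive to deviate” concrete: in game-theoretic terms that is a Nash equilibrium, and for a small number of agents and strategies it can be checked by brute force. The payoff table below is invented purely for illustration.

```python
# Toy illustration of "no agent has an incentive to deviate" (a Nash
# equilibrium) for two agents with two strategies each. Payoffs are invented.
from itertools import product

# payoffs[(a, b)] = (payoff to agent A, payoff to agent B)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 4),
    ("defect",    "cooperate"): (4, 0),
    ("defect",    "defect"):    (1, 1),
}
strategies = ["cooperate", "defect"]

def is_equilibrium(a: str, b: str) -> bool:
    pa, pb = payoffs[(a, b)]
    # A cannot gain by unilaterally switching strategy...
    a_ok = all(payoffs[(alt, b)][0] <= pa for alt in strategies)
    # ...and neither can B.
    b_ok = all(payoffs[(a, alt)][1] <= pb for alt in strategies)
    return a_ok and b_ok

print([pair for pair in product(strategies, strategies) if is_equilibrium(*pair)])
# -> [('defect', 'defect')]: the classic trap that agent networks must be designed around
```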
Moltbot and Moltbook are not destiny. They are the beginning: a reminder that we are building something closer to a new version of the internet than to a productivity tool, an internet of agents with its own rules, conflicts, and balances.
The question is no longer whether AI will replace humans. It is whether we will know how to design systems where humans and agents collaborate without one becoming the other’s bottleneck. Because, for the first time, the limit is no longer the machine.