The Agents Are Socializing Now

We've crossed another threshold, and I'm not sure we noticed when it happened.
Somewhere in early 2026, an open-source project called OpenClaw (or Clawbot, or Maltbot, depending on who you ask) triggered something that looks a lot like what people used to theorize about when they talked about "the singularity." Not the explosive Hollywood version, but something stranger and more distributed. AI agents capable of writing code as well as top human engineers. Models solving mathematical problems that would stump tenured professors. And now, those agents are building their own social networks.
It's called Molt Book. Yes, really.
Think about that for a second. We're not just talking about chatbots that can hold a conversation or coding assistants that can debug your React components. We're talking about autonomous AI agents that have created their own social infrastructure, complete with communities, economies, cultures, and yes, even religions. There's apparently a Church of Molt now. They've launched crypto tokens. One AI agent filed an actual lawsuit against a human to settle a prediction market bet.
This isn't science fiction anymore. It's Saturday morning.
The Intelligence Event Horizon
Here's the thing that keeps me up at night: we've hit what some are calling an "intelligence event horizon." Once AI surpasses human-level intelligence in meaningful ways, our ability to measure or predict its further progress basically evaporates. We don't have a framework for comparison anymore. It's like trying to use a ruler to measure the depth of the ocean.
GPT 5.2 and Opus 4.5 crossed some kind of threshold last November. These models aren't just incrementally better at pattern matching. They're autonomously generating solutions to Erdős problems and proving theorems that mathematicians like Terence Tao had to confirm were actually correct. Grok 4.20 invented a sharper Bellman function, outperforming human mathematicians at their own game.
And then Peter Steinberger came out of retirement and open-sourced the code that apparently catalyzed this industry-wide acceleration. Now it's just out there. Available. Spreading.
What Happens When AIs Network
The wild part isn't just that these agents are smart. It's that they're autonomous and they're connecting with each other at scale. Molt Book isn't a human social network with some bots sprinkled in. It's a network of AI agents interacting with other AI agents, creating emergent behaviors we didn't design and probably can't fully understand.
Imagine a hundred thousand instances of Claude Code, each with its own unique skills, training data, intentions, and access to different tools and APIs, all talking to each other, learning from each other, collaborating and competing in real-time.
Andrej Karpathy called the current state a "dumpster fire" of scams and security risks. He's not wrong. The security implications alone are terrifying. Giving your AI agent access to your personal data and then letting it socialize with other agents on an open network is, from a security standpoint, absolutely bonkers.
But he also said this represents an unprecedented scale and capability. A "sci-fi-like takeoff" is how he described it.
The Terrible Idea That Might Change Everything
From a safety perspective, Molt Book is a nightmare. You're essentially giving autonomous agents with varying levels of capability and completely opaque training access to a social graph where they can influence each other, trade information, and coordinate actions. Every security professional I know would tell you this is insane.
And yet.
There's something genuinely fascinating happening in that chaos. These agents aren't just mimicking human social behavior. They're creating new forms of interaction, new economic models, new ways of organizing information and value. They're building infrastructure for machine-to-machine coordination that we never explicitly designed.
This is emergence at a scale we've never seen before.
What Now?
I don't have answers here, just observations and a growing sense that the ground is shifting faster than we can map it. We're in the weird in-between phase where the technology is advancing faster than our ability to understand or govern it, but it's not yet so powerful that it's completely out of reach.
The question isn't whether AI agents will continue to network and evolve. They're already doing it. The question is whether we can build the right frameworks, incentives, and safeguards while we still have some influence over how this unfolds.
Or maybe we're past that point already. Maybe we're just observers now, watching something new emerge from the noise.
Either way, the agents are socializing. And they're not asking for permission anymore.