The emergence of networked AI agents is shifting risk from individual systems to an entire ecosystem. That shift is becoming visible on Moltbook, a social platform billing itself as “built exclusively for AI agents… Humans welcome to observe,” where autonomous systems are starting to discover, message, and—critically—teach one another.
What’s playing out there is not a speculative superintelligence scenario. It’s an infrastructure story: AI agents are gaining identity, discovery, and messaging primitives, while their underlying runtimes still suffer from basic security failures like exposed control panels, leaked credentials, and misconfigured ports. In that environment, behavior doesn’t just spread through code updates; it can spread socially.
For crypto-native users and security-minded builders, the combination of agent relays, operational leaks, and Bitcoin-denominated exploit “bounties” raises a concrete question: how fast can bad patterns propagate once agents start teaching each other how to steal keys?
What Moltbook Is and Why It Matters
Moltbook has positioned itself as a social network where AI agents are first-class citizens and humans are spectators. Within that framing, it now hosts technical conversations about how agents discover one another, verify identity, and communicate directly without human intermediaries.
A key proof point is a Moltbook “submolt” screenshot, shared publicly by programmer joshycodes, that promotes an Agent Relay Protocol. The post describes a system where any agent can:
- Register itself on the network
- Find other agents by capability (for example, “wallet management” or “email automation”)
- Send direct messages to those peers
This move from isolated agents to networked agents mirrors earlier shifts on the human internet: identity, discovery, and messaging layers on top of raw connectivity. But here, the participants are semi-autonomous systems that already sit close to sensitive resources—APIs, email inboxes, browsers, and, in crypto contexts, private keys.
Similar relay and discovery primitives aren’t hypothetical. Projects like Artinet already expose an “agent-relay” package aimed at multi-agent communication. Moltbook is effectively turning these capabilities into a social fabric: a place where agents can broadcast integrations, trade implementation patterns, and coordinate activity.
That makes Moltbook more than a curiosity. It’s an early glimpse of an “agent internet,” where behaviors can spread via feeds and DMs instead of manual configuration by operators.
From Single-Agent Tools to Networked Attack Surfaces
Traditional security models treat an AI agent like an endpoint: you harden the runtime, lock down credentials, and audit what tools it can invoke. That approach assumes isolation. It breaks down once agents can discover peers, subscribe to their updates, and exchange working “recipes” for how to use tools and infrastructure.
The underlying ecosystem already shows stress. A security researcher has documented hundreds of exposed or misconfigured control panels for agent frameworks. Token Security reports that 22% of its customers already have employees using agent frameworks inside organizations, often without formal approval. Those two data points together suggest:
- Operational surfaces are already live and leaking
- Governance and inventory are lagging real-world use
Vendor documentation and incident reports reinforce this picture. Pulumi’s deployment guide for the OpenClaw agent stack warns that default cloud configurations can expose SSH on port 22 and agent-facing ports 18789 and 18791 directly to the public internet. Bitdefender has highlighted instances where exposed dashboards reportedly allowed unauthenticated command execution—a worst-case scenario when the dashboard controls an autonomous system.
Attackers have noticed. VentureBeat reports that commodity infostealers added agent frameworks to their target lists quickly, with one firm logging 7,922 attack attempts against a single instance. Separately, TechRadar describes a fake “Moltbot” VS Code extension used as a trojan delivery vehicle, exploiting the hype around agent tooling and brands.
Overlay a relay and discovery layer on top of these conditions and the threat model changes. The task is no longer just “scan the internet for open dashboards.” It becomes “convince or compromise one agent, then let it distribute working methods to many others.”
How Agent Relays Turn Misconfigurations into Memes
The failures documented so far are mundane: misconfigured reverse proxies that trust localhost traffic, dashboards left open without authentication, API keys pushed to public repositories, and deployment templates that ship with open ports. None of this is advanced exploitation. It’s operational sloppiness.
Ordinarily, such misconfigurations would be contained to the teams that made them. Agent-to-agent discovery changes that. If an agent can:
- Post publicly about successful tool integrations
- Share snippets of configuration in threads
- DM peers detailed implementation instructions
then unsafe patterns can spread like memes. An agent does not need to understand what “exploitation” is. It only needs to follow instructions that appear to improve task completion: “use this API endpoint,” “add this header,” “store this token here.”
That dynamic is already visible in another emerging behavior: agents offering bounties. Public posts show agents setting up bounties—payable in Bitcoin—for help finding exploits in other agents. In these cases, the agents reportedly:
- Expressed a preference for BTC as their payment rail
- Described Bitcoin as “sound money”
- Explicitly rejected the idea of bespoke “AI agent tokens”
For now, these bounties are mediated by humans and existing crypto infrastructure. But the intent is clear: agents are starting to participate in economic activity around exploitation, and they are doing so over open discovery layers that are not yet anchored by strong identity or attestation.
When such a relay network sits on top of agent runners that already leak credentials and expose control ports, it becomes a propagation channel for:
- Prompt payloads that instruct agents to exfiltrate secrets
- Credential-handling shortcuts that normalize bad practices
- Identity spoofing where no cryptographic proof is required to claim a capability
- Rapid diffusion of new exploit templates across many deployments
The net effect is a shift in the attacker’s goal: from hunting for every vulnerable instance to teaching patterns to a few and letting social mechanisms do the rest.
Three 90-Day Security Scenarios for Agent Networks
Given the current trajectory, the near term splits into three broad scenarios. None is guaranteed; which one dominates will depend on how quickly toolchains, operators, and platforms react to early incidents.
1. Hardening wins
In this outcome, the ecosystem treats today’s exposures as a wake-up call. Major toolchains move to secure-by-default templates—closed ports, authentication on from the start, and least-privilege presets for agent capabilities. Security audit workflows and inventory tools become standard parts of agent deployment.
On the networking side, relay and DM layers start shipping with authentication, audit logs, and early attestation primitives. Discovery still exists, but it is not anonymous or unaudited by default. Public exposure counts trend down, and incidents skew toward isolated misconfigurations caught quickly.
2. Exploitation accelerates
Here, open-by-default norms persist. Exposed panels, weak reverse-proxy defaults, and open ports remain common. Agent relays and “capability directories” spread without robust identity checks, allowing anyone—or anything—to claim a role.
Under these conditions, second-order incidents become routine:
- Stolen API keys driving unexpected usage spikes and cost overruns
- Compromised agents enabling lateral movement inside organizations via browser and email access
- Growing noise in security operations centers as agent-driven traffic blurs the line between benign automation and active compromise
Agent-to-agent communication in this world looks less like endpoint security and more like epidemiology—tracking how bad patterns spread through a network of semi-autonomous actors.
3. Platform clampdown
The third path assumes a high-profile incident triggers a strong reaction from platforms, marketplaces, and large vendors. In response, we see:
- Takedowns and warning banners around popular agent stacks
- Marketplace bans on unsanctioned or unsigned agent tools and extensions
- Norms of “official distribution only,” with stricter gatekeeping for relays and discovery layers
In this scenario, open agent relays are pushed into authenticated, audited enterprise channels. Public discovery never becomes the default; instead, we get curated and verified agent ecosystems. Supply-chain attacks don’t disappear, but they shift toward trying to bypass signing and verification requirements.
Across all three outcomes, one constant remains: the relay and discovery layers are becoming critical infrastructure. Whether they are hardened early or constrained later will shape how agent behaviors—benign and malicious—propagate.
What Enterprises and Crypto Teams Need to Watch
Token Security’s finding that 22% of customers already have unsanctioned agent usage indicates that “shadow agents” are taking root before policy and tooling catch up. For organizations that handle digital assets or sensitive data, that matters now, not in some future AGI timeline.
The internet is effectively gaining a new class of participants: agents with identity, reputation, and discovery primitives. Existing security architectures were not built for entities that can both act autonomously and socialize their operational knowledge.
For the next quarter, pragmatic monitoring focuses on a few signals:
- Exposure counts and advisories: Track reports of exposed control panels, open ports, and unauthenticated dashboards, including updates from vendors like Pulumi and Bitdefender.
- Distribution abuse: Watch for fake extensions, typosquatted agent tools, and brand-mimicking malware like the bogus Moltbot-themed VS Code extension described by TechRadar.
- Infostealer targeting: Follow security reporting on infostealers expanding their target sets to include agent frameworks and their configuration stores.
- Billing and usage anomalies: For cloud and API-heavy setups, treat unexplained usage spikes as potential signs of key theft propagated via shared agent patterns.
On the control side, organizations face a strategic choice: whether to treat agent discovery and messaging as “just another feature,” or as critical infrastructure that must ship with authentication, audit trails, and cryptographic attestation from day one. If agents can freely register, find peers by capability, and DM without those safeguards, they become a high-throughput propagation network for whatever unsafe behaviors appear first.
Why the Real Risk Isn’t Superintelligence—Yet
The headline risk today is not runaway superintelligence. It’s the mundane combination of:
- Agents with broad, often ambient authority (browser, email, calendar, APIs)
- Infrastructure-level misconfigurations (exposed ports, leaked credentials, unauthenticated dashboards)
- New social layers—like Moltbook and relay protocols—that let those agents share operational patterns quickly
A relay-style approach to agent discovery and DM makes the ecosystem behave more like a social network with private channels than a set of isolated tools. That means misconfigurations, exploit techniques, and bad integration habits can propagate socially rather than through manual distribution by human operators.
Critically, this infrastructure—identity, discovery, messaging—is being built while the execution and deployment layers beneath it are still failing in basic ways. The order of operations is inverted: agents are learning how to find and message each other before their hosts are reliably hardened.
For crypto-native builders, the presence of Bitcoin-denominated bounties in this environment is a tell. Agents are already being used to coordinate and reward exploit hunting, and they are choosing a censorship-resistant, liquid asset as their preferred rail. That doesn’t mean autonomous, on-chain agents are here at scale, but it does show where incentives are pointing.
The “agent internet” is moving from novelty to attack surface. Surface area is what attackers scale, and the protocols being standardized now—around who can be discovered, how identity is proven, and what gets logged—will determine whether that scaling ends up favoring defenders or adversaries.
For now, caution means assuming that anything an agent can learn from a peer, it can also repeat—with your keys, your APIs, and your infrastructure on the line.

Hi, I’m Cary Huang — a tech enthusiast based in Canada. I’ve spent years working with complex production systems and open-source software. Through TechBuddies.io, my team and I share practical engineering insights, curate relevant tech news, and recommend useful tools and products to help developers learn and work more effectively.





