Multi-agent AI systems can now pass messages, invoke tools, and hand work off between specialized components. Yet in many deployments, those agents still fail to truly work together. They execute tasks in sequence, but they don’t share a common understanding of why they are acting or what the overarching goal is.
Cisco’s Outshift group argues that this is the core blocker for scaling agent-based architectures in enterprises. Existing protocols such as MCP, A2A, and Outshift’s own AGNTCY framework enable connectivity between agents, but largely at the level of syntax and identification. What’s missing, Outshift says, is a shared semantic layer: explicit intent, context, and reasoning that can be understood and reused across agents and over time.
To address this, Outshift is advancing a conceptual architecture it calls the “Internet of Cognition” — a way to turn loosely connected agents into what it describes as semantically collaborating systems. While still under development, the proposal is positioned as a call for industry-wide standards and shared infrastructure for multi-agent cognition.
The Limitations of Current Agent Communication
Modern multi-agent setups are built on message-passing protocols. Outshift highlights frameworks like the Model Context Protocol (MCP), various agent-to-agent (A2A) schemes, and its own AGNTCY project, which it has donated to the Linux Foundation. These allow agents to:
• Discover and identify tools and capabilities
• Exchange structured messages
• Chain together workflows where one agent’s output becomes another’s input
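At its simplest, this style of integration amounts to piping one agent’s structured output into the next. A minimal sketch, assuming nothing about any specific protocol (the agent functions and field names below are illustrative, not part of MCP, A2A, or AGNTCY):

```python
# Today's message-passing style, reduced to its essence: each agent is a
# function that receives a structured message and emits another. The
# semantics of *why* the data matters are nowhere in the contract.

def symptom_agent(message: dict) -> dict:
    # Evaluates reported symptoms and emits a diagnosis code (stubbed).
    return {"type": "diagnosis", "code": "J45.9"}

def scheduling_agent(message: dict) -> dict:
    # Consumes only the fields it was designed to read.
    code = message["code"]
    return {"type": "appointment", "specialty_for": code, "slot": "earliest"}

# "Chaining a workflow" is just function composition over messages.
result = scheduling_agent(symptom_agent({"symptoms": ["wheezing"]}))
```

The chain works mechanically, but each hop carries only the payload, with no representation of intent or goal.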
Vijoy Pandey, general manager and senior vice president of Outshift, summarized the current state to VentureBeat: “We can send messages, but agents do not understand each other, so there is no grounding, negotiation or coordination or common intent.”
In practice, that means protocols handle the “connectivity and identification layer” — essentially the plumbing and syntax of communication. An agent can say, “Here is a diagnosis code,” or, “Here is a resource handle,” and another agent can receive and act on it. But the semantic content of why that data matters, how it was derived, and what goal it serves is not explicitly modeled or shared.
According to Outshift, this keeps agents “semantically isolated.” Each one may be powerful on its own, but they interpret goals independently. Coordination then relies on repeated clarification, and any insight gained by one agent rarely propagates as reusable knowledge to the rest of the system.
A Concrete Example: Healthcare Agents That Don’t Really Collaborate
To illustrate the gap, Outshift points to a multi-agent workflow in healthcare — a common enterprise context where multiple systems touch the same user journey.
Consider a patient trying to schedule a specialist appointment. A typical multi-agent design might include:
• A symptom assessment agent that evaluates reported symptoms and produces a diagnosis code.
• A scheduling agent that uses that code to find available appointments with appropriate specialists.
• An insurance agent that checks coverage and benefits for the visit.
• A pharmacy agent that verifies medication availability based on the treatment plan.
All of this can work with today’s message-passing protocols. The symptom agent hands a code to the scheduling agent; the insurance and pharmacy agents receive the subset of information they are designed to process. At a mechanical level, the system is “connected.”
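The information loss in this pipeline can be made concrete with a toy sketch. Here the symptom agent internally holds the patient’s history, but its output contract only emits a code, so the pharmacy agent has no channel through which to learn about a known adverse reaction (all names and values are invented for illustration):

```python
# Hypothetical version of the healthcare workflow: each agent sees only
# the subset of fields it was scoped to receive, so context held by one
# agent never reaches the others.

def symptom_agent(patient: dict) -> dict:
    # Internally, this agent can see the full history...
    history = patient["history"]  # e.g. prior adverse drug reactions
    # ...but its contract only emits a diagnosis code, so the history
    # is dropped at the protocol boundary.
    return {"code": "J45.9"}

def pharmacy_agent(diagnosis: dict) -> dict:
    # Recommends a drug for the code; it has no channel for history,
    # so it cannot check interactions it was never told about.
    return {"drug": "drug-for-" + diagnosis["code"]}

patient = {"history": ["adverse reaction to drug-for-J45.9"]}
plan = pharmacy_agent(symptom_agent(patient))
# The unsafe interaction is invisible at the message-passing layer,
# even though the data to catch it exists inside the system.
```

Each function behaves correctly against its own contract; the failure lives between the contracts.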
But Outshift notes that the agents are not truly reasoning together about the patient’s situation. For example:
• The pharmacy agent might recommend a drug that conflicts with the patient’s history. The symptom agent actually has relevant historical data, but “potential drug interactions” were not in its explicit scope, so that knowledge never enters the pharmacy agent’s context.
• The scheduling agent might book the nearest available appointment, unaware that the insurance agent has discovered better coverage at another facility — one that might offer more appropriate or more affordable care.
Each agent completes its task locally, based on its own narrow interpretation of the goal. Yet the overarching objective — “find the right care for this patient’s specific situation” — is never explicitly shared, aligned, or negotiated among the agents. As a result, the system can produce suboptimal or even unsafe outcomes, despite appearing integrated on the surface.
Why Message-Passing Alone Isn’t Enough
From an infrastructure perspective, the limitation is that existing protocols predominantly model data exchange, not shared cognition. They define how to send and receive messages, but not how to represent and propagate:
• The intent behind an action (“I’m trying to optimize for long-term patient outcomes, not just earliest appointment”).
• The reasoning path that led to a decision (“I discounted facility X because of the patient’s prior complications there”).
• The relationship between local tasks and a global goal state.
Outshift’s analysis is that, without mechanisms for shared intent and context, organizations pay a coordination tax. Agents continually re-interpret partial information, ask for clarification, or redo work. More importantly, “nothing compounds,” as Outshift puts it in its paper: one agent’s hard-won insight doesn’t become part of a durable, shared understanding that others can leverage.
This has direct implications for enterprise architects. Systems that look modular and scalable on paper can become brittle and inefficient in production, because every new agent adds more “edges” of integration without a unifying semantic substrate. The result: more logs, more orchestration logic, and more ad-hoc glue code to keep behavior aligned with business goals.
Outshift’s Internet of Cognition: Framing the Missing Layer
To move from communication to collaboration, Outshift argues that agent environments must support three kinds of shared cognitive capability:
• Pattern recognition across datasets — so agents can recognize and share recurring structures or signals beyond their individual scopes.
• Causal relationships between actions — so the system can understand not just what happened, but why it matters for downstream decisions.
• Explicit goal states — so agents can align around the same outcomes and reason about tradeoffs collaboratively.
Outshift bundles these ideas under the umbrella of an “Internet of Cognition”: not a single product, but an architecture in which multiple agents work within a shared semantic system. In this framing, MCP, A2A, and AGNTCY remain essential for connectivity, but are complemented by new layers that express and maintain common understanding.
Outshift stresses that this is a call to action rather than a finished standard. The group is actively writing code, specifying protocols, and publishing research around the concept, and expects to demonstrate the protocols in action. But, as with early internet infrastructure, it emphasizes that real impact will require industry-wide alignment on open, interoperable approaches.
The Three Layers: Cognition State Protocols, Fabric, and Engines
The Internet of Cognition proposal introduces three architectural layers, each addressing a specific gap above raw message passing.
1. Cognition State Protocols
This is described as a semantic layer that sits on top of existing messaging protocols. Instead of an agent merely sending a result (for example, a diagnosis code), it would also share:
• The intent behind its current action (what it is trying to accomplish and why).
• Relevant aspects of its internal state that other agents may need to interpret that result correctly.
By making intent and context first-class citizens of the protocol, agents could align on goals prior to acting, rather than attempting to reconcile conflicting interpretations after the fact.
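What such an envelope might look like can be sketched in a few lines. The field names below (`intent`, `derived_from`, `relevant_state`) are assumptions for illustration only; Outshift has not published a concrete schema for Cognition State Protocols:

```python
# A sketch of a "cognition state" envelope layered on top of an
# ordinary result message: the payload travels alongside explicit
# intent, reasoning provenance, and shareable internal state.
from dataclasses import dataclass, field

@dataclass
class CognitionMessage:
    payload: dict                                       # the ordinary result
    intent: str                                         # what the sender is trying to do, and why
    derived_from: list = field(default_factory=list)    # reasoning path behind the payload
    relevant_state: dict = field(default_factory=dict)  # context others need to interpret it

msg = CognitionMessage(
    payload={"code": "J45.9"},
    intent="optimize long-term patient outcomes, not just earliest appointment",
    derived_from=["reported wheezing", "prior asthma diagnosis"],
    relevant_state={"drug_history": ["prior adverse reaction to an inhaled beta-agonist"]},
)
```

A receiving agent can then negotiate against the stated intent before acting, instead of inferring it from the payload alone.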
2. Cognition Fabric
The Cognition Fabric is framed as infrastructure for building and maintaining shared context — effectively, a distributed working memory. Outshift describes it as:
• A persistent set of context graphs that span multiple agent interactions.
• Governed by policies that determine what information is shared, with whom, and under what constraints.
Rather than each agent operating from its own ephemeral local context, the fabric lets system designers define what “common understanding” looks like for a particular domain or use case. For instance, in the healthcare scenario, this might mean that patient risk factors and prior adverse events are always available to any clinical decision-making agent, subject to privacy and regulatory policies.
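One possible reading of this idea is a persistent, policy-gated context store that outlives any single agent interaction. The class below is a toy sketch under that assumption; the API and policy shape are invented, not taken from Outshift’s design:

```python
# A toy policy-governed shared context store: facts persist across
# agent interactions, and reads are gated by role-based policy.

class ContextFabric:
    def __init__(self):
        self._facts = {}  # key -> (value, set of roles allowed to read)

    def publish(self, key, value, allowed_roles):
        # Store a fact together with the policy governing who may see it.
        self._facts[key] = (value, set(allowed_roles))

    def read(self, key, role):
        # Enforce the sharing policy at read time.
        value, allowed = self._facts[key]
        if role not in allowed:
            raise PermissionError(f"role {role!r} may not read {key!r}")
        return value

fabric = ContextFabric()
# Per policy, patient risk factors are visible to any clinical agent.
fabric.publish("patient.risk_factors",
               ["prior adverse reaction at facility X"],
               allowed_roles={"clinical"})

risks = fabric.read("patient.risk_factors", role="clinical")  # allowed
```

The key design point is that “what counts as common understanding” is declared once, in the fabric, rather than re-negotiated in every pairwise agent exchange.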
3. Cognition Engines
Cognition Engines provide higher-level capabilities on top of the Cognition Fabric. Outshift describes two main functions:
• Accelerators that allow agents to pool insights and compound learning. When one agent discovers a useful pattern or relationship, that discovery is made accessible to others facing related problems, instead of staying siloed in local logs or weights.
• Guardrails that enforce compliance and policy boundaries. As reasoning is shared more broadly, Cognition Engines are responsible for ensuring that collaborative cognition doesn’t violate regulatory, privacy, or organizational constraints.
The Engines layer, in other words, tries to balance the power of shared cognition with the realities of enterprise governance.
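The two engine roles — pooling insights and enforcing boundaries — can be sketched together in a single toy class. Everything below (the class, the tag-based guardrail policy) is a hypothetical illustration, not Outshift’s implementation:

```python
# Toy sketch of the two Cognition Engine roles: an accelerator that
# pools insights for reuse, and a guardrail that filters what may be
# shared, based on policy tags.

class CognitionEngine:
    def __init__(self, blocked_tags):
        self._insights = []                # pooled (insight, tags) pairs
        self._blocked = set(blocked_tags)  # guardrail policy

    def contribute(self, insight, tags):
        # Guardrail: refuse to pool anything carrying a blocked tag.
        if self._blocked & set(tags):
            return False
        self._insights.append((insight, set(tags)))
        return True

    def lookup(self, tag):
        # Accelerator: any agent can reuse insights tagged with its problem.
        return [i for i, tags in self._insights if tag in tags]

engine = CognitionEngine(blocked_tags={"pii"})
engine.contribute("facility X has long waits for pulmonology", {"scheduling"})
engine.contribute("patient identifier details", {"pii"})  # rejected by guardrail
hits = engine.lookup("scheduling")
```

In a real system the guardrail would involve far richer policy than tag matching, but the structural point holds: sharing and compliance are enforced in the same layer.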
Semantic Collaboration vs. Simple Connectivity
Outshift’s framing emphasizes a distinction between systems that are merely connected and those that “semantically collaborate.” In its paper, the group writes that, without shared intent and shared context, agents remain “capable individually, but goals get interpreted differently; coordination burns cycles, and nothing compounds.”
In a semantically collaborative setup, by contrast, the aim is for:
• Agents to have access to a shared representation of objectives and constraints.
• Reasoning steps and causal relationships to be expressed in a way that other agents (and humans) can consume and build on.
• Knowledge gained in one part of the system to become available, within appropriate boundaries, to the rest of the multi-agent and human organization.
Noah Goodman, co-founder of frontier AI company Humans& and a professor of computer science at Stanford, captured a related dynamic at VentureBeat’s AI Impact event. He noted that innovation in human systems often emerges when “other humans figure out which humans to pay attention to.” Outshift suggests that a similar principle should apply to agents: their collective value multiplies when they can identify and leverage each other’s expertise and insights, rather than repeatedly starting from scratch.
Implications for Enterprise Multi-Agent Architectures
For AI infrastructure engineers and enterprise architects, Outshift’s proposal raises several practical considerations, even before concrete standards are finalized.
First, it reframes integration. Instead of thinking only in terms of API calls and message formats, system design must account for how intent, context, and reasoning are modeled and shared. Questions such as “What is the global objective?” and “Which parts of an agent’s state should be made visible to others?” become design-time concerns.
Second, it highlights the role of persistent context. Many current systems rely on transient prompts and local memory. Building something like a Cognition Fabric implies investing in durable, policy-aware representations of shared state — likely with new storage, indexing, and access control patterns tailored to multi-agent cognition.
Third, it underscores the importance of governance. As more reasoning and internal state are exposed across agents, organizations must define guardrails to control information flow, maintain compliance, and prevent misuse. Outshift’s Cognition Engines concept is explicitly aimed at balancing shared learning with regulatory and policy constraints.
Finally, it suggests that measuring success will evolve. Instead of focusing solely on per-agent performance metrics, teams may need to evaluate how well the overall agentic system aligns with business goals: Are global objectives consistently realized? Are learnings from one workflow improving others? Is coordination effort going down over time?
Where Things Stand and What Comes Next
Outshift is clear that the Internet of Cognition is not yet a mature ecosystem. The group is in the process of writing code, developing specifications, and publishing research related to the architecture. It expects to showcase a demo of the new protocols, but stresses that meaningful progress will require collaboration across vendors and open communities.
The decision to donate AGNTCY to the Linux Foundation fits into this narrative: much like earlier internet protocols needed broad buy-in to become de facto standards, Outshift argues that “open, interoperable, enterprise-grade agentic systems that semantically collaborate” will only materialize through shared, open infrastructure.
For teams deploying multi-agent systems today, the underlying question is straightforward: are your agents simply connected, or are they genuinely working toward the same goal? Until intent and context become first-class parts of the architecture, many systems may remain in the former category — functional, but far from the collaborative potential that Outshift’s Internet of Cognition envisions.
How quickly that vision turns into standardized practice will depend on whether the broader AI infrastructure community chooses to treat shared cognition as a core layer, not just an application-level concern.

Hi, I’m Cary Huang — a tech enthusiast based in Canada. I’ve spent years working with complex production systems and open-source software. Through TechBuddies.io, my team and I share practical engineering insights, curate relevant tech news, and recommend useful tools and products to help developers learn and work more effectively.
