
Why Shared Memory Is Becoming the Critical Layer in Enterprise AI Orchestration

Enterprises experimenting with AI agents are discovering an uncomfortable pattern: pilots work well in isolated demos, but break down when agents have to operate inside real organizations with real constraints. The underlying problem, according to Asana chief product officer Arnab Bose, is not just models or interfaces—it’s the lack of robust shared memory and context.

Speaking at a recent VentureBeat event in San Francisco, Bose argued that shared memory is quickly emerging as the missing layer in AI orchestration. Without it, AI agents remain fragile, one-off integrations. With it, they can begin to function as accountable, transparent teammates inside enterprise workflows.

From tool to teammate: how shared memory changes the AI agent role

Asana’s approach to AI agents starts from a simple but demanding premise: agents should behave like active teammates embedded in projects, not passive add-ons sitting at the edge of the workflow.

This is more than a UX decision. It assumes that the agent has:

  • Access to a detailed history of work already done
  • Immediate visibility into what is still open or unresolved
  • Permissions and governance aligned with how the team already collaborates

Asana’s AI Teammates, launched last year, operationalize this philosophy. Instead of treating AI as an external chatbot or generic assistant, Asana lets organizations plug agents directly into specific teams and projects. From the moment an agent is created, it is given a historical record of relevant tasks and activities.

Bose described the desired experience this way: when someone assigns a task to an AI agent, they shouldn’t have to re-explain how the business works or recap every prior decision. The shared memory layer—Asana’s work graph, combined with integrated context from tools like Microsoft 365 and Google Drive—is intended to give the agent that understanding from the outset.

Technically, Asana has leaned on deep integration with Anthropic’s Claude, including support for the Model Context Protocol (MCP), so that agents can tap into external systems without building a bespoke integration for every single pairing. Users can choose from a set of 12 pre-built agents—for routine use cases like IT ticket deflection—or define their own agents, then assign them to projects just as they would a human collaborator.

Crucially, when an AI teammate is added, it doesn’t just operate behind the scenes “on behalf of” a user. As Bose explained, it manifests as its own teammate, inheriting the same sharing permissions and access patterns as any other participant. Everything it does is captured alongside human activity to form a single, consistent system of record.

What “shared memory” actually looks like inside an enterprise

Shared memory, in this context, is not a single database or feature. It is the coordinated layer of historical work, decisions, and context that an AI agent can rely on to act coherently over time.

Within Asana’s implementation, this includes:

  • Work history: tasks that have been created, completed, or are still pending in a project
  • Project structure: how work is organized, who owns which responsibilities, and how teams relate
  • Connected documents: references from external systems like Microsoft 365 or Google Drive that enrich the work graph
  • Shared permissions: the same access controls that govern what human teammates can see and do
  • Action logs: a record of what the AI agent has done, side by side with human actions
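The components above can be sketched as a minimal data model. This is purely illustrative—Asana's actual work graph schema is not public, and every class and field name here is an assumption:

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class TaskState(Enum):
    OPEN = "open"
    COMPLETED = "completed"

@dataclass
class Task:
    title: str
    owner: str                  # a human or an agent identifier
    state: TaskState = TaskState.OPEN

@dataclass
class ActionLogEntry:
    actor: str                  # agent actions are logged like any teammate's
    description: str
    timestamp: datetime

@dataclass
class SharedMemory:
    """Coordinated context an agent can rely on to act coherently over time."""
    work_history: list[Task] = field(default_factory=list)
    connected_docs: list[str] = field(default_factory=list)          # external references
    permissions: dict[str, set[str]] = field(default_factory=dict)   # actor -> scopes
    action_log: list[ActionLogEntry] = field(default_factory=list)

    def record(self, actor: str, description: str) -> None:
        # Human and agent activity land in the same system of record.
        self.action_log.append(ActionLogEntry(actor, description, datetime.now()))

    def open_tasks(self) -> list[Task]:
        # Immediate visibility into what is still open or unresolved.
        return [t for t in self.work_history if t.state is TaskState.OPEN]
```

The key design point is that the action log is shared: an agent's work is queryable through the same structures as a human teammate's, which is what makes later auditing possible.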

The shared memory layer is what allows AI agents to act in ways that are both useful and accountable. Without it, agents are repeatedly “cold-started” and forced to operate on narrow prompts. With it, they can respond with awareness of the broader work context, maintain continuity across tasks, and provide traceable justifications for their actions.

Bose emphasized that this is also essential for explainability. By documenting everything the agent does in a human-readable way, the system makes it much easier for teams to understand and audit AI behavior. Instead of treating agent output as an opaque suggestion, organizations can see how and where the agent acted within the same stream of work they already use.

The result is a shift in how AI is positioned. Rather than inserting model calls into scattered endpoints, shared memory enables a more durable, orchestrated layer where multiple tools and agents can contribute to a common body of work.

Guardrails by design: checkpoints, oversight, and transparency

Embedding AI as a teammate raises an obvious question for enterprise leaders: how do you prevent that teammate from going off-track?

Asana’s answer is to build guardrails and human checkpoints directly into workflows. Bose outlined several design principles that underpin this approach:

  • Checkpoints for human review: Workflows are structured so that humans can pause, inspect, and provide feedback on what an agent is doing. Teams can ask an agent to adjust research directions, refine outputs, or correct errors before they propagate.
  • Human-readable interaction history: Feedback, decisions, and corrections are documented in clear, accessible language. This is meant to keep the system aligned with familiar patterns of human collaboration instead of burying interactions in technical logs.
  • Visible behavior instructions: The UI surfaces instructions and knowledge about how agents behave, helping users understand what an agent is designed to do—rather than treating it as a mysterious black box.
  • Admin control over misbehavior: Approved administrators can pause, edit, or redirect agents when their actions conflict with expected behavior or appear anomalous. If an agent starts acting “in a weird way,” admins can step in.
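A human checkpoint of this kind can be sketched as a review gate in front of agent actions. The class and method names below are invented for illustration, not Asana's API; the point is that nothing executes until a human has reviewed it, and feedback is kept in readable form:

```python
from dataclasses import dataclass, field

@dataclass
class ProposedAction:
    agent: str
    description: str
    approved: bool = False
    feedback: list[str] = field(default_factory=list)

class CheckpointQueue:
    """Agent actions pause here until a human inspects them."""

    def __init__(self) -> None:
        self.pending: list[ProposedAction] = []
        self.executed: list[ProposedAction] = []

    def propose(self, agent: str, description: str) -> ProposedAction:
        action = ProposedAction(agent, description)
        self.pending.append(action)
        return action

    def review(self, action: ProposedAction, approve: bool, feedback: str = "") -> None:
        # Feedback is recorded in human-readable language alongside the decision,
        # so corrections stay visible in the normal stream of work.
        if feedback:
            action.feedback.append(feedback)
        action.approved = approve
        self.pending.remove(action)
        if approve:
            self.executed.append(action)
```

A rejected action never reaches the executed list, which is how errors are caught before they propagate; the feedback trail lets the agent (or its admin) adjust direction.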

One concrete mechanism Bose described is the ability for someone with edit rights to remove conflicting instructions or configuration that may be causing misalignment. By deleting or adjusting these conflicting elements, admins can reset the agent back to the intended behavior path.

For enterprise product and IT leaders, this model has a clear appeal: AI activity is not a separate, opaque process but part of a transparent, governable workflow. Oversight is not an afterthought; it is designed into how agents access and act on shared memory.

Security, authorization, and the complexity of connecting agents

Even with shared memory and strong guardrails, there are open challenges in making AI agents work safely inside complex enterprise environments. Bose highlighted security, accessibility, and compatibility as ongoing areas of concern.

One immediate issue is authorization. To connect Anthropic’s Claude to Asana through MCP and other public APIs, users must go through an OAuth flow and explicitly grant access. On paper, this is straightforward. In practice, it depends on knowledge workers understanding:

  • That such integrations exist in the first place
  • Which OAuth grants are appropriate and safe in their environment
  • Which authorizations could unintentionally expose sensitive data or over-privilege an agent

Scaling that understanding across a large organization is a nontrivial education and governance challenge. Bose suggested that some of the complexity of direct app-to-app OAuth grants could be alleviated by centralizing around identity providers or similar mechanisms.

He pointed to the idea of a centralized listing of approved enterprise AI agents and their skills—“almost like an active directory or universal directory of agents.” Such a directory could clarify which agents are trusted, what they are allowed to do, and what systems they can touch, giving IT teams a more consistent control plane.
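Such a directory might look something like the sketch below: a registry mapping known agents to their approved skills and the systems they may touch, with unknown agents denied by default. This is a hypothetical structure—as discussed below, no such standard exists today:

```python
from dataclasses import dataclass

@dataclass
class AgentEntry:
    name: str
    skills: set[str]    # what the agent is approved to do
    systems: set[str]   # which systems it is allowed to touch

class AgentDirectory:
    """A 'universal directory of agents': a consistent control plane for IT."""

    def __init__(self) -> None:
        self._entries: dict[str, AgentEntry] = {}

    def register(self, entry: AgentEntry) -> None:
        self._entries[entry.name] = entry

    def is_authorized(self, agent: str, skill: str, system: str) -> bool:
        entry = self._entries.get(agent)
        if entry is None:
            return False    # unregistered agents are denied by default
        return skill in entry.skills and system in entry.systems
```

Default-deny is the important property: an agent that is not formally registered simply cannot act, which is the opposite of today's ad hoc app-to-app grants.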

Today, though, these patterns are emerging in a fragmented way. Beyond Asana’s own work, there is no widely accepted protocol for defining or managing shared knowledge and memory across tools. As a result, any integration where external agents want to operate on Asana’s work graph is effectively a bespoke exercise.

The missing standard: why shared memory isn’t yet plug-and-play

Bose was explicit that, at present, there is no standard protocol around shared memory and knowledge that spans tools and vendors. This is becoming a bottleneck as more partners ask how their own agents can leverage Asana’s work graph and the shared work it represents.

Without a common protocol, each connection between an external agent and Asana must be handled as a custom, case-by-case integration. That means:

  • Negotiating how the agent will interpret and use Asana’s work graph
  • Aligning on permissions and governance rules
  • Defining how shared memory is read from and written to between systems

Bose characterized these as “very custom bespoke” conversations rather than plug-and-play connectors. For enterprises trying to orchestrate multiple agents across multiple platforms, this bespoke reality increases integration cost, slows experimentation, and complicates risk management.

At the same time, interest is accelerating. Bose noted “a lot of interesting inbound asks” from partners who want their agents to operate on Asana’s shared work context. That demand underscores both the perceived value of a shared memory layer and the friction caused by its current lack of standardization.

Three hard orchestration questions enterprise leaders must answer

Amid this evolving landscape, Bose highlighted three questions he described as “extremely interesting” for AI orchestration inside enterprises. For product leaders, IT decision-makers, and AI platform architects, these questions map directly to near-term strategic decisions:

  1. How do you build, manage, and secure an authoritative list of known, approved AI agents?
    Organizations will need a way to formally register agents, define their capabilities, and keep that list accurate over time. This echoes concepts from identity management but applies them specifically to AI actors.
  2. How can IT teams enable app-to-app integrations without accidentally configuring dangerous or harmful agents?
    As more tools integrate with AI agents via protocols like OAuth and MCP, IT must balance enablement with protection. The design of policies, guardrails, and review processes will determine how safely agents can move between critical systems.
  3. How do we evolve from single-player to multi-player agent interactions?
    Today, most agent setups resemble “single-player” modes: a cloud-based AI service connects independently to Asana, or Figma, or Slack. The harder question is how to coordinate toward unified, “multi-player” outcomes where multiple agents and apps share context and work together on a common goal.
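The single-player versus multi-player distinction can be shown with a toy shared-context store. This is purely illustrative—no cross-vendor protocol for shared context exists yet:

```python
class SharedContext:
    """One context store that multiple agents read from and write to."""

    def __init__(self) -> None:
        self.notes: list[tuple[str, str]] = []  # (agent, note)

    def write(self, agent: str, note: str) -> None:
        self.notes.append((agent, note))

    def read_all(self) -> list[str]:
        # Every participant sees every other participant's contributions.
        return [f"{agent}: {note}" for agent, note in self.notes]

# Single-player: each agent keeps its own isolated context,
# so one agent's work is invisible to the others.
isolated = {"design-agent": SharedContext(), "pm-agent": SharedContext()}
isolated["design-agent"].write("design-agent", "Mockups ready")

# Multi-player: agents coordinate through one shared store
# and can build on each other's contributions toward a common goal.
shared = SharedContext()
shared.write("design-agent", "Mockups ready")
shared.write("pm-agent", "Scheduling review")
```

In the isolated setup, the pm-agent never learns the mockups are ready; in the shared setup, both agents operate on one common body of work—which is the "unified, multi-player outcome" the third question is about.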

All three questions point back to the same underlying need: a coherent orchestration layer where agents, applications, and shared memory are governed together, not as separate point solutions.

Where MCP fits in—and what it can’t yet solve

One development Bose called promising is the growing adoption of the Model Context Protocol (MCP), the open standard introduced by Anthropic. MCP is designed to let AI agents connect to external systems through a single, standard interface rather than requiring bespoke integrations between every system and every agent.

In principle, this can reduce integration overhead and make it easier for agents to draw on relevant context from multiple tools. For enterprises, a robust, widely adopted protocol like MCP could simplify how agents plug into existing systems of record and work graphs.
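At the wire level, MCP is built on JSON-RPC 2.0: an agent discovers a server's tools and then invokes them with a standard request shape, so each new system only needs to expose an MCP server once rather than a custom binding per agent. The sketch below builds a `tools/call` request; the tool name and arguments are invented for illustration:

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical tool that a work-management MCP server might expose:
request = mcp_tool_call(1, "list_open_tasks", {"project": "Q3 launch"})
```

Because the envelope is the same for every server and every tool, an agent that speaks this shape can reach any system with an MCP server—which is exactly the integration overhead the protocol is meant to remove.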

Bose was cautiously optimistic about MCP’s potential, noting that its adoption could unlock new and interesting use cases as more systems support it. However, he also stressed that there is unlikely to be a “silver bullet” standard for AI orchestration in the near term.

Protocols like MCP address part of the problem—specifically, how agents technically connect to external systems. They do not, on their own, define a universal model of shared memory, standardized governance rules, or a cross-vendor directory of agents and capabilities. Those higher-level orchestration concerns remain largely open.

For enterprise leaders, the implication is twofold: emerging standards are worth tracking and experimenting with, but internal design choices around shared memory, guardrails, and agent governance will still be decisive in the short to medium term.

What enterprise AI leaders should watch next

Shared memory is quickly moving from a theoretical concept to a practical requirement for enterprises deploying AI agents at scale. As Asana’s example illustrates, turning agents into credible teammates depends on giving them structured access to historical work, well-defined permissions, and transparent oversight mechanisms.

At the same time, security, authorization flows, and the absence of a shared-memory standard make multi-agent orchestration a complex undertaking. Bose’s three orchestration questions—how to maintain an authoritative agent list, safely enable app-to-app integrations, and progress from single-player to multi-player agent interactions—capture the core design challenges facing organizations today.

Open standards like MCP hint at a future where connecting agents to enterprise systems is more uniform and less bespoke. But for now, there is no universal playbook or single protocol that solves orchestration end to end. Enterprises will need to combine emerging standards with their own governance models, tooling choices, and experimentation.

For product leaders, IT teams, and AI architects, the next phase of AI adoption will be defined less by which model they choose, and more by how effectively they build and govern the shared memory layer that those models rely on.
