The U.S. government’s order for all federal agencies to cease using Anthropic’s technology within six months is more than a single-vendor story. It’s a live-fire test of something most enterprises still lack: a clear map of where AI actually sits inside their workflows—and inside their vendors’ workflows.
The directive assumes agencies can quickly locate every place Anthropic’s Claude models are used. The reality, for both government and commercial organizations, is that most can’t. And that visibility gap turns any sudden vendor cutoff into an operational and security crisis.
For CISOs and security leaders, the Pentagon–Anthropic situation is a concrete case study in AI supply chain risk—and a preview of what a forced AI migration could look like inside your own environment.
The Pentagon directive as a stress test

The federal directive compelling agencies to unwind their use of Anthropic over a six-month window is unprecedented in the context of AI providers. It doesn’t just affect direct contracts; it also covers indirect exposure through other vendors whose products quietly rely on Claude behind the scenes.
That scope is exactly what makes this episode such a revealing stress test. It forces a massive, complex organization—the U.S. government—to answer questions most enterprises haven’t yet asked in detail:
- Where are AI models called inside key workflows?
- Which external SaaS and platforms embed those models?
- How do you prove, on demand, that a particular provider is no longer anywhere in your stack?
Anthropic itself has said that eight of the 10 largest U.S. companies use Claude. Any contractor, supplier, or service provider in their ecosystems may inherit Anthropic exposure indirectly. For companies with Pentagon business, the new supply chain risk designation means they must now demonstrate their workflows do not touch Anthropic at all—even if the relationship is mediated through a CRM, analytics engine, or customer service tool.
That requirement surfaces a basic operational question security leaders should be asking internally: if one of your critical AI vendors were suddenly designated off-limits, how quickly could you identify, quarantine, and replace that dependency?
The visibility gap: what CISOs don’t see
The emerging data suggests that, for most organizations, the answer is “not fast.” A January 2026 Panorays survey of 200 U.S. CISOs found only 15% reported having full visibility into their software supply chains—an improvement from 3% the previous year, but still a stark minority. That gap is where undocumented AI vendor dependencies accumulate.
At the same time, frontline adoption is racing ahead of governance. A BlackFog survey of 2,000 workers in large companies reported that 49% had adopted AI tools without employer approval. Perhaps more concerning, 69% of C-suite respondents said they were comfortable with this behavior, effectively sanctioning “shadow AI” without corresponding controls.
These patterns mirror the early days of SaaS, when shadow IT proliferated ahead of central oversight. But there is an important difference: AI usage is often deeply embedded in existing workflows, not exposed as a standalone application or distinct login event. That makes it harder for traditional security tooling and processes to detect.
Merritt Baer, CSO at Enkrypt AI and former Deputy CISO at AWS, summed up the challenge in an interview: most security programs were designed for static assets, while AI is “dynamic, compositional, and increasingly indirect.” CISOs may believe they have approved a defined set of AI tools, but the actual landscape of models, sub-processors, and chained calls is far more complex and opaque.
When a vendor relationship ends overnight

For organizations that depend heavily on a single AI provider, the Pentagon–Anthropic directive is a warning about concentration risk. If a critical vendor disappears—whether for regulatory, contractual, operational, or geopolitical reasons—the impact ripples across every workflow that touches its models.
Shadow AI amplifies that risk. IBM’s 2025 Cost of a Data Breach Report found that shadow AI incidents now account for 20% of breaches, increasing the average breach cost by as much as $670,000. Unapproved or poorly understood AI usage doesn’t just complicate compliance; it materially increases exposure when things go wrong.
In the Anthropic case, a federal agency might have no direct contract, yet still be affected because a SaaS product they use calls Claude on every ticket or transaction. That indirect dependency only becomes visible when something breaks or when a compliance notification arrives—by which point the organization is reacting under pressure.
Baer emphasized that “models are not interchangeable.” Swapping vendors alters far more than an endpoint URL. Output formats, latency, safety filters, and even hallucination profiles can differ significantly. Migrating isn’t a simple API switch; it requires revalidating controls as well as functionality.
She described a typical disruption sequence: initial triage and blast radius assessment, followed by analysis of behavioral drift under the new model, and finally a wave of credential rotation and integration changes. Rotating keys is relatively simple; unwinding hardcoded dependencies, vendor SDK assumptions, and elaborate agent workflows is where migrations tend to fail or drag on.
Why AI dependencies are harder to detect than SaaS
Security teams have experience dealing with unsanctioned SaaS. Over the past decade, many organizations deployed cloud access security brokers (CASBs), tightened SSO, and correlated spending patterns to discover new applications in use. Those approaches worked because SaaS left visible traces: new domains, distinct logins, new data stores, and line items in billing systems.
AI services often do not surface in the same way. Dependencies can sit several layers down inside vendor products, invoked at runtime rather than installed or provisioned like traditional applications. A senior defense official reportedly described the effort to unwind Claude from Defense Department workflows as an “enormous pain in the ass.” If that is the view from one of the best-resourced security operations in the world, most enterprises should assume their own disentanglement would be slower and more chaotic.
Baer drew a clear contrast: shadow IT with SaaS was “visible at the edges,” she said, while AI dependencies are “embedded inside other vendors’ features, invoked dynamically rather than persistently installed, non-deterministic in behavior, and opaque.” In many cases, customers do not even know which model or provider is being used under the hood.
For CISOs, this means that traditional inventory approaches—based on vendor lists, contract repositories, or high-level architecture diagrams—are no longer enough. The most consequential dependencies may live inside features and microservices that appear benign in standard asset inventories.
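One practical starting point is to mine logs you already collect. As a rough sketch, and assuming your egress or proxy logs are available as newline-delimited JSON with a source-service and destination-host field (that format, and the hostname map below, are illustrative assumptions rather than a standard), a small scan can show which internal services are already calling known model providers:

```python
"""Sketch: surface AI-provider calls from existing egress/proxy logs.

Assumptions (adjust to your environment):
- logs are newline-delimited JSON with "src_service" and "dest_host" fields
- the provider-to-hostname map is illustrative, not exhaustive
"""
import json
import sys
from collections import defaultdict

# Illustrative hostnames; verify against your own traffic and vendor documentation.
MODEL_PROVIDER_HOSTS = {
    "api.anthropic.com": "Anthropic",
    "api.openai.com": "OpenAI",
    "generativelanguage.googleapis.com": "Google",
}

def scan(log_path: str) -> dict[str, set[str]]:
    """Return {provider: set of internal services observed calling it}."""
    hits: dict[str, set[str]] = defaultdict(set)
    with open(log_path, encoding="utf-8") as fh:
        for line in fh:
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip malformed lines rather than aborting the scan
            provider = MODEL_PROVIDER_HOSTS.get(event.get("dest_host", ""))
            if provider:
                hits[provider].add(event.get("src_service", "unknown"))
    return hits

if __name__ == "__main__":
    for provider, services in sorted(scan(sys.argv[1]).items()):
        print(f"{provider}: called by {', '.join(sorted(services))}")
```

A scan like this only catches calls that traverse infrastructure you log. Calls made from inside a SaaS vendor's own environment, the indirect exposure the directive targets, will not appear, which is why the vendor-disclosure step described below still matters.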
Four concrete moves CISOs can make in 30 days
The Anthropic directive did not create the AI supply chain visibility problem; it simply forced one organization to confront it on a fixed timeline. Baer argues that waiting for a similar event to hit your environment is the wrong strategy. Instead of a vague mandate to “inventory AI,” she recommends four specific, time-bound actions that a security leader can initiate immediately.
- Map execution paths, not just vendors. Rather than starting from contract lists, instrument at the gateway, proxy, or application layer to log which services are making model calls, to which endpoints, and with what data classifications. The goal is to build a live map of AI usage across your environment, including indirect calls, not just a static list of suppliers. A minimal instrumentation sketch follows this list.
- Identify control points you actually own. If your only leverage is at the external vendor boundary, you are already constrained. Establish and reinforce controls at three layers: ingress (which data is allowed into models), egress (what outputs may flow downstream, to whom, and under what policies), and orchestration (where agents, pipelines, and higher-level AI workflows are coordinated). These are the points where you can enforce policy even as providers, models, or hosting arrangements change; the second sketch below shows one possible shape.
- Run a “kill test” on your top AI dependency. In a non-production environment, simulate the immediate loss of your most critical AI vendor by disabling its API credentials. Monitor the system for 48 hours and document what breaks outright, what degrades silently, and which failure modes fall outside existing incident response playbooks. This exercise is designed to surface undocumented dependencies and brittle integration points before a real cutoff forces you to discover them in production. A sketch of a test harness appears after this list.
- Force vendor disclosure on sub-processors and models. Require your AI vendors to disclose which models they rely on, where those models are hosted, and what fallback paths exist if a provider becomes unavailable. If a partner cannot answer these questions, you have discovered a fourth-party blind spot. These conversations are easier while the relationship is stable; once a cutoff hits, leverage evaporates and answers, if they come, may be too late to inform an orderly response.
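For the first move, one possible shape of execution-path instrumentation is a thin internal wrapper that every service uses for outbound model calls, so each call is recorded with its destination and data classification before it leaves your environment. The helper name, log schema, and classification labels below are assumptions for illustration, not a prescribed standard:

```python
"""Sketch: audit-log every outbound model call with endpoint and data classification.

Assumes calls are funneled through an internal helper; the schema is illustrative.
"""
import json
import logging
import urllib.request
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_call_audit")

def call_model(endpoint: str, payload: dict, data_classification: str, caller: str) -> bytes:
    """Forward a model call, emitting an audit record for the execution-path map."""
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "caller": caller,                       # which internal service made the call
        "endpoint": endpoint,                   # which provider endpoint it hit
        "classification": data_classification,  # e.g. "public", "internal", "regulated"
    }))
    request = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},  # auth headers omitted for brevity
    )
    with urllib.request.urlopen(request) as response:
        return response.read()
```

Aggregating these records over time produces the live usage map the first move calls for, and it doubles as the inventory you would triage from in a forced migration.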
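For the second move, the ingress and egress layers can start as simple policy checks that run before data is sent to a model and before model output is released downstream. The allow-lists and labels here are hypothetical placeholders for whatever your data governance program already defines:

```python
"""Sketch: ingress/egress policy gates that you own, independent of any one provider.

The classification labels and destination names are illustrative placeholders.
"""

# Ingress: which data classifications may be sent to externally hosted models.
ALLOWED_INGRESS_CLASSIFICATIONS = {"public", "internal"}

# Egress: which downstream destinations may receive model output without review.
ALLOWED_EGRESS_DESTINATIONS = {"internal-dashboard", "ticketing-queue"}

def check_ingress(data_classification: str) -> None:
    """Block disallowed data from leaving for an external model."""
    if data_classification not in ALLOWED_INGRESS_CLASSIFICATIONS:
        raise PermissionError(
            f"Data classified '{data_classification}' may not be sent to external models."
        )

def check_egress(destination: str) -> None:
    """Block model output from flowing to unapproved downstream systems."""
    if destination not in ALLOWED_EGRESS_DESTINATIONS:
        raise PermissionError(
            f"Model output may not flow to '{destination}' without review."
        )
```

Because these gates live in your own code rather than in the vendor's product, they survive a provider swap unchanged, which is the point of owning the control point.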
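For the kill test, one way to structure the exercise in a non-production environment is a small harness that removes the provider credential, runs a smoke test for each dependent workflow, and records what fails versus what merely degrades. The environment-variable name and test callables are assumptions about how your own system is wired:

```python
"""Sketch: a non-production kill test for a critical AI dependency.

Assumes each dependent workflow exposes a smoke-test callable and that the
provider credential lives in an environment variable; both are assumptions.
"""
import os
from typing import Callable

def kill_test(
    smoke_tests: dict[str, Callable[[], None]],
    credential_env_var: str = "AI_PROVIDER_API_KEY",  # hypothetical variable name
) -> dict[str, str]:
    """Temporarily remove the provider credential and exercise each workflow."""
    saved = os.environ.pop(credential_env_var, None)  # simulate the sudden cutoff
    results: dict[str, str] = {}
    try:
        for workflow, test in smoke_tests.items():
            try:
                test()
                results[workflow] = "passed (check separately for silent degradation)"
            except Exception as exc:  # capture every failure mode, not just expected ones
                results[workflow] = f"failed: {type(exc).__name__}: {exc}"
    finally:
        if saved is not None:
            os.environ[credential_env_var] = saved  # restore after the exercise
    return results
```

The 48-hour observation window in the move above matters because some failures, such as queued jobs or nightly batch steps, only surface well after the credential disappears; a single harness run is a starting point, not the whole test.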
All four moves can be framed as 30-day initiatives—small enough to start now, but substantial enough to materially improve your understanding of AI risk and your resilience to vendor disruption.
Beyond the illusion of control
Baer characterizes much of today’s AI governance as a “control illusion.” Enterprises take comfort in having approved certain AI vendors, but what they have truly approved is an interface—an API, a dashboard, a product name—rather than the deeper, shifting system of models and sub-processors behind it. Under stress, those deeper dependencies are where things fail.
The Pentagon–Anthropic directive is one organization’s weather event. Others will face their own, triggered by different forces: new regulations, contractual disputes, service outages, or geopolitical shifts. The common thread is that each event will test how accurately an organization understands its AI supply chain and how quickly it can re-route critical workflows.
For security leaders, the takeaway is straightforward but urgent. Map AI vendor dependencies down to sub-tier providers. Build execution-path visibility rather than relying solely on contract inventories. Run the kill test before someone else runs it for you. And push vendors for clear disclosure while you still have negotiating leverage.
The next forced migration may not come with six months’ warning. The work you do now to surface hidden AI dependencies will determine whether your organization absorbs that shock as a contained incident—or scrambles for control in the middle of a crisis.
