
Why CIOs Need to Champion AI Experimentation Inside the Enterprise

AI is arriving in enterprises with a mix of urgency, anxiety and hype. Boards want to “do something with AI,” vendors are promising transformational gains and regulators are sharpening their focus. In this environment, CIOs can feel pressure to respond with a perfectly architected AI roadmap and airtight governance framework before allowing any experimentation.

The experience shared by Workday CIO Rani Johnson suggests that this instinct, while understandable, is risky. The greater danger is not getting every AI decision wrong on day one, but waiting for certainty while the rest of the world learns by doing. In her experience, real impact starts not with grand, flawless strategies but with access, trust and structured, hands-on learning.

For technology leaders, that means shifting from being primarily AI governors to becoming active champions of experimentation — setting direction, creating safe spaces to try, learn and adjust, and ensuring that lessons from early pilots shape broader strategy.

The new mandate for CIOs: from gatekeepers to AI experimentation leaders


Historically, IT functions operated as centralized control centers. They chose systems, managed access and enforced standards. That model had already been challenged by SaaS, which put powerful, business-ready tools directly into the hands of users and line-of-business leaders. AI represents an even more profound shift in this trajectory.

According to Johnson, the pattern is familiar: each major technology wave — from early expert systems to online commerce to SaaS — has been met with resistance and skepticism. She recalls building an early expert system to help people choose outfits and hearing that “people would never buy clothing online.” In hindsight, those objections were shortsighted, but at the time they reflected a defensive posture toward unfamiliar technology.

In her subsequent CIO roles, particularly in state and local government, she saw how habitual caution translated into concrete costs: missed learning, stalled innovation and a culture that struggled to look ahead. Waiting until technologies felt safe and fully proven often meant waiting until the advantage had shifted elsewhere.

Applied to AI, the implication is clear. CIOs who limit their role to risk management and policy enforcement will struggle to keep pace with organizations that are actively learning what AI can and cannot do in their specific context. Guardrails are necessary, but they are not sufficient. IT leaders need to demystify AI, make it accessible and create conditions where employees can experiment responsibly.

This reframes the CIO mandate in three ways:

From blocking to enabling: Instead of reflexively slowing new tools, CIOs identify where controlled AI access can safely accelerate discovery and value.

From static strategy to iterative navigation: Rather than assembling a multi-year AI blueprint in isolation, they treat strategy as something informed and refined through real-world experiments.

From “IT’s AI” to “the enterprise’s AI”: They ensure that AI exploration is distributed — across functions and roles — while still anchored in common principles for security, ethics and reliability.

Inside Workday’s approach: start small, accessible and iterative

Workday’s internal AI journey, as described by Johnson, illustrates a practical version of this leadership stance. The company did not begin with a fully baked, end-to-end AI transformation plan. Instead, it took a deliberate, iterative path aimed at quickly getting AI into the flow of work.

Workday focused first on awareness and accessibility rather than custom-built, high-stakes AI projects. The team rolled out AI capabilities that were already available within tools employees were using every day. The intent was straightforward: make AI feel intuitive and helpful, not exotic or risky.

This approach served multiple purposes grounded in day-to-day reality:

• It lowered the barrier to entry. Employees did not have to adopt a new platform or drastically change their workflow to encounter AI; they discovered it where they already worked.

• It generated organic use cases. Once people had access, they began to find their own ways to incorporate AI into tasks, bringing forward examples that leadership might not have predicted.

• It demystified the technology. Seeing AI assist with practical tasks made it less abstract and more concrete, easing some of the fear and doubt that often accompany new tools.

The lesson for other CIOs is not that they must follow Workday’s exact tool choices, but that they can create early momentum by embedding AI into familiar environments, then listening carefully to how employees actually use it. Strategy, in this model, is informed as much by bottom-up discovery as by top-down planning.

Why trust and access matter more than polished AI roadmaps


Simply turning features on, however, is not enough. Workday found that employees needed help understanding what AI could do for them and how to use it effectively. To bridge this gap, the company launched an AI Champions initiative.

These champions were hand-selected from different teams and focused on sharing persona-based use cases — examples that reflected the specific work, pain points and goals of their peers. Instead of generic demos, they showcased how AI was actually improving workflows in their own domains.

This peer-to-peer model proved powerful for building trust:

• It made AI adoption feel less like a top-down mandate and more like a shared opportunity discovered within teams.

• It surfaced real-world patterns and pitfalls more quickly, because champions could see where colleagues struggled or found value.

• It created a distributed support network, so employees had someone nearby to ask for guidance.

As Workday progressed from general AI capabilities to what Johnson calls “functional AI” — more complex applications tailored to specific business areas — the importance of this trust became even clearer. Deeper integrations inevitably bring more nuance and risk, making it essential that teams feel safe to surface issues and learn from early missteps.

For CIOs, the broader takeaway is that trust is not a by-product of perfect technology; it is built through transparent experimentation. Designated champions, clear communication about what AI can and cannot do, and real examples from peers all help move the conversation from fear and hype to grounded understanding.

Rethinking ROI: learning, speed and small bets over perfect business cases

Leading AI experimentation also requires rethinking how to evaluate AI investments. Workday created an AI Advisory Council bringing together leaders to guide AI-related decisions. In the process, they realized that traditional, rigid ROI criteria were poorly suited to such a fast-evolving technology.

Many early AI initiatives will not deliver immediate, easily quantifiable financial returns. Some may remain exploratory or get redirected as organizations learn what works. Yet, as Johnson notes, these efforts can still create substantial value in other ways:

Organizational learning: Experiments reveal where AI is well-suited, where it is not and what prerequisites (data quality, process changes, skills) are needed for success.

Speed and responsiveness: Small teams can often build useful tools in weeks, not months, shortening feedback loops and surfacing opportunities faster.

Discovery of new possibilities: Pilots can uncover use cases leadership had not originally considered, reshaping priorities for future investment.

Johnson points to an example where a small team with minimal resources built a tool to support earnings reports in just weeks. Beyond the specific utility of that tool, the project demonstrated how quickly a motivated group could create something valuable, informing how future work might be structured and resourced.

In this mindset, small mistakes are not failures but inputs. Contained experiments, even when they do not scale, provide information that helps avoid larger, more expensive missteps later. Conversely, waiting until AI technologies are fully mature in the market can mean missing the period when experimentation yields the most insight and competitive differentiation.

CIOs can support this shift by adjusting evaluation mechanisms. Advisory councils, stage-gated funding for pilots and criteria that explicitly value learning and iteration — not just short-term financial return — can all signal that experimentation is both expected and supported.

Designing a culture where everyone can experiment with AI


Underpinning all of this is culture. Johnson argues that the key to successful AI adoption is fostering an environment where learning and experimentation are normalized across the organization, not limited to data scientists or specialized teams.

In practice, that means ensuring employees at all levels — developers and non-developers, executives and individual contributors — have meaningful opportunities to work directly with AI tools. Some organizations are asking employees to train AI models or to practice prompt engineering, which helps demystify how AI systems behave. Workday is encouraging teams to write prompts and train chatbots so that AI becomes a “copilot” in their daily tasks, not a distant, opaque system.

The analogy Johnson uses is athletic training: consistent practice leads to better performance. The goal is for employees to feel that AI makes their work faster, better and ultimately more meaningful and satisfying — not merely more automated or monitored.

There is also a personal, human dimension. Johnson points to her mother’s relationship with a voice assistant as a reminder of how seamlessly technology can integrate into daily life when it is genuinely helpful. This kind of familiarity and comfort is exactly what enterprises should aim for with AI tools: unobtrusive, supportive assistance that becomes part of how work gets done.

For CIOs, building this culture involves:

Providing structured practice spaces: Sandboxes, internal hack days, or guided exercises where employees can safely learn how AI behaves.

Normalizing iteration: Making it clear that rough first attempts are expected and that refinement over time is part of the process.

Recognizing and sharing stories: Highlighting concrete examples where AI helped a person or team do better work, reinforcing the narrative that AI is a partner rather than a threat.

Practical steps for CIOs ready to lead AI experimentation

For CIOs and technology leaders looking to move from intention to action, Johnson’s experience points to a set of pragmatic, sequence-friendly moves:

1. Start with awareness, not perfection. Communicate clearly that AI is a strategic priority and that the organization will be learning in public. Emphasize that waiting for a flawless plan is not an option in a fast-moving environment.

2. Enable safe, accessible experimentation. Turn on appropriate AI capabilities within existing tools where the risk profile is manageable. Pair this with clear usage guidelines and basic training so employees are not experimenting blindly.

3. Identify and empower AI champions. Select motivated individuals across functions to serve as early adopters, coaches and storytellers. Give them time and support to gather use cases, answer questions and share practical tips.

4. Formalize learning loops. Use mechanisms such as an AI Advisory Council or structured review forums to capture what pilots are revealing — about data, processes, governance and skills — and feed those lessons into evolving policies and investments.

5. Redefine how you measure progress. In addition to traditional ROI, track indicators like number of pilots run, time from idea to prototype, employee confidence using AI tools and cross-functional participation.

6. Model the behavior at the top. CIOs and senior leaders can use AI in their own workflows and share how they are experimenting. Visible leadership use helps legitimize exploration and reduces stigma around asking basic questions.

None of these steps eliminates the need for governance. Security, privacy, compliance and ethical use remain critical responsibilities. But by placing experimentation and learning at the center, CIOs can fulfill a dual role: protecting the organization while ensuring it does not fall behind in understanding a transformative technology.

Conclusion: the future of work needs CIOs to move first

Enterprises have seen this story unfold with previous technology waves: early doubts about online shopping, skepticism toward SaaS, and concerns that new models would be too risky or unfamiliar. In many cases, the cost of caution only became clear years later, when more adaptable competitors had already embedded those technologies into their operations and cultures.

AI is following a similar path, but on a compressed timeline. Waiting for a mature, risk-free AI landscape is unlikely to be a viable strategy. Johnson’s message to fellow CIOs is straightforward: do not let fear or the pursuit of perfection be the reason your organization sits on the sidelines.

By building awareness, making AI tools accessible, empowering internal champions, redefining investment criteria and nurturing a culture of experimentation, CIOs can help their organizations develop the practical AI fluency they will need for the future of work. The opportunity — and the responsibility — is to lead from the front, not only by governing AI, but by actively shaping how people learn to work with it.

For enterprise technology leaders, that is the new mandate: become champions of AI experimentation, so that your organization can discover, safely and deliberately, what this technology is really capable of in your context.
