
Agentic AI Didn’t Fail CMS Teams. Our Expectations Did.


Nabil Orfali
9 mins
An android standing under the glowing letters "CMS" against a landscape at dusk.

Agentic AI is already delivering real value in CMS work. Why are we operating like it isn't?

 

Nabil Orfali is the founder and CEO of TechGuilds and Kajoo.ai and a CMS Critic contributor.


 

I walked into CMS Kickoff 2026 expecting to talk about the future. 

Instead, both roundtable sessions I led ended up being about the present—and why it feels so uncomfortable. 

We weren’t debating whether agentic AI works anymore. That question is mostly settled. What we were really circling around was something harder to admit: 

Agentic AI is already delivering real value in CMS work, and we’re still operating like it isn’t. 

Across two sessions, with people who actually ship things for a living, a pattern emerged. Not optimism. Not fear. Something more pragmatic. 

A quiet acknowledgment that the technology moved faster than our operating models—and now there’s a gap. 

This “gap” matches what the macro research is now showing. McKinsey’s 2025 global survey reports 88% of respondents say their organizations use AI in at least one business function, up from 78% a year earlier—yet most organizations still report they haven’t fully scaled it across the enterprise. That “use vs. scale” delta is exactly what CMS delivery teams are feeling: experimentation is easy; operationalization is hard. 

Stanford’s AI Index similarly describes business usage accelerating and investment continuing to rise, but with uneven ability to translate usage into a durable operating advantage.

The Work Is Already Being Done 

At some point during the first session, someone casually mentioned that their team was seeing over 70% acceleration on parts of CMS delivery. Another mentioned that entire categories of work—SEO metadata, image handling, accessibility checks, localization—were already largely automated. 

No one reacted. 

That’s when it hit me. 

If this were 2023, those statements would have stopped the room. In 2026, they barely registered. Not because they weren’t impressive—but because they were familiar. 

Agentic AI isn’t theoretical anymore. It’s doing the work. Quietly. Reliably. Often better than expected. 

And yet, no one in either session claimed they were running fully autonomous CMS delivery. 

Not because the agents failed—but because we don’t trust them enough to let go. 

The measurable adoption and productivity signals back this up: 

In real workplace data, generative AI has moved from novelty to routine. NBER reporting on workplace adoption found 28% of employed respondents used generative AI at work, with 10.6% using it every workday (in the measured period).

When AI is applied to repeatable knowledge-work tasks, controlled studies show consistent productivity lift. A widely cited NBER field experiment in customer support found a ~14% productivity increase overall, with much larger gains for less experienced workers.

In software development specifically (relevant because CMS delivery is often engineering-heavy), Microsoft Research’s controlled experiment found developers using GitHub Copilot completed a task 55.8% faster.

Why this matters to CMS work: the roundtable outcomes were directionally consistent with these results. AI can compress the “execution time” of repeatable tasks dramatically, while the remaining bottleneck shifts to review, governance, and decision-making, the parts of delivery we historically underpriced or treated as overhead. 

Autonomy Isn’t the Goal. Confidence Is. 

One of the biggest misconceptions I still see is that “agentic” means autonomous in the absolute sense. 

Push a button. Walk away. Come back later to perfection. 

That fantasy didn’t survive contact with real CMS work. 

What did survive were more grounded patterns: 

  • Agents owning clearly defined tasks 
  • Humans approving approaches, not outcomes 
  • Sampling instead of exhaustive review 
  • Reports instead of blind faith 

In other words, the same way we already manage junior team members. 

Someone made an offhand comment that stuck with me: “We already trust interns with production changes—we just pretend we don’t.” 

Exactly. 

Agentic AI doesn’t need blind trust. It needs earned trust, built through visibility, constraints, and feedback loops. 

That’s not a technology problem. It’s an organizational one. 

The numbers behind “trust” and why verification becomes the new work: 

A recent developer survey summarized by TechRadar reported 96% of developers don’t fully trust AI-generated code, and only 48% say they always check it before committing. This is a textbook recipe for “it looked right” failures—exactly what CMS teams fear when AI is allowed to change production content, templates, personalization rules, or SEO fields at scale. 

In enterprise workflow terms, verification isn’t hypothetical overhead. A Zapier survey (reported by ITPro) found that employees spend ~4.5 hours per week correcting poor AI outputs, and 75% reported negative consequences from AI errors. 

Translation to CMS delivery: 
This is why the roundtable consensus emphasized guardrails like prompt templates, change reports, phased rollouts, and level-based verification (especially for higher-risk content domains like healthcare and legal). What teams want is not autonomy—they want confidence at speed.  
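The level-based verification idea can be made concrete with a small sketch. This is an illustrative assumption, not a described implementation: the domain names, policy labels, and the rule that unknown domains default to the safest policy are all hypothetical.

```python
# Hypothetical sketch: route AI-generated content changes to a verification
# level based on content-domain risk, in the spirit of the roundtable's
# guardrails. Domains, policy names, and defaults are illustrative.

RISK_POLICY = {
    "healthcare": "full_review",    # every change reviewed by a human
    "legal":      "full_review",
    "marketing":  "sample_review",  # only a random sample is reviewed
    "blog":       "auto_approve",   # change report only, no gate
}

def verification_level(domain: str) -> str:
    """Return the review policy for a content domain, defaulting to the safest."""
    return RISK_POLICY.get(domain, "full_review")

changes = [
    {"id": 1, "domain": "blog"},
    {"id": 2, "domain": "healthcare"},
    {"id": 3, "domain": "unmapped-domain"},
]

for change in changes:
    print(change["id"], verification_level(change["domain"]))
```

The design point is the default: anything the policy table does not recognize gets the strictest treatment, so new content types fail safe rather than fast.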

The Orchestration Shift Most People Miss 

The most technical, and most important, insight came up when we started talking about migrations at scale. 

Twenty thousand pages. Fifty thousand content items. 

If your instinct is to point a language model at that problem and let it churn through items one by one, you’re already in trouble. 

The smarter teams aren’t asking agents to do the work. 

They’re asking agents to design how the work gets done. 

Generate the migration scripts. 
Define the mappings. 
Set the rules. 
Hand execution to observable systems. 
Review structured output. 

That shift—from execution to orchestration—is subtle, but it’s where agentic AI stops being expensive and starts being scalable. 

And it’s where a lot of hype-driven implementations quietly fall apart. 
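The execution-to-orchestration shift can be sketched in a few lines. This is a minimal, assumed illustration: the field names, the mapping-spec shape, and the idea of an agent returning it as data are hypothetical stand-ins for whatever a real migration pipeline would use.

```python
# Hypothetical sketch of the orchestration pattern: the agent's output is a
# *mapping spec* (plain data), and a deterministic, observable script applies
# it to every content item. Field names and spec shape are illustrative.

# What an agent might design once, instead of touching 20,000 items itself:
mapping_spec = {
    "field_map": {"title": "headline", "body": "content_html"},
    "defaults":  {"locale": "en-US"},
}

def migrate_item(item: dict, spec: dict) -> dict:
    """Apply the agent-designed mapping to one legacy item, deterministically."""
    migrated = dict(spec["defaults"])  # start from spec-level defaults
    for old_field, new_field in spec["field_map"].items():
        if old_field in item:
            migrated[new_field] = item[old_field]
    return migrated

legacy_items = [{"title": "Hello", "body": "<p>Hi</p>"}]
report = [migrate_item(item, mapping_spec) for item in legacy_items]
print(report)  # structured output a human can review, diff, and audit
```

Because the executor is plain code, every run is repeatable and traceable; the model's nondeterminism is confined to the one artifact a human actually reviews, the spec.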

Why this is economically and operationally rational (with numbers): 

LLM usage at scale has two costs: compute/token cost and risk cost (errors propagated across thousands of assets). The roundtable conclusion—“agent writes the migration script; observable automation runs it”—is how you avoid both the token burn and the untraceable failure modes. 

This approach aligns with what we’re seeing in broader enterprise patterns: organizations are pushing AI “upstream” into planning, generation, and decision support rather than letting it “loop” through massive execution blindly. Deloitte’s enterprise GenAI research similarly emphasizes that scaling requires process redesign and operating-model shifts—not just model access.

How to make this real in CMS programs (the practical layer CMS Critic readers care about): 

This is where sampling and evaluation become the safety valve. The roundtable examples were concrete: validate a statistically meaningful sample (e.g., 500 checks out of 20,000 pages) and perform deeper validation for high-risk assets, then feed results back into the implementation agent. That pattern is how you get scale without pretending perfection. 
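The 500-of-20,000 sampling pattern can be sketched as follows. The sample size, the fixed seed, and the function name are assumptions for illustration, not a prescribed method.

```python
# Hypothetical sketch of sample-based validation: check a fixed random sample
# of migrated pages rather than all of them. The seed makes the audit sample
# reproducible; size and seed values are illustrative assumptions.
import random

def sample_for_review(page_ids: list, sample_size: int, seed: int = 42) -> list:
    """Pick a reproducible random sample of pages for human validation."""
    rng = random.Random(seed)  # fixed seed: the same audit sample every run
    return rng.sample(page_ids, min(sample_size, len(page_ids)))

all_pages = list(range(20_000))  # e.g. a 20,000-page migration
to_check = sample_for_review(all_pages, sample_size=500)
print(len(to_check))  # 500 pages to validate out of 20,000
```

High-risk assets would bypass sampling entirely and go through the full-review gate; the sample covers only the long tail, and its failure rate feeds back into the implementation agent.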

Why Enterprises Still Hesitate (And Why They’re Not Wrong) 

Despite all the progress, enterprise hesitation isn’t irrational. 

In both sessions, the same concerns came up again and again: 

Where does our data go? 
Who owns the outputs? 
Can we run this inside our walls? 
What happens when our knowledge trains someone else’s model? 

These aren’t edge cases. They’re core to regulated industries, global brands, and anyone whose CMS isn’t just a website—it’s institutional memory. 

Agentic AI adoption doesn’t fail because models aren’t good enough. It stalls because governance hasn’t caught up. 

And governance, unlike prompts, doesn’t improve overnight. 

Hard evidence that enterprise concerns are structural, not “fear”: 
Cisco’s Data Privacy Benchmark Study (surveying 2,600 security and privacy professionals across 12 countries) found: 

  • 92% view GenAI as fundamentally different, requiring new techniques to manage data and risk 
  • 69% cite concern that GenAI could hurt the organization’s legal and IP rights 
  • 68% worry information entered could be shared publicly or with competitors 
  • 68% worry results can be wrong

Why that maps directly to CMS: 
CMS programs are high-leverage because they touch brand, legal disclaimers, product claims, regulated content, personalization logic, and customer data. That makes “data governance + auditability” not a checkbox, but the adoption gate. The roundtable’s emphasis on self-hosting options and specialized models for regulated use cases fits what security leadership is explicitly signaling. 

The Conversation Everyone Is Avoiding: Pricing 

Eventually, the discussion always comes back to money. 

Because once delivery gets faster—much faster—the old math breaks. 

If something takes five hours instead of fifty, but creates the same (or more) business value, what exactly are we charging for? 

This is where the room usually gets quiet. 

Hourly models don’t collapse because AI exists. They collapse because they stop making sense when execution is no longer scarce. 

The more honest agencies in the room admitted what many are thinking: delivery is becoming the cheapest part of the engagement. 

The value is moving upstream—strategy, discovery, integration decisions, governance, experimentation. 

AI didn’t take that value away. It exposed where it always lived. 

Evidence that pricing model pressure is real (and already underway): 

  • In professional services, pricing strategists are explicitly calling out that GenAI will force changes to “decade-old pricing models” because automation erodes the link between effort and value. 
  • TSIA’s research notes that the transition away from cost-plus pricing remains slow but is actively moving toward consumption- and value-based models, largely driven by customer expectations and new delivery economics.
  • Mainstream business outlets have also begun documenting companies rethinking pricing strategies in the AI era as automation changes what customers perceive as worth paying for. 

How this shows up specifically in CMS: 
If AI compresses build time, agencies either (a) race to the bottom on price, or (b) reposition around outcomes: conversion lift, SEO gains, speed-to-market, governance maturity, experimentation velocity, localization coverage, and operational cost reduction. The roundtables called out exactly this migration of value: strategy, discovery, roadmaps, SEO, integrations, A/B testing, conversion optimization. 

What 2026 Is Actually Asking of CMS Leaders 

After two sessions, dozens of perspectives, and a lot of nodding heads, I don’t think the takeaway is dramatic. 

It’s uncomfortable, but simple:

Stop asking whether agentic AI is ready. 
Start asking whether your organization is. 

Because the teams succeeding right now aren’t the ones chasing autonomy. 

They’re the ones redesigning trust. 

And that turns out to be the hardest part of all. 

A final data point that frames the urgency: 
BCG has quantified the “agentic” moment as a measurable category of value creation—estimating AI agents account for ~17% of total AI value in 2025, projected to reach 29% by 2028, and noting that more “future-built” companies are explicitly allocating budget to agents and deploying them earlier. 

That’s the competitive reality CMS leaders are walking into: not whether AI exists, but whether you can operationalize it faster than your peers—safely, observably, and with a commercial model that survives success. 

 



Agentic AI
CMS Kickoff 2026
AI
AI Agents
artificial intelligence
Boye & Company
Nabil Orfali
Opinion

©2026 CMS Critic. All rights reserved.