
Amber Bartholomeusz is the founder of ABMarketEdge and a CMS Critic contributor.
In early 2025, CMS Critic published "In a fast and furious tech world, CMS Kickoff 25 was a 'love letter' to slow down." At the time, it felt like a thoughtful counterweight to an industry that rarely pauses long enough to reflect.
A year later, that message reads less like a cultural observation and more like a warning that many organizations chose to politely acknowledge – and then ignore.
Instead of slowing down, the industry accelerated. AI became the headline, the shiny penny, the roadmap bullet, the board-level talking point. Every LinkedIn feed or technology blog suddenly had its own take on AI. Companies rushed to prove they were “doing AI” before they had decided what doing it well actually meant, driven by a quiet panic about being left behind.
And a lot of the fundamental work was skipped.
That is why an AI reckoning feels imminent. Not eventually. Not hypothetically. Very likely in 2026, when expectations collide with reality and the tools, workflows, and promises made start to fall apart.
The first and most obvious fault line is that AI adoption became a tactic before it was ever a strategy.
In many organizations, AI showed up as a response to pressure. Competitive pressure. Board pressure. Market pressure. But it was also a cultural pressure, amplified by social media posts, conference keynotes, event themes, and industry commentary delivered with total confidence and authority. The signal was subtle but clear: if you were not already doing it, you were behind.
The result was predictable. Tools were implemented before anyone slowed down to ask the uncomfortable questions, the ones that are actually hard to answer and don't fit neatly into a demo or a LinkedIn post.
(And no, asking ChatGPT or Claude to figure this out for you is not a strategy.)
In far too many cases, the answer to those questions is still unclear.
Instead, AI has been treated like a checkbox. If it exists, it must represent progress. If it is deployed, it must be valuable. If it produces output, it must be working.
That assumption holds until it doesn’t.
Without a clear operating model, AI initiatives turn into collections of disconnected experiments. Without standards, quality and trust are impossible to measure. Without strategy, organizations automate confusion and call it momentum.
You can’t scale what you don’t control. And right now, many organizations do not actually control their AI programs. They are running them, watching the outputs, and hoping the gaps do not surface.
There is a persistent belief that AI readiness is primarily a technology problem. It never was.
Long before AI entered the picture, most organizations struggled with interoperability for a much simpler reason: alignment. Teams operated in silos. Goals conflicted. Individual departments were rewarded for outcomes produced by the whole, even as the machine itself remained deeply fragmented and no one was clearly accountable for fixing it.
AI did not resolve that tension. It exposed it.
Models trained on fragmented inputs produce fragmented outputs. Tools introduced without cross-functional buy-in struggle to gain adoption. Systems that lack transparency generate mistrust, especially among the people expected to rely on them day to day.
Before interoperability was about APIs, it was about people.
That truth did not disappear when AI arrived. It just became easier to ignore. Technology offered an illusion of mass progress. It felt easier to buy a tool than to align teams. Easier to automate than to clarify. Easier to move fast than to build shared understanding.
AI amplifies whatever organizational dynamics already exist. In cohesive organizations, it can accelerate progress. In fragmented ones, it accelerates friction.
The third issue is the least exciting and the most consequential: data fundamentals.
Data connectedness, consistency, taxonomy, and context are deeply unsexy. They don’t demo well. They don’t create buzz. You don’t see technology influencers chasing mainstage speaking slots by talking about data hygiene, and budgets rarely materialize for work that does not look impressive on LinkedIn.
So they are postponed.
Then AI gets layered on top.
Fragmented data sources produce inconsistent outputs. Poorly defined schemas lead to confident but incorrect results. Missing context turns into misleading answers delivered at scale, often with far more confidence than accuracy.
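To make that concrete, here is a minimal, entirely hypothetical sketch in Python. The data sources, segment labels, and taxonomy mapping are invented for illustration; the point is how two systems describing the same thing with different labels produce a confidently wrong aggregate, and how a shared taxonomy (the unglamorous fundamental) restores consistency.

```python
from collections import Counter

# Two hypothetical data sources describing the same customer base,
# each using its own label for the "enterprise" segment.
crm_records = ["Enterprise", "SMB", "Enterprise", "SMB"]
billing_records = ["ENT", "ENT", "smb", "ENT"]

# Naive aggregation: one segment is counted as several different ones,
# so any downstream report or model sees a fragmented, misleading picture.
naive = Counter(crm_records + billing_records)
print(naive)  # four distinct labels for what are really two segments

# A shared taxonomy maps every source-specific label to one canonical name.
TAXONOMY = {"enterprise": "enterprise", "ent": "enterprise", "smb": "smb"}
normalized = Counter(
    TAXONOMY[label.lower()] for label in crm_records + billing_records
)
print(normalized)  # two segments, counted correctly: enterprise=5, smb=3
```

The fix is trivial at this scale; the organizational work is agreeing on the taxonomy and keeping every source mapped to it, which is exactly the friction that gets skipped.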
There is a growing body of research, including work out of MIT, that points to the importance of deliberate friction in complex systems. Small constraints, checkpoints, and moments of pause are not inefficiencies. They are what allow humans to apply judgment, verify assumptions, and catch errors before they compound.
In this scenario, data fundamentals are that friction.
When they are skipped in the rush to automate, AI does not compensate. It accelerates and often amplifies the failures. Outputs become harder to explain, harder to trust, and impossible to trace back to reliable sources. At that point, the problem stops being technical very quickly. It becomes operational, reputational, and much harder to dismiss. And that is usually when the finger-pointing starts.
The irony is that the most important work enabling effective AI is the work least likely to be prioritized because it is not flashy. But those fundamentals are the difference between AI as leverage and AI as liability.
Taken together, these patterns create the conditions for a reckoning.
Not because AI is experimental. Not because the technology does not work.
But because much of what has been built has not yet been tested under real pressure.
Tactical adoption without a clear strategic throughline. Organizational fragmentation reframed as technical complexity. Foundational data gaps obscured by polished interfaces and confident demos.
For a while, that can look like progress.
In 2026, it starts to look like exposure.
This is when it stops being theoretical. Boards want real answers, not roadmap slides. Regulators get clearer about standards and governance. Customers notice when things don’t add up. Internal teams push back on systems that create more friction than value. And claims made in 2024 and 2025 start getting measured against what actually happened, not what was promised.
And to be clear, this is not an argument against AI. It is an argument against assuming readiness.
The organizations that struggle will not be the cautious ones. They will be the ones that moved fast without thinking through how this would actually hold together. The ones that mistook visible activity for progress and assumed technology would somehow make up for the structural gaps they never addressed.
CMS Kickoff 25 talked about slowing down as a cultural counterpoint to nonstop acceleration. Looking back, it reads less like commentary and more like advice some organizations were too busy to take.
The next phase of AI will not be decided by speed alone. It will be shaped by clarity, alignment, and whether the fundamentals underneath were ever solid, especially when that work was unglamorous and easy to overlook.
The gap between hype and reality is closing fast. When it does, it will be clear who built something that can actually hold up and who just moved quickly.
The AI reckoning is coming.
2026 might be when it stops being theoretical.
