Note: This article is available in English only.
Below the Model
Why do most AI initiatives stall before production - and what Red Hat Summit 2026 will reveal about the work that is actually left.
Highlights // Matthias Krohnen (Chief Transformation Officer) // 27.04.2026
Walk the floor at any major AI event this year - NVIDIA GTC, Red Hat Summit, DTW Ignite - and the message is unmistakable: the models are ready, the platforms are production-grade, the agentic era has begun. Every analyst deck shows the curve bending upward. Every vendor booth promises that 2026 is the year of AI at scale.
And yet - in the transformation programs we work across Europe, a different picture emerges. The pilots are plentiful. The business cases are written. The executive sponsorship is in place. But the path from a successful pilot to a production-grade AI capability - one that lives inside the business, on governed data, with owned operations - remains stubbornly narrow.
The industry story is about models. The company story is about everything underneath them.
That is the conversation I am going to Atlanta to have.
The production gap is real, and it is widening
The numbers have started to catch up with what practitioners have been seeing. Gartner estimates that at least 30% of generative AI projects will be abandoned after proof of concept by the end of 2025 - not because the models failed, but because of poor data quality, inadequate risk controls, escalating costs, or unclear business value. Industry surveys consistently show that fewer than one in five AI pilots make it into sustained production.
The instinct is to treat this as a transient problem: better models, better tooling, another platform iteration, and the production rate will rise. I think the diagnosis is wrong - for the same reason the cloud-native diagnosis was wrong a decade ago. The bottleneck has moved; the conversation has not.
The model layer is not where production AI is won or lost. The platform layer is not where production AI is won or lost. Both are largely solved problems, or soon will be. The layer that decides whether an AI use case ever reaches the business is the one almost nobody demos - governed data, a modern application and platform core, industrialized delivery, and an identity and access fabric robust enough to expose these capabilities to real users and partners.

„The model is the last 10% of the work. The first 90% is whether your organization can put governed, trustworthy data in front of that model, in production, repeatably, under compliance.”
Matthias Krohnen, Chief Transformation Officer
What "foundation" actually means
The word is overused to the point of being meaningless. In our experience, foundation - in the context of production AI - consists of four things.
Governed data. Not a data lake. Not a warehouse. A governed asset, with ownership, lineage, consent management, and regulatory readiness. Most AI use cases that stall, stall here. The model is willing; the data estate is not.
A modern platform and application core. Containerized workloads, a credible hybrid cloud footprint, and a declarative operating model. OpenShift is our assumption, but the broader point is architectural: if the underlying IT landscape cannot be released weekly, it cannot adapt to an AI workload monthly.
Industrialized delivery. CI/CD, test automation, release governance. AI amplifies whatever delivery culture sits underneath it. A high-velocity delivery organization turns AI into a product. A low-velocity one turns it into another backlog.
Identity and access at enterprise scale. Every externally consumed AI capability - customer-facing, partner-facing, agent-facing - is an identity problem before it is a model problem. SSO, MFA, consent, fine-grained access, all at the scale and availability the business already depends on.
None of this is glamorous. All of it is decisive.
Why Red Hat Summit 2026 is the right place to have this conversation
Red Hat’s agenda for 2026 maps directly onto the foundation problem, which is why we are going to Atlanta rather than to a pure-AI event. Three themes in particular.
VMware to OpenShift. vSphere 8 end-of-life in October 2027 forces most enterprises into a planned architectural decision within the next eighteen months. Done reactively, it consumes capacity that was meant for AI and new digital services. Done deliberately - with OpenShift Virtualization and Ansible Automation Platform bridging legacy workflows - it frees that capacity instead of consuming it.
Production AI, beyond the pilot. Red Hat AI Enterprise and OpenShift AI give operators a credible path from experimentation to governed operation - provided the data, platform, and delivery layers underneath are in order. The Red Hat–NVIDIA collaboration tightens the infrastructure story further. None of this replaces the foundation work; all of it rewards it.
Cloud-native at scale. Horizontal platforms, containerized workloads, automation-first operations. The reference architectures are credible. The integration and identity layers on top of the cloud-native core are where the risk actually lives - and where most programs lose momentum.
This is not a coincidence. These are the same architectural moves our clients across telecommunications, financial services, and large public-sector institutions are making - for the same reasons.
Where Tallence fits
We are new to the Red Hat partner ecosystem. The certification is in progress, not behind us. That choice is deliberate - and it is worth explaining.
For twenty-five years, Tallence has worked on the foundation layer of large European enterprises - BSS/OSS modernization, identity at scale, integration platforms, and industrialized delivery. Across multiple platform generations, the constant has been the same: the technology on top keeps changing; the discipline of making it work at enterprise scale does not.
We chose Red Hat and OpenShift as the platform we want to stand behind for the next decade because, in our assessment, it is the most credible enterprise foundation for the AI-ready era. Red Hat brings the platform. We bring the layer underneath - the data governance, the platform engineering, the industrialization of delivery, the identity fabric - that determines whether the platform decision becomes a business outcome or another expensive abstraction.
„Red Hat has the platform. We bring the foundation work that decides whether the platform decision becomes a business outcome.”
That is the division of labor we see working. It is also the conversation we would like to have at the Summit.
Let us make it concrete
We will be in Atlanta from May 11 to 14, 2026, together with our CEO Frank Moll, Chief Data Officer Marc Seidemann, and Project Manager Noel Gonseth.
Our preferred format is short, diagnostic, and specific: bring the use case that is currently stuck, or the architectural decision you are about to make - and we will give you an honest perspective on what we have seen work, and what has not.

About the author
Matthias Krohnen is Chief Transformation Officer at Tallence AG, with 25 years of experience in transforming enterprise IT landscapes across Europe. His current focus lies at the intersection of platform engineering, data governance, and building enterprise foundations that are ready for AI.