Introduction to AI Adoption
When most teams talk about “AI or data risk,” the conversation always drifts toward accuracy, data quality, or scaling infrastructure. Important, sure. But almost never the real reason projects collapse. The truth is more uncomfortable: AI and data projects die not in the lab, but in the org chart. I’ve seen this firsthand in projects that looked technically flawless but never landed in the business.
The Real Reason AI Projects Fail
Gartner’s research points the same way: lack of alignment, change management, and ownership are the real killers. If you need more proof, just look at IBM Watson Health. The system was hailed as a revolution in oncology and pitched as a doctor’s assistant that could recommend treatments with near-human accuracy. The technology itself was impressive, but hospitals struggled to integrate it. Doctors distrusted the recommendations, workflows were not redesigned, and leadership often treated it as a bolt-on experiment rather than a strategy. By 2022, IBM had sold Watson Health for parts: not because the models failed in the lab, but because the adoption gap was never closed in practice.
Case 1: A Model Nobody Used
A few years ago, I worked on a major data initiative: consolidating reporting frameworks to create a single source of truth. On paper, it had everything: a strong business case, clear demand for harmonizing data, and even the CEO’s backing. But in practice, almost nobody used it for its designed purpose. Over 90% of reports were still being built in Excel after the product launched. Looking back, the problem was not technical. The BI models worked fine, and the data pipelines delivered what we promised. We had even designed for very high accuracy by keeping humans in the loop with the final say. The real challenge was sociology: teams weren’t ready to change their reporting habits, and we hadn’t integrated the product seamlessly into their day-to-day flow. Leaders weren’t aligned on what “success” actually meant. Legacy ways of working kept pulling people back into old workflows.
Case 2: An Imperfect Start That Scaled
Not long after, I saw the opposite play out. For another AI product I helped build, a transformative recommendation-system project, the launch started rough. The model and the architecture were not the best they could be, but priority was given to deploying the MVP, driving adoption, and gathering proof in multiple markets. This time, leadership made adoption non-optional. Accuracy was tracked alongside adoption and revenue, and everyone was aligned on the KPIs. Rollouts were phased so one shaky market did not doom the whole initiative. Training and incentives were built in. The strategy was clear: learn and iterate fast on the MVP, then scale it globally. Because the organization persisted and treated it as strategy, not a side experiment, adoption grew. The product scaled, proved its value, and was optimized in the months that followed. Within a year, it was live in a few markets and contributed over ten million euros in uplift.
The A.D.O.P.T. Framework for Closing the AI Adoption Gap
I have come to think of the core challenge as the Adoption Gap: the yawning space between what a model can do and what an organization is willing to change. Closing this gap is critical. Accuracy without adoption is a Ferrari on dirt roads: impressive on paper, but useless in practice. AI projects succeed not because of perfect models, but because organizations embrace them. Based on my experience, I have started asking five fundamental questions before and during execution (it’s never too late). These questions correspond directly to the five pillars of the A.D.O.P.T. framework: a set of questions I clarify, align on, and re-check throughout the project, and use to escalate for help when a pillar slips. A minimal scorecard sketch follows the list below.
- Alignment (A): Are we aligned on what success actually means? Without shared metrics, teams with conflicting definitions of success will pull the project apart.
- Design for People (D): Why will people use this? The solution must be designed for real-world users and embedded in their existing workflows to avoid the friction that kills adoption.
- Ownership & Persistence (O): Who owns adoption, and is leadership treating this as a long-term strategy rather than a flashy experiment? Without sustained ownership and commitment, even promising pilots will be abandoned.
- Precision with Purpose (P): Is the model good enough to solve a real problem and deliver value now? The goal is to be fit for purpose, not to chase marginal gains in accuracy at the expense of ROI.
- Transparency (T): Is progress communicated well and openly? Highlighting successes, acknowledging limitations, and sharing results keeps teams aligned and leadership engaged.
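To make the framework concrete, here is a minimal sketch of how the five questions could be tracked as a living scorecard. The pillar names and questions come straight from the list above; everything else (the Pillar dataclass, the owners, the green/amber/red statuses, and the escalation rule) is an illustrative assumption, not a prescribed tool.

```python
# A minimal, illustrative A.D.O.P.T. scorecard. The five pillars come from
# the framework above; statuses, owners, and the escalation rule are
# assumptions for the sake of the sketch.
from dataclasses import dataclass

@dataclass
class Pillar:
    name: str      # e.g. "Alignment"
    question: str  # the question the team must keep answering
    owner: str     # who is accountable for keeping it green
    status: str    # "green", "amber", or "red"

scorecard = [
    Pillar("Alignment", "Are we aligned on what success means?", "PM", "green"),
    Pillar("Design for People", "Why will people use this?", "UX lead", "amber"),
    Pillar("Ownership & Persistence", "Who owns adoption long-term?", "Sponsor", "green"),
    Pillar("Precision with Purpose", "Is the model good enough to deliver value now?", "DS lead", "green"),
    Pillar("Transparency", "Is progress communicated openly?", "PM", "red"),
]

# Escalate whenever any pillar is red: one failing pillar is enough to
# stall adoption, so it should surface in the next leadership review.
for p in (p for p in scorecard if p.status == "red"):
    print(f"Escalate: {p.name} ({p.owner}) - {p.question}")
```

The point of keeping it this simple is that the scorecard gets re-checked at every review, not filed away after kickoff.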
ADOPT in the Real World
Success When All 5 Pillars Line Up
When I helped launch a recommendation system, the initial model was basic and the end-to-end architecture wasn’t the best it could have been. The main focus was on rollout and adoption:
- Alignment: KPIs were explicit, adoption rates and revenue uplift rather than model accuracy alone (see the metrics sketch after this list).
- Design for People: Recommendations were embedded directly into existing shopping flows, so users didn’t need to change behavior.
- Ownership & Persistence: A clear owner tracked adoption milestones, and leadership backed the rollout across multiple markets, even when one pilot was shaky.
- Precision with Purpose: It shipped at “good enough,” proving ROI first and optimizing and expanding later.
- Transparency: Phased results were shared openly, which kept teams aligned and leadership engaged.
Because adoption was treated as strategy and not a side experiment, the system scaled globally and delivered measurable value.
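As a hedged illustration of what “tracking accuracy alongside adoption and revenue” can look like in practice, here is a small sketch that reports adoption rate and uplift per market next to model accuracy. The market names, figures, and column names are invented for the example; a real rollout would pull them from product analytics.

```python
# Illustrative rollout data: every market/phase reports adoption and
# revenue next to accuracy, so accuracy never travels alone. All numbers
# below are made up for the sketch.
import pandas as pd

rollout = pd.DataFrame({
    "market":         ["DE", "DE", "FR", "FR"],
    "phase":          ["pilot", "scale", "pilot", "scale"],
    "eligible_users": [10_000, 80_000, 12_000, 90_000],
    "active_users":   [1_500, 36_000, 900, 31_500],  # actually used the recommendations
    "uplift_eur":     [40_000, 1_200_000, 15_000, 950_000],
    "model_accuracy": [0.71, 0.74, 0.69, 0.73],
})

# Adoption rate is the KPI leadership aligned on: the share of eligible
# users who actually use the feature, reported next to accuracy.
rollout["adoption_rate"] = rollout["active_users"] / rollout["eligible_users"]

summary = rollout.groupby("market")[["adoption_rate", "uplift_eur", "model_accuracy"]].mean()
print(summary.round(3))
```

Reported this way, a shaky pilot market (like FR above, at 7.5% adoption) is visible early, which is exactly what phased rollouts are meant to catch.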
Where It Falls Apart
- Alignment: DeepMind’s Streams kidney-injury tool, built for the NHS, was technically strong, but success was never defined the same way for doctors, regulators, and patients. Without shared KPIs, adoption collapsed despite accuracy.
- Design for People: Microsoft Tay showed what happens when AI isn’t designed for real-world users. Launched on Twitter without safeguards, it was hijacked within hours and shut down.
- Ownership & Persistence: As mentioned above, IBM Watson for Oncology lacked clear ownership in hospitals. Doctors distrusted it, admins didn’t enforce it, and IBM leadership abandoned it after shaky pilots.
- Precision with Purpose: Zillow Offers chased predictive accuracy in home prices but ignored workflow fit and seller adoption. Hundreds of millions of dollars were lost when the model didn’t translate to practice.
- Transparency: COMPAS, a recidivism risk scoring tool in US courts, operated as a black box. When bias concerns surfaced, lack of transparency destroyed trust and legitimacy.
Conclusion
The biggest risk in AI isn’t accuracy; it’s adoption. Models can be built in weeks, but changing how people work takes years. That’s why even brilliant models fail when just one pillar of ADOPT is missing, and why imperfect ones succeed when all five are in place. The rule of thumb is simple: PMs make adoption visible with data. Leaders make adoption real with persistence. Miss either, and the gap stays open. So next time someone asks, “How accurate is the model?”, ask instead: “Are we ready to adopt it?”
FAQs
Q: What is the main reason AI projects fail?
A: The main reason AI projects fail is not technical; it is the adoption gap, the space between what a model can do and what an organization is willing to change.
Q: What is the ADOPT framework?
A: The ADOPT framework is a set of five fundamental questions to ask before and during the execution of an AI project to ensure adoption: Alignment, Design for People, Ownership & Persistence, Precision with Purpose, and Transparency.
Q: Why is alignment important in AI projects?
A: Alignment is important because it ensures that all teams and stakeholders are working towards the same goals and have a shared understanding of what success means.
Q: Can AI projects succeed without perfect models?
A: Yes, AI projects can succeed without perfect models if the organization is willing to adopt and use the model, and if the model is good enough to solve a real problem and deliver value.









