Something strange is happening in enterprise AI. Companies are spending more than ever on it. Global AI investment surpassed $200 billion in 2025. Yet the vast majority of projects stall before delivering a single dollar of value. The pilot works. The demo impresses. And then, nothing.
This pattern is not new, but the scale of the waste is accelerating. In August 2025, MIT's Project NANDA published findings that stopped boardrooms cold: up to 95% of generative AI pilots are failing to produce a return on investment.[1] The RAND Corporation, drawing on interviews with 65 experienced data scientists and engineers, found that AI projects fail at roughly double the rate of conventional IT projects, with more than 80% never reaching production.[2]
Boston Consulting Group's September 2025 analysis was equally sobering: 60% of companies deploying AI are deriving zero material value from their investments. Only 5% qualify as "future-built," meaning organisations genuinely generating returns.[3]
These are not marginal failures. They represent hundreds of billions in misallocated capital, thousands of abandoned initiatives, and a growing crisis of confidence among the executives who approved the budgets. S&P Global found that 42% of companies had abandoned most of their AI initiatives by 2025, up from just 17% the year before.[4]
The Five Root Causes
Misunderstanding the problem
The primary cause of AI failure is not technical. It is a fundamental misunderstanding of the business problem. Stakeholders miscommunicate what needs solving, engineers build to specifications that miss the actual need, and the resulting system solves the wrong problem elegantly.
Missing or inadequate data
Organisations lack the data necessary to train an effective model. RSM's 2025 middle-market survey found that 41% of executives cite data quality as their single greatest AI implementation challenge.[5]
Technology-first thinking
The organisation focuses on deploying the latest AI technology rather than solving a real, measurable problem. This "solution looking for a problem" pattern is everywhere in 2025 and 2026.
Infrastructure gaps
The organisation lacks adequate infrastructure for data management and model deployment. The system works in the lab but cannot function in production environments with legacy systems and real-world data flows.
Unsolvable problems
AI is applied to problems that are genuinely too difficult for current technology, or where real-world constraints make a technically possible solution practically impossible.
Across all five causes, the pattern is the same: the technology works. The implementation does not.
The Human Factor
McKinsey's August 2025 research points to the human side of the problem: 48% of US employees would use generative AI more frequently if they simply received formal training.[7] The tools are there. The capability exists. What is missing is the bridge between the technology and the people expected to use it.
"A technically brilliant AI solution is doomed to fail if employees and regulators cannot trust its outputs." (CIO.com, July 2025)[8]
Prosci's research, drawing on more than 1,100 change professionals, reveals a significant trust gap between executives (who tend to be highly confident in AI's potential) and frontline workers (who remain sceptical).[6] This asymmetry is dangerous. Leadership greenlights projects that the people responsible for operating them neither understand nor trust. The result is technically functional systems that nobody uses.
The Mid-Market Trap
RSM's 2025 survey of the middle market found that 92% of executives experienced challenges with AI implementation, 62% said generative AI was harder to implement than expected, and 70% of those using generative AI report needing outside assistance.[5]
The result is a structural gap. Enterprise organisations can absorb pilot failures and iterate. Mid-market companies cannot. As CBS News reported in January 2026, these organisations "do not have the luxury of prolonged pilot programmes or unclear returns."[9]
Yet the World Economic Forum estimates that mid-market companies, which collectively represent one-third of private-sector GDP in developed economies, could capture at least $2 trillion of generative AI's projected $6 to $8 trillion in potential value.[11] The opportunity is enormous. The execution pathway is narrow.
The Build-vs-Buy Delusion
MIT's research found that AI tools procured from specialised vendors succeed roughly 67% of the time, while internally built systems succeed only 22% of the time.[1] This three-to-one success ratio is particularly relevant in financial services and other highly regulated sectors, precisely the industries where the consequences of failure extend beyond lost revenue into regulatory exposure and reputational damage.
The data does not argue against building at all. It argues against building alone, without domain expertise, and without the infrastructure that comes from having solved similar problems before. The organisations that succeed are not choosing between "build" and "buy" in the traditional sense. They are making a more sophisticated decision about what to own and what to orchestrate.
"We paid $2 million for an AI strategy and pilot. The pilot worked beautifully in their controlled environment. When we tried to implement it with our actual data and systems, we discovered it would require completely rebuilding our data architecture." (Anonymous mid-market CFO, via DAS Advanced Systems)[10]
This is the "handoff problem" that DAS Advanced Systems has documented: large consultancies create strategy and run pilots, then disappear when the difficult work of production implementation begins. The pilot succeeds in a controlled environment. Real-world deployment, with legacy systems, messy data, and compliance requirements, is left to the client, who never had the capability to execute it.
The Compliance Freeze
Many regulated organisations have frozen AI deployment until the rules settle, and this compliance paralysis is rational. The EU AI Act becomes fully applicable on 2 August 2026. Financial services AI systems frequently fall under the Act's high-risk classification, requiring conformity assessments, technical documentation, risk management systems, and human oversight mechanisms.[13] For organisations that have not begun preparing, the regulatory deadline is now less than six months away.
EY's February 2026 financial services research found that 84% of financial services CEOs prioritise responsible, ethical AI deployment even if it slows short-term commercial gains. The 25% who report that their AI initiatives have significantly outperformed expectations are also the organisations with the most mature governance structures.[14]
What the 5% Do Differently
They start with the problem, not the technology
Successful organisations define the business problem in measurable terms before selecting any AI approach. They ask "what outcome do we need?" rather than "what can AI do?" RAND's researchers recommend committing each product team to solving a specific problem for at least a year.[2]
They invest in people, not just platforms
Prosci's data is clear: organisations investing in developing their own people see consistently better results than those depending heavily on outside consultants. The role of an external partner should be to accelerate internal capability, not replace it.[6]
They embed governance from day one
Ninety percent of successful financial services firms have established C-suite or board-level accountability for AI outcomes. Governance is not a phase that follows deployment. It is a design constraint that shapes every decision from the outset.[14]
They partner with domain specialists
The MIT data showing 67% success with specialised vendors versus 22% for internal builds reflects a deeper truth: regulated industries need partners who already understand the compliance landscape, the data structures, and the operational workflows. Not generalists learning on the client's budget.[1]
They use stage-gate investment decisions
Rather than committing large budgets upfront, successful organisations use gate-based approaches where each phase must demonstrate measurable value before the next is funded. IBM's consulting practice recommends asking "should we spend more money?" at each gate, not "is this good enough?"[15]
The Path Forward
The question for any organisation considering AI in 2026 is not whether the technology works. It is whether they are prepared to implement it in a way that actually delivers value, with the right problem definition, the right data foundation, the right people investment, and the right governance framework.
For mid-market companies in regulated industries, the stakes are particularly high. They cannot afford enterprise-scale failure. But they also cannot afford to wait. The cost of AI inference is dropping by a factor of ten every year.[11] The technology is becoming more accessible. The regulatory landscape is crystallising. The organisations that act now, with discipline rather than haste, will define the next decade of their industries.
Sources & References
- [1] MIT Media Lab / Fortune, "MIT report: 95% of generative AI pilots at companies failing," August 2025. fortune.com
- [2] RAND Corporation, "The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed," Research Report RRA2680-1, August 2024. rand.org
- [3] Boston Consulting Group, "Are You Generating Value from AI? The Widening Gap," September 2025. bcg.com
- [4] Forbes / S&P Global, "Why 85% Of AI Projects Fail," September 2025. forbes.com
- [5] RSM, "Analysing AI trends in the middle market," 2025. rsmus.com
- [6] Prosci, "Why AI Transformation Fails: Research Insights from 1,100+ Change Professionals," September 2025. prosci.com
- [7] McKinsey & Company, "5 steps for change management in the gen AI age," August 2025. mckinsey.com
- [8] CIO.com, "Designing for humans: Why most enterprise adoptions of AI fail," July 2025. cio.com
- [9] CBS News / JPMorgan, "Why the mid-market will determine AI's economic impact," January 2026. cbsnews.com
- [10] DAS Advanced Systems, "Why Big 4 Consulting Firms Are Failing Mid-Size Companies with AI." dasadvancedsystems.com
- [11] World Economic Forum, "AI's mid-market moment," January 2026. weforum.org
- [12] Stardog, "Why generative AI projects fail in highly regulated industries," May 2024. stardog.com
- [13] Consultancy.eu, "The EU AI Act: The impact on financial services institutions," January 2025. consultancy.eu
- [14] EY, "Financial Services AI Research: ROI and Governance Maturity," February 2026. ey.com
- [15] IBM Consulting, "Measuring AI outcomes: 7-step stage-gating framework." ibm.com