Something strange is happening in enterprise AI. Companies are spending more than ever on it. Global AI investment surpassed $200 billion in 2025. Yet the vast majority of projects stall before delivering a single dollar of value. The pilot works. The demo impresses. And then, nothing.

This pattern is not new, but the scale of the waste is accelerating. In August 2025, MIT's Project NANDA published findings that stopped boardrooms cold: up to 95% of generative AI pilots are failing to produce a return on investment.[1] The RAND Corporation, drawing on interviews with 65 experienced data scientists and engineers, found that AI projects fail at roughly double the rate of conventional IT projects, with more than 80% never reaching production.[2]

Boston Consulting Group's September 2025 analysis was equally sobering: 60% of companies deploying AI are deriving zero material value from their investments. Only 5% qualify as "future-built," meaning organisations genuinely generating returns.[3]

$200B: global AI investment in 2025. (Stanford HAI AI Index, 2025)

These are not marginal failures. They represent hundreds of billions in misallocated capital, thousands of abandoned initiatives, and a growing crisis of confidence among the executives who approved the budgets. S&P Global found that 42% of companies had abandoned most of their AI initiatives by 2025, up from just 17% the year before.[4]

The question is no longer whether AI works. It demonstrably does, for the minority who get implementation right. The real question is why the gap between potential and reality remains so vast.
I

The Five Root Causes

RAND's study is the most rigorous attempt to date to identify why AI projects fail. Based on structured interviews with practitioners who have collectively shipped hundreds of AI systems, the researchers isolated five recurring causes of failure.[2]
1

Misunderstanding the problem

The primary cause of AI failure is not technical. It is a fundamental misunderstanding of the business problem. Stakeholders miscommunicate what needs solving, engineers build to specifications that miss the actual need, and the resulting system solves the wrong problem elegantly.

2

Missing or inadequate data

Organisations lack the data necessary to train an effective model. RSM's 2025 middle-market survey found that 41% of executives cite data quality as their single greatest AI implementation challenge.[5]

3

Technology-first thinking

The organisation focuses on deploying the latest AI technology rather than solving a real, measurable problem. This "solution looking for a problem" pattern remains pervasive in 2025 and 2026.

4

Infrastructure gaps

The organisation lacks adequate infrastructure for data management and model deployment. The system works in the lab but cannot function in production environments with legacy systems and real-world data flows.

5

Unsolvable problems

AI is applied to problems that are genuinely too difficult for current technology, or where real-world constraints make a technically possible solution practically impossible.

The technology works.
The implementation does not.
Not one of RAND's five root causes is a failure of the AI itself. Every cause is organisational, strategic, or operational.
THE GAP, pilot to production: 87% of AI projects never reach production; 13% deliver real value. (RAND Corporation, 2024 · BCG, 2025)
II

The Human Factor

Prosci's September 2025 research, drawing on data from 1,107 change management professionals, quantified something practitioners have long suspected: the majority of AI implementation challenges are human, not technical.[6]
63% of all AI implementation challenges stem from human factors, not technical ones. (Prosci, September 2025; 1,107 change professionals surveyed)

Where AI implementation actually fails:
- User proficiency: 38% (22% learning curve · 11% prompt engineering · 6% inadequate training)
- Technical challenges: 16%
- Organisational adoption: 15%
- Data quality: 13%

McKinsey's August 2025 research reinforces this: 48% of US employees would use generative AI more frequently if they simply received formal training.[7] The tools are there. The capability exists. What is missing is the bridge between technology and the people expected to use it.

A technically brilliant AI solution is doomed to fail if employees and regulators cannot trust its outputs.
CIO.com, July 2025[8]

Prosci's data also reveals a significant trust gap between executives (who tend to be highly confident in AI's potential) and frontline workers (who remain sceptical). This asymmetry is dangerous. Leadership greenlights projects that the people responsible for operating them neither understand nor trust. The result is technically functional systems that nobody uses.

III

The Mid-Market Trap

The failure rates are not distributed equally. Mid-market companies face a structurally different challenge from enterprise organisations. They are held to enterprise-level expectations without enterprise-level resources.[9]

RSM's 2025 survey of the middle market found that 92% of executives experienced challenges with AI implementation, 62% said generative AI was harder to implement than expected, and 70% of those using generative AI report needing outside assistance.[5]

For a $300M company, a $5M AI bet is 1.7% of total revenue
That is a massive, high-risk commitment with no guaranteed outcome. Mid-market companies cannot afford the Big 4's typical AI engagement: strategy alone costs $500K to $1M, a pilot is another $1M to $2M, and full implementation runs $3M to $10M.[10]

The result is a structural gap. Enterprise organisations can absorb pilot failures and iterate. Mid-market companies cannot. As CBS News reported in January 2026, these organisations "do not have the luxury of prolonged pilot programmes or unclear returns."[9]

Yet the World Economic Forum estimates that mid-market companies, which collectively represent one-third of private-sector GDP in developed economies, could capture at least $2 trillion of generative AI's projected $6 to $8 trillion in potential value.[11] The opportunity is enormous. The execution pathway is narrow.

Typical Big 4 AI engagement: Strategy $500K – $1M · Pilot $1M – $2M · Implementation $3M – $10M · Total $5M+. (Source: DAS Advanced Systems)
IV

The Build-vs-Buy Delusion

MIT's Project NANDA produced one statistic that should inform every AI investment decision in regulated industries: purchasing AI from specialised vendors succeeds approximately 67% of the time. Internal builds succeed only about 22% of the time.[1]

This three-to-one success ratio is particularly relevant in financial services and other highly regulated sectors, precisely the industries where the consequences of failure extend beyond lost revenue into regulatory exposure and reputational damage.

The data does not argue against building at all. It argues against building alone, without domain expertise, and without the infrastructure that comes from having solved similar problems before. The organisations that succeed are not choosing between "build" and "buy" in the traditional sense. They are making a more sophisticated decision about what to own and what to orchestrate.

We paid $2 million for an AI strategy and pilot. The pilot worked beautifully in their controlled environment. When we tried to implement it with our actual data and systems, we discovered it would require completely rebuilding our data architecture.
Anonymous mid-market CFO, via DAS Advanced Systems[10]

This is the "handoff problem" that DAS Advanced Systems has documented: large consultancies create strategy and run pilots, then disappear when the difficult work of production implementation begins. The pilot succeeds in a controlled environment. Real-world deployment, with legacy systems, messy data, and compliance requirements, is left to the client, who never had the capability to execute it.

V

The Compliance Freeze

In regulated industries, there is an additional and often underestimated barrier: fear. Thirty percent of financial services organisations have banned generative AI internally. Not because it does not work, but because they cannot yet prove that it works safely.[12]

This compliance paralysis is rational. The EU AI Act becomes fully applicable on 2 August 2026. Financial services AI systems frequently fall under the Act's high-risk classification, requiring conformity assessments, technical documentation, risk management systems, and human oversight mechanisms.[13] For organisations that have not begun preparing, the regulatory deadline is now less than six months away.

AI regulatory timeline: 2023, NIST AI RMF · 2024, EU AI Act enters into force; ISO 42001 published · 2 August 2026, EU AI Act full application (roughly five months away) · 2027, OSFI E-23. (Consultancy.eu · OSFI · European Commission)
Governance, properly implemented, is not the obstacle. It is the enabler.

EY's February 2026 financial services research found that 84% of financial services CEOs prioritise responsible, ethical AI deployment even if it slows short-term commercial gains. The 25% who report that their AI initiatives have significantly outperformed expectations are also the organisations with the most mature governance structures.[14]

VI

What the 5% Do Differently

BCG's analysis identifies clear behavioural differences between the 5% of organisations generating real AI value and everyone else. The successful cohort achieves five times the revenue increases and three times the cost reductions of their peers.[3]
1

They start with the problem, not the technology

Successful organisations define the business problem in measurable terms before selecting any AI approach. They ask "what outcome do we need?" rather than "what can AI do?" RAND's researchers recommend committing each product team to solving a specific problem for at least a year.[2]

2

They invest in people, not just platforms

Prosci's data is clear: organisations investing in developing their own people see consistently better results than those depending heavily on outside consultants. The role of an external partner should be to accelerate internal capability, not replace it.[6]

3

They embed governance from day one

Ninety percent of successful financial services firms have established C-suite or board-level accountability for AI outcomes. Governance is not a phase that follows deployment. It is a design constraint that shapes every decision from the outset.[14]

4

They partner with domain specialists

The MIT data showing 67% success with specialised vendors versus 22% for internal builds reflects a deeper truth: regulated industries need partners who already understand the compliance landscape, the data structures, and the operational workflows. Not generalists learning on the client's budget.[1]

5

They use stage-gate investment decisions

Rather than committing large budgets upfront, successful organisations use gate-based approaches where each phase must demonstrate measurable value before the next is funded. IBM's consulting practice recommends asking "should we spend more money?" at each gate, not "is this good enough?"[15]

THE 5% DIFFERENCE: 95% of companies capture marginal or zero AI value; the 5% who are "future-built" achieve 5× the revenue increases and 3× the cost reductions. (Boston Consulting Group, September 2025)
Key figures:
- 95% of generative AI pilots fail to produce a return on investment. (MIT Project NANDA, Aug 2025)
- 42% of companies had abandoned most AI initiatives by 2025, up from 17% the year before. (S&P Global, 2025)
- 60% of companies deploying AI derive zero material value from their investments. (BCG, Sep 2025)
- 63% of all AI implementation challenges stem from human factors, not technical ones. (Prosci, Sep 2025)
- 67% success rate when purchasing AI from specialised vendors vs. 22% for internal builds. (MIT Project NANDA, 2025)
- 5% of organisations qualify as "future-built," genuinely generating returns from AI. (BCG, Sep 2025)

The Path Forward

The question for any organisation considering AI in 2026 is not whether the technology works. It is whether they are prepared to implement it in a way that actually delivers value, with the right problem definition, the right data foundation, the right people investment, and the right governance framework.

For mid-market companies in regulated industries, the stakes are particularly high. They cannot afford enterprise-scale failure. But they also cannot afford to wait. The cost of AI inference is dropping by a factor of ten every year.[11] The technology is becoming more accessible. The regulatory landscape is crystallising. The organisations that act now, with discipline rather than haste, will define the next decade of their industries.

87% fail because they skip the hard work. The 5% succeed because they do it first.

Sources & References

1. MIT Media Lab / Fortune, "MIT report: 95% of generative AI pilots at companies failing," August 2025. fortune.com
2. RAND Corporation, "The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed," Research Report RRA2680-1, August 2024. rand.org
3. Boston Consulting Group, "Are You Generating Value from AI? The Widening Gap," September 2025. bcg.com
4. Forbes / S&P Global, "Why 85% Of AI Projects Fail," September 2025. forbes.com
5. RSM, "Analysing AI trends in the middle market," 2025. rsmus.com
6. Prosci, "Why AI Transformation Fails: Research Insights from 1,100+ Change Professionals," September 2025. prosci.com
7. McKinsey & Company, "5 steps for change management in the gen AI age," August 2025. mckinsey.com
8. CIO.com, "Designing for humans: Why most enterprise adoptions of AI fail," July 2025. cio.com
9. CBS News / JPMorgan, "Why the mid-market will determine AI's economic impact," January 2026. cbsnews.com
10. DAS Advanced Systems, "Why Big 4 Consulting Firms Are Failing Mid-Size Companies with AI." dasadvancedsystems.com
11. World Economic Forum, "AI's mid-market moment," January 2026. weforum.org
12. Stardog, "Why generative AI projects fail in highly regulated industries," May 2024. stardog.com
13. Consultancy.eu, "The EU AI Act: The impact on financial services institutions," January 2025. consultancy.eu
14. EY, "Financial Services AI Research: ROI and Governance Maturity," February 2026. ey.com
15. IBM Consulting, "Measuring AI outcomes: 7-step stage-gating framework." ibm.com