The research consensus is now overwhelming: most AI projects fail not because of the technology, but because of the decisions made before the technology is even selected. RAND Corporation identifies problem misunderstanding as the primary cause of failure. Prosci finds 63% of challenges are human. BCG reports that 60% of companies derive zero value from AI investments.[1][2][3]

Five questions, asked rigorously before any project begins, would prevent the majority of these failures. They are deceptively simple. Answering them rigorously is not.

RIGOUR

I

Are you solving a real problem, or following a trend?

RAND Corporation's research, based on 65 interviews with experienced data scientists and engineers, identifies a fundamental misunderstanding of the business problem as the single most common cause of AI project failure.[1] The researchers recommend that leaders commit each product team to solving a specific problem for at least a year, a discipline that directly challenges the pressure many organisations feel to demonstrate AI adoption quickly.

The diagnostic is straightforward. Can you articulate the specific business problem in measurable terms? Do you know the current cost, error rate, or time requirement of the manual process? Is AI genuinely the right tool for this problem, or would something simpler work just as well? Better training, process redesign, conventional automation. Any of these could achieve the same result at lower cost and risk.

"The reason isn't bad technology. It's that companies keep giving AI unconstrained autonomy without understanding its limitations or how it applies to their business needs."
Forbes, September 2025[4]

If your answer begins with "we want to use AI to..." rather than "we need to solve...", you are starting from the wrong end. The technology should follow the problem. When it leads, the failure rate approaches 95%.[5]

95%
of generative AI pilots fail
MIT Media Lab & Fortune, August 2025

II

Is your data actually ready?

Forty-one percent of mid-market executives cite data quality as their single greatest AI implementation challenge.[6] RAND identifies inadequate data as the second root cause of AI failure. And Integrate.io reports that 65% of data transformations fail despite significant investment.[7]

Data readiness isn't binary. It's a spectrum that should be assessed across five dimensions: completeness (do you have enough historical coverage?), accuracy (is the data clean and consistent?), accessibility (can you technically get to it, or is it trapped in silos?), governance (do you have classification, lineage tracking, and privacy controls?), and sustainability (can you continue collecting and maintaining it over time?).
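The five dimensions above can be turned into a simple screening gate. Here is a minimal sketch in Python; the class, the example scores, and the 0.5 threshold are illustrative assumptions, not part of the cited research:

```python
# Illustrative data-readiness screen across the five dimensions.
# All scores and thresholds are hypothetical placeholders for a real audit.

from dataclasses import dataclass


@dataclass
class ReadinessAssessment:
    completeness: float    # 0-1: share of required historical coverage present
    accuracy: float        # 0-1: share of records passing validation rules
    accessibility: float   # 0-1: share of sources reachable without manual export
    governance: float      # 0-1: classification, lineage, privacy controls in place
    sustainability: float  # 0-1: confidence that collection can continue over time

    def weakest_dimension(self) -> tuple[str, float]:
        """Return the lowest-scoring dimension and its score."""
        scores = vars(self)
        name = min(scores, key=scores.get)
        return name, scores[name]

    def ready(self, threshold: float = 0.5) -> bool:
        # Gate on the minimum, not the average: a single weak dimension
        # sinks the project, and a high mean can hide a fatal gap.
        return self.weakest_dimension()[1] >= threshold


assessment = ReadinessAssessment(
    completeness=0.8, accuracy=0.7, accessibility=0.3,
    governance=0.6, sustainability=0.7,
)
print(assessment.weakest_dimension())  # ('accessibility', 0.3)
print(assessment.ready())              # False: data trapped in silos
```

The design choice worth noting is gating on the weakest dimension rather than an overall average, which mirrors the point that readiness is a spectrum assessed dimension by dimension, not a single pass/fail number.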

Modern AI handles messy data.
That doesn't mean you skip the assessment.

Modern generative AI and RAG-based systems can work with unstructured and incomplete data far more effectively than traditional machine learning approaches. The World Economic Forum notes that modern AI handles messy, real-world data without requiring a complete IT overhaul.[8] But this doesn't mean you skip the assessment. It means you assess differently, understanding what your data can support rather than assuming it supports everything.

The red flag is when an organisation says they'll figure out data quality once the project starts. Data readiness should be assessed in the first two weeks, before any building begins. Discovering that your data is insufficient in week eight is far more expensive than discovering it in week one.

III

Can your organisation absorb the change?

Prosci's study of 1,107 change management professionals produced the most important finding in AI implementation research: 63% of all AI implementation challenges stem from human factors, not technical ones. User proficiency alone, the ability of people to actually use the system, accounts for 38% of all failure points.[2]

McKinsey reinforces this: 48% of US employees would use generative AI more frequently if they received formal training.[9] And the trust gap between executives (who tend to be enthusiastic about AI) and frontline workers (who tend to be sceptical) means that even technically excellent deployments can fail at the adoption stage.

89%
recognise the need for AI skills; only 6% have begun building them
McKinsey & Company, 2025

Prosci identifies five factors that predict AI adoption success: democratised AI expertise (building capabilities across the entire workforce, not just IT), individual choice (letting employees choose their own AI tools), internal skills development (which consistently outperforms reliance on external consultants), an experimentation culture (the single most significant factor for adoption), and comprehensive scale (larger, well-resourced initiatives paradoxically succeed more often than cautious, small-scale tests).[2]

Here's the uncomfortable truth: if your organisation has a history of failed technology implementations, poor training programmes, or resistance to process change, AI will not be different. The same organisational dynamics that killed your last digital transformation will kill your AI initiative, unless you address them explicitly before the project begins.

IV

Do you have governance in place, before you build?

The EU AI Act becomes fully applicable on 2 August 2026. For financial services, insurance, and other regulated sectors, most AI applications fall under high-risk classification. This requires conformity assessments, technical documentation, risk management systems, and human oversight.[10]

EY's February 2026 research shows that 90% of successful financial services firms have established C-suite or board-level accountability for AI outcomes. Eighty-four percent prioritise responsible, ethical AI deployment even at the cost of short-term speed.[11] Governance is not optional. The organisations that treat it as an enabler rather than an obstacle deploy AI faster and more profitably.

Typical mid-market AI readiness across five dimensions:
Problem Clarity: 68%
Data Readiness: 42%
Organisational Capability: 51%
Governance Maturity: 34%
Partner Fit Assessment: 47%

Three frameworks should be on every regulated organisation's radar before any AI project begins:

1

NIST AI Risk Management Framework

A voluntary, comprehensive framework built around four functions: Govern, Map, Measure, and Manage. Specifically designed to be proportionate. Suitable for organisations of all sizes. Free and publicly available.[12]

2

EU AI Act

Mandatory if you operate in or serve the EU. Risk-based classification system with extraterritorial scope. High-risk AI systems require conformity assessments, documentation, and human oversight. Fully applicable August 2026.[10]

3

ISO/IEC 42001:2023

The first international AI management system standard. Covers the full AI lifecycle from development to monitoring. Provides a certifiable framework aligned with GDPR, NIST, and the EU AI Act.[13]

The red flag is any AI initiative that plans to "add governance later." In regulated industries, governance is a design constraint. Building a system and then trying to make it compliant is like building a house and then trying to add the foundation.

V

Are you buying from a specialist, or a generalist?

MIT's Project NANDA produced the clearest evidence on this question: purchasing AI from specialised vendors succeeds approximately 67% of the time. Internal builds succeed about 22% of the time. This gap is particularly pronounced in regulated industries, where domain expertise matters. Understanding compliance requirements, data structures, and operational workflows isn't a nice-to-have. It's a prerequisite for production deployment.[5]

The traditional build-versus-buy binary is also increasingly obsolete. HatchWorks, Gartner, and Forrester now advocate an "own versus orchestrate" model: buy the compliance-heavy foundation (the system of record), and build the differentiating intelligence layer on top.[14]

"The last 20%. Security, governance, observability, performance, reliability, data quality, change management. That's still 80% of the effort."
HatchWorks, January 2026[14]

When evaluating a potential partner, the critical differentiators are not the methodology steps. Every firm claims to start with the problem, focus on ROI, and handle compliance. The real differentiators are domain expertise (deep knowledge of your specific regulated industry), track record (production systems, not just pilots), speed (delivery in weeks, not months), and ownership (you own the IP and can operate independently when the engagement ends).

If your potential partner has never deployed a production AI system in your industry, they will learn on your budget. If they cannot point to systems that are running today, not pilots or proofs of concept, their claims about production deployment are theoretical.

95%
of pilots fail
MIT Media Lab
63%
of challenges are human
Prosci Research
41%
cite data quality as obstacle
RSM Mid-Market Study
89% / 6%
recognise need vs. begun
McKinsey 2025
67%
specialist vendor success
MIT Project NANDA
65%
data transformations fail
Integrate.io 2026
VI

The Honest Assessment

These five questions function as a self-diagnostic. If you can answer all five confidently, with evidence and not aspiration, you're in a strong position to begin an AI project. If you cannot, that's not a failure. It's valuable information that prevents the far larger failure of a misaligned, underprepared implementation.

Not every organisation needs custom AI. Not every organisation needs it now. The organisations in the 5% who succeed are not the ones who moved fastest. They're the ones who moved most deliberately. With clear problems, ready data, prepared people, embedded governance, and the right partners.

5%
who succeed move most deliberately
Muuvment Labs Analysis

BEGIN

The questions are simple. Answering them with rigour is the hard part. It is also the most important thing you will do before spending a single pound on AI.

Sources & References

  1. RAND Corporation, "The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed," RRA2680-1, August 2024. rand.org
  2. Prosci, "Why AI Transformation Fails: Research Insights from 1,100+ Change Professionals," September 2025. prosci.com
  3. Boston Consulting Group, "Are You Generating Value from AI? The Widening Gap," September 2025. bcg.com
  4. Forbes, "Why 85% Of AI Projects Fail, and 4 Ways To Be In The 15% That Succeed," September 2025. forbes.com
  5. MIT Media Lab / Fortune, "MIT report: 95% of generative AI pilots at companies failing," August 2025. fortune.com
  6. RSM, "Analyzing AI trends in the middle market," 2025. rsmus.com
  7. Integrate.io, "Data Transformation Challenge Statistics," January 2026. integrate.io
  8. World Economic Forum, "AI's mid-market moment: Why the next growth revolution will come from the middle," January 2026. weforum.org
  9. McKinsey & Company, "5 steps for change management in the gen AI age," August 2025. mckinsey.com
  10. Consultancy.eu, "The EU AI Act: The impact on financial services institutions," January 2025. consultancy.eu
  11. EY, "Financial Services AI Research: ROI and Governance Maturity," February 2026. ey.com
  12. NIST, "Artificial Intelligence Risk Management Framework (AI RMF 1.0)," 2023. nist.gov
  13. ISMS.online, "ISO 42001: Ultimate Implementation Guide 2025," September 2025. isms.online
  14. HatchWorks, "The Build vs Buy Framework in the Age of AI," January 2026. hatchworks.com