The research consensus is now overwhelming: most AI projects fail not because of the technology, but because of the decisions made before the technology is even selected. RAND Corporation identifies problem misunderstanding as the primary cause of failure. Prosci finds 63% of challenges are human. BCG reports that 60% of companies derive zero value from AI investments.[1][2][3]
Five questions, asked rigorously before any project begins, would prevent the majority of these failures. They are deceptively simple. Answering them rigorously is not.
Are you solving a real problem, or following a trend?
The diagnostic is straightforward. Can you articulate the specific business problem in measurable terms? Do you know the current cost, error rate, or time requirement of the manual process? Is AI genuinely the right tool for this problem, or would something simpler work just as well? Better training, process redesign, conventional automation. Any of these could achieve the same result at lower cost and risk.
"The reason isn't bad technology. It's that companies keep giving AI unconstrained autonomy without understanding its limitations or how it applies to their business needs."Forbes, September 2025[4]
If your answer begins with "we want to use AI to..." rather than "we need to solve...", you are starting from the wrong end. The technology should follow the problem. When it leads, the failure rate approaches 95%.[5]
Is your data actually ready?
Data readiness isn't binary. It's a spectrum that should be assessed across five dimensions: completeness (do you have enough historical coverage?), accuracy (is the data clean and consistent?), accessibility (can you technically get to it, or is it trapped in silos?), governance (do you have classification, lineage tracking, and privacy controls?), and sustainability (can you continue collecting and maintaining it over time?).
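The five dimensions above can be treated as a simple scorecard. The sketch below is illustrative only: the dimension names come from this article, but the 1–5 self-scoring scale and the "weakest link" gating rule are assumptions for the example, not an established methodology.

```python
# Illustrative data-readiness scorecard for the five dimensions named
# above. The 1-5 scale and the weakest-link rule are assumptions made
# for this example, not a standard assessment framework.

DIMENSIONS = ("completeness", "accuracy", "accessibility",
              "governance", "sustainability")

def assess_readiness(scores: dict[str, int]) -> str:
    """Return a readiness verdict from per-dimension self-scores (1-5)."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"unscored dimensions: {missing}")
    # Readiness is gated by the weakest dimension: excellent accuracy
    # does not compensate for data trapped in inaccessible silos.
    weakest = min(scores, key=scores.get)
    if scores[weakest] <= 2:
        return f"not ready: address {weakest} first"
    if scores[weakest] == 3:
        return f"conditionally ready: {weakest} needs a remediation plan"
    return "ready"

print(assess_readiness({
    "completeness": 4, "accuracy": 5, "accessibility": 2,
    "governance": 4, "sustainability": 3,
}))
# → not ready: address accessibility first
```

The weakest-link design choice mirrors the point in the text: the output is a spectrum verdict, not a binary, and it names the dimension to fix first.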
Modern generative AI and RAG-based systems can work with unstructured and incomplete data far more effectively than traditional machine learning approaches. The World Economic Forum notes that modern AI handles messy, real-world data without requiring a complete IT overhaul.[8] But this doesn't mean you skip the assessment. It means you assess differently, understanding what your data can support rather than assuming it supports everything.
The red flag is when an organisation says they'll figure out data quality once the project starts. Data readiness should be assessed in the first two weeks, before any building begins. Discovering that your data is insufficient in week eight is far more expensive than discovering it in week one.
Can your organisation absorb the change?
McKinsey's research underlines how much adoption depends on preparation: 48% of US employees say they would use generative AI more frequently if they received formal training.[9] And the trust gap between executives (who tend to be enthusiastic about AI) and frontline workers (who tend to be sceptical) means that even technically excellent deployments can fail at the adoption stage.
Prosci identifies five factors that predict AI adoption success: democratised AI expertise (building capabilities across the entire workforce, not just IT), individual choice (letting employees choose their own AI tools), internal skills development (which consistently outperforms reliance on external consultants), an experimentation culture (the single most significant factor for adoption), and comprehensive scale (larger, well-resourced initiatives paradoxically succeed more often than cautious, small-scale tests).[2]
Here's the uncomfortable truth: if your organisation has a history of failed technology implementations, poor training programmes, or resistance to process change, AI will not be different. The same organisational dynamics that killed your last digital transformation will kill your AI initiative, unless you address them explicitly before the project begins.
Do you have governance in place, before you build?
EY's February 2026 research shows that 90% of successful financial services firms have established C-suite or board-level accountability for AI outcomes. Eighty-four percent prioritise responsible, ethical AI deployment even at the cost of short-term speed.[11] Governance is not optional. The organisations that treat it as an enabler rather than an obstacle deploy AI faster and more profitably.
Three frameworks should be on every regulated organisation's radar before any AI project begins:
NIST AI Risk Management Framework
A voluntary, comprehensive framework built around four functions: Govern, Map, Measure, and Manage. Specifically designed to be proportionate. Suitable for organisations of all sizes. Free and publicly available.[12]
EU AI Act
Mandatory if you operate in or serve the EU. Risk-based classification system with extraterritorial scope. High-risk AI systems require conformity assessments, documentation, and human oversight. Fully applicable August 2026.[10]
ISO/IEC 42001:2023
The first international AI management system standard. Covers the full AI lifecycle from development to monitoring. Provides a certifiable framework aligned with GDPR, NIST, and the EU AI Act.[13]
The red flag is any AI initiative that plans to "add governance later." In regulated industries, governance is a design constraint. Building a system and then trying to make it compliant is like building a house and then trying to add the foundation.
Are you buying from a specialist, or a generalist?
The traditional build-versus-buy binary is increasingly obsolete. HatchWorks, Gartner, and Forrester now advocate an "own versus orchestrate" model: buy the compliance-heavy foundation (the system of record), and build the differentiating intelligence layer on top.[14]
"The last 20%. Security, governance, observability, performance, reliability, data quality, change management. That's still 80% of the effort."HatchWorks, January 2026[14]
When evaluating a potential partner, the critical differentiators are not the methodology steps. Every firm claims to start with the problem, focus on ROI, and handle compliance. The real differentiators are domain expertise (deep knowledge of your specific regulated industry), track record (production systems, not just pilots), speed (delivery in weeks, not months), and ownership (you own the IP and can operate independently when the engagement ends).
If your potential partner has never deployed a production AI system in your industry, they will learn on your budget. If they cannot point to systems that are running today, not pilots or proofs of concept, their claims about production deployment are theoretical.
The Honest Assessment
Not every organisation needs custom AI. Not every organisation needs it now. The organisations in the 5% who succeed are not the ones who moved fastest. They're the ones who moved most deliberately. With clear problems, ready data, prepared people, embedded governance, and the right partners.
The questions are simple. Answering them with rigour is the hard part. It is also the most important thing you will do before spending a single pound on AI.
Sources & References
- [1] RAND Corporation, "The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed," RRA2680-1, August 2024. rand.org
- [2] Prosci, "Why AI Transformation Fails: Research Insights from 1,100+ Change Professionals," September 2025. prosci.com
- [3] Boston Consulting Group, "Are You Generating Value from AI? The Widening Gap," September 2025. bcg.com
- [4] Forbes, "Why 85% Of AI Projects Fail, and 4 Ways To Be In The 15% That Succeed," September 2025. forbes.com
- [5] MIT Media Lab / Fortune, "MIT report: 95% of generative AI pilots at companies failing," August 2025. fortune.com
- [6] RSM, "Analyzing AI trends in the middle market," 2025. rsmus.com
- [7] Integrate.io, "Data Transformation Challenge Statistics," January 2026. integrate.io
- [8] World Economic Forum, "AI's mid-market moment: Why the next growth revolution will come from the middle," January 2026. weforum.org
- [9] McKinsey & Company, "5 steps for change management in the gen AI age," August 2025. mckinsey.com
- [10] Consultancy.eu, "The EU AI Act: The impact on financial services institutions," January 2025. consultancy.eu
- [11] EY, "Financial Services AI Research: ROI and Governance Maturity," February 2026. ey.com
- [12] NIST, "Artificial Intelligence Risk Management Framework (AI RMF 1.0)," 2023. nist.gov
- [13] ISMS.online, "ISO 42001: Ultimate Implementation Guide 2025," September 2025. isms.online
- [14] HatchWorks, "The Build vs Buy Framework in the Age of AI," January 2026. hatchworks.com