Overview
6-Layer Architecture
The assessment is built on six layers that work together to produce a report calibrated to your specific organisation, industry, and jurisdiction. Each layer is independent — if one encounters an issue, the others continue and a report is still produced.
Questions
Assessment Chapters
The 17 questions are organised into five chapters. Each chapter probes a distinct organisational capability that determines AI readiness.
Scoring
The 6 Scoring Dimensions
Each dimension is scored 0–100 based on your specific answers. The overall maturity score is a weighted average of all six dimensions. Dimension scores are constrained by deterministic anchors — they cannot fall outside mathematically defined ranges, regardless of AI model output.
| Dimension | Weight | What it measures |
|---|---|---|
| Data | 25% | Quality, governance, and accessibility of organisational data for AI use |
| Governance | 20% | Oversight structures, ethical frameworks, and accountability for AI decisions |
| Strategy | 20% | Executive commitment, strategic alignment, and cultural readiness for AI |
| People | 15% | Workforce AI literacy, dedicated capability, and skills development |
| Execution | 15% | Delivery track record, tooling, and operational readiness to run AI systems |
| Regulatory | 5% | Awareness of and preparedness for applicable AI and data regulations |
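As a minimal sketch of the scoring mechanics described above (the anchor ranges and example scores here are invented for illustration; only the weights come from the table):

```python
# Dimension weights from the table above (they sum to 1.0).
WEIGHTS = {
    "data": 0.25,
    "governance": 0.20,
    "strategy": 0.20,
    "people": 0.15,
    "execution": 0.15,
    "regulatory": 0.05,
}

def clamp_to_anchor(score: float, low: float, high: float) -> float:
    """Constrain a dimension score to its deterministic anchor range,
    regardless of what the AI model proposed."""
    return max(low, min(high, score))

def overall_score(dimension_scores: dict[str, float]) -> float:
    """Weighted average of the six dimension scores (each 0-100)."""
    return sum(WEIGHTS[d] * dimension_scores[d] for d in WEIGHTS)

# Example: a model-proposed Data score of 92 is clamped into a
# hypothetical anchor range of 40-70 before the overall average.
scores = {
    "data": clamp_to_anchor(92, 40, 70),  # -> 70
    "governance": 55,
    "strategy": 60,
    "people": 45,
    "execution": 50,
    "regulatory": 80,
}
```

The clamp runs after model output, which is what makes the anchors binding: a model cannot push a dimension above its deterministic ceiling.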
Maturity levels are determined by overall score:
| Level | Score range | What it means |
|---|---|---|
| Nascent | 0–20 | AI is not yet part of organisational thinking. Foundational work is needed before any AI projects. |
| Emerging | 21–40 | Early awareness exists but no structured approach. Key gaps in data, governance, or accountability remain. |
| Developing | 41–60 | Some groundwork is in place. Ready for a focused pilot with proper governance. |
| Established | 61–80 | Strong foundations across most dimensions. Ready to scale AI initiatives. |
| Advanced | 81–100 | Mature AI capability with governance, measurement, and continuous improvement in place. |
Dimension letter grades: A (80–100), B (60–79), C (40–59), D (20–39), F (0–19).
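The score-to-level and score-to-grade mappings above can be sketched compactly (band boundaries are taken from the tables; function names are illustrative):

```python
def maturity_level(score: float) -> str:
    """Map an overall 0-100 score to its maturity level."""
    bands = [(20, "Nascent"), (40, "Emerging"), (60, "Developing"),
             (80, "Established"), (100, "Advanced")]
    for upper, level in bands:
        if score <= upper:
            return level
    raise ValueError("score must be between 0 and 100")

def letter_grade(score: float) -> str:
    """Map a dimension score to its letter grade."""
    if score >= 80:
        return "A"
    if score >= 60:
        return "B"
    if score >= 40:
        return "C"
    if score >= 20:
        return "D"
    return "F"
```

Note that the two scales use different boundaries: a dimension score of 80 earns an A, while an overall score of 80 still sits in the Established band.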
AI models
The Multi-Model Pipeline
No single AI model is good at everything. We use three specialised models in sequence, each doing what it does best. The deterministic scoring engine runs in parallel — AI models cannot override the scores it produces.
All models run with structured output constraints and temperature settings tuned for accuracy. Token budgets are enforced to ensure substance over verbosity.
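A hypothetical sketch of what per-stage constraints could look like (stage names, temperatures, token budgets, and schema filenames are all invented for illustration; the actual pipeline configuration is not published):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StageConfig:
    """Per-stage generation constraints (illustrative values only)."""
    name: str
    temperature: float  # lower = more deterministic output
    max_tokens: int     # enforced budget: substance over verbosity
    json_schema: str    # structured-output constraint for this stage

# Three specialised stages run in sequence; the deterministic
# scoring engine runs in parallel and its scores are final.
PIPELINE = [
    StageConfig("analysis",   temperature=0.2, max_tokens=1200, json_schema="analysis.schema.json"),
    StageConfig("narrative",  temperature=0.5, max_tokens=2000, json_schema="narrative.schema.json"),
    StageConfig("refinement", temperature=0.3, max_tokens=800,  json_schema="report.schema.json"),
]
```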
Compliance
The Regulatory Engine
Regulatory mapping is powered by a live intelligence engine that performs research at the time your report is generated, so the report reflects current regulatory developments rather than static rules that may be months out of date.
The engine maps your jurisdiction and industry to applicable regulations, including:
- EU AI Act — risk classification, prohibited uses, and compliance timelines for EU-based or EU-facing organisations
- GDPR / UK GDPR — automated decision-making, profiling, and data subject rights relevant to AI systems
- Sector-specific requirements — financial services (FCA, MiFID II), healthcare (MDR, MHRA), and others where applicable
- Emerging frameworks — NIST AI Risk Management Framework, US AI Executive Order provisions
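For illustration only, the mapping step can be pictured as a lookup from jurisdiction and industry to candidate regulations. This static table is a stand-in: the real engine performs live research rather than a fixed lookup, and the keys and entries below are hypothetical.

```python
# Illustrative mapping only -- not the engine's actual data.
REGULATION_MAP = {
    ("EU", "financial_services"): ["EU AI Act", "GDPR", "MiFID II"],
    ("UK", "healthcare"):         ["UK GDPR", "MHRA guidance"],
    ("US", "any"):                ["NIST AI RMF", "US AI Executive Order"],
}

def applicable_regulations(jurisdiction: str, industry: str) -> list[str]:
    """Return candidate regulations for a jurisdiction/industry pair,
    falling back to jurisdiction-wide entries when no sector match exists."""
    return (REGULATION_MAP.get((jurisdiction, industry))
            or REGULATION_MAP.get((jurisdiction, "any"))
            or [])
```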
This is not legal advice. Regulatory information is provided for awareness and planning purposes only. Regulations change frequently and their application depends on specific facts. Always consult qualified legal counsel before making compliance decisions.
Limitations
What This Assessment Cannot Tell You
- It is self-reported. Scores are based entirely on your answers. If answers are incomplete, optimistic, or inconsistent, the report reflects that. The consistency checker flags obvious contradictions but cannot verify claims.
- It is not a technical audit. It assesses organisational readiness, not technical infrastructure. It does not examine your actual data, systems, or code.
- It is a point-in-time snapshot. AI readiness changes. We recommend reassessing every 6–12 months, or after significant organisational changes.
- Industry benchmarks require sufficient data. Percentile comparisons are only shown when a minimum number of assessments exist from comparable organisations; in sectors with fewer responses, benchmarks are suppressed.
- Regulatory information is for awareness only. See the note above. Do not rely on this assessment for legal compliance decisions.
- AI models can make errors. The narrative sections are generated by AI and may occasionally contain errors or miss nuances specific to your organisation. The deterministic scoring engine is not subject to this limitation.
Evidence
Evidence Base
The scoring anchors, dimension weights, and maturity descriptors were developed using the following research and frameworks:
- McKinsey Global AI Survey (2023–2024) — AI adoption rates, organisational barriers, and maturity patterns across industries and geographies.
- MIT Sloan Management Review (AI & Business Strategy) — Framework for assessing strategic AI readiness and the relationship between AI investment and business outcomes.
- Gartner AI Maturity Model — Maturity level definitions and dimension weighting principles.
- NIST AI Risk Management Framework (AI RMF 1.0) — Governance and accountability dimension anchors; risk management practices.
- EU AI Act (Regulation 2024/1689) — Risk classification tiers, prohibited uses, and compliance obligations for high-risk AI systems.
- OECD Principles on AI — Responsible AI principles used to anchor the governance and people dimensions.
- Deloitte State of AI in the Enterprise (2023) — People and skills benchmarks; AI literacy measures.
- World Economic Forum AI Governance Alliance — Cross-industry governance practices and regulatory readiness benchmarks.
- Stanford HAI AI Index Report (2024) — Industry adoption data used to calibrate industry context narratives.
Transparency
AI Capabilities
Three specialised AI capabilities work together to produce your report, each applied to the stage it handles best.
Deterministic components (score anchors, consistency checker) are not AI — they are rule-based systems that produce the same output for the same input, every time.
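A minimal sketch of what a rule-based consistency check looks like (question keys and rules here are hypothetical; the production checker's rules are not published):

```python
def check_consistency(answers: dict[str, str]) -> list[str]:
    """Rule-based contradiction flags: same input, same output, every time.
    No AI is involved; each rule is a fixed predicate over the answers."""
    flags = []
    if (answers.get("has_ai_policy") == "no"
            and answers.get("ai_in_production") == "yes"):
        flags.append("AI in production without a stated AI policy")
    if (answers.get("data_quality") == "poor"
            and answers.get("data_ready_for_ai") == "yes"):
        flags.append("Data described as both poor and AI-ready")
    return flags
```

Because every rule is a plain predicate, rerunning the checker on the same answers always yields the same flags, which is what distinguishes these components from the AI-generated narrative.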