Most mid-market companies starting an AI programme make the same mistake: they skip the diagnostic. They pick a use case, select a vendor, and start spending. The readiness gaps surface later, expensively, once the pilot has already failed or the compliance team has raised its hand.

We built the Muuvment AI Readiness Assessment to solve that problem. It is a free, multi-agent diagnostic that researches your organisation, scores your AI readiness across six weighted dimensions, maps your regulatory obligations, and produces a sequenced 90-day roadmap. It takes 10 to 15 minutes. It requires no account, no sales call, and no obligation.

This article explains what the assessment does, why we built it this way, and what you get when you complete it.

I. Why We Built It

The diagnostic tools available to mid-market companies before starting an AI programme were, until recently, almost entirely useless: short quizzes with generic outputs, vendor-sponsored assessments designed to create pipeline, or consulting engagements costing upwards of $40,000 on a six-week timeline. None of these serves a company that needs a clear-eyed read on its position before committing resources.

We wanted something genuinely useful: a tool that treats the question of AI readiness with the same rigour a good implementation partner would, but at zero cost and in a fraction of the time. The constraint forced discipline. Every question had to earn its place. Every output had to be actionable.

The question is not whether your company should use AI. The question is whether you are ready to use it well, and if not, what specifically needs to change first.

The assessment is also the foundation of how we work with clients. When a company approaches Muuvment Labs, the assessment gives us a shared, structured read on their position before the first conversation. It replaces weeks of intake with a common language and a set of scores we can both work from.

II. How It Works: The Multi-Agent Pipeline

The assessment does not rely on self-report alone. A multi-agent pipeline cross-references your answers against your organisation's public footprint, scores you against deterministic anchors, and produces structured outputs no single model could generate reliably on its own.

The pipeline runs three phases in sequence.

1. Company research via Perplexity Sonar

Before scoring begins, Perplexity Sonar searches your company's public footprint: website, job postings, press releases, LinkedIn, regulatory filings where applicable. This gives the scoring model context it cannot derive from your answers alone, and it flags when your stated position differs meaningfully from your public signal.

2. Readiness scoring via Claude

Claude scores your responses across six dimensions using deterministic anchors. Each answer maps to a maturity level. Consistency checks flag contradictions. Industry-specific benchmarks adjust the score for your sector. Financial Services organisations receive additional questions covering model risk management, algorithmic fairness, and regulatory explainability obligations specific to that context.

3. Roadmap generation via Claude

A second Claude pass synthesises your scores, your regulatory picture, and your stated priorities into a sequenced 90-day roadmap. The roadmap identifies your highest-leverage first moves, the gaps that need to close before you can proceed, and the quick wins available within your current posture. It is personalised to your company name, jurisdiction, and industry.

Two models. Three passes. One pipeline. Zero hallucinated generics.

The multi-agent design exists to prevent the most common failure mode of AI-generated diagnostics: plausible-sounding output that applies equally to every company. Each stage produces structured data passed to the next. The roadmap cannot be generated until the scores exist. The scores cannot be validated without the research context. The architecture enforces rigour.
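The staged hand-off described above can be sketched in a few lines. This is an illustrative outline only, not Muuvment's production code: the function names, data shapes, and stub logic are assumptions, with simple local stubs standing in for the Perplexity Sonar and Claude calls.

```python
from dataclasses import dataclass


@dataclass
class ResearchContext:
    """Structured output of the research pass (phase 1)."""
    public_signals: dict
    flags: list


@dataclass
class ScoreProfile:
    """Structured output of the scoring pass (phase 2)."""
    dimensions: dict
    overall: float


# Stubs stand in for the model calls: the real pipeline would call
# Perplexity Sonar in phase 1 and Claude in phases 2 and 3.
def research_company(company: dict) -> ResearchContext:
    return ResearchContext(public_signals={"website": company["website"]}, flags=[])


def score_readiness(answers: dict, ctx: ResearchContext) -> ScoreProfile:
    # Toy scoring: treat each answer as a 0-100 dimension score.
    dims = {q: float(v) for q, v in answers.items()}
    return ScoreProfile(dimensions=dims, overall=sum(dims.values()) / len(dims))


def generate_roadmap(scores: ScoreProfile, ctx: ResearchContext) -> dict:
    # Toy roadmap: the weakest dimension becomes the first move.
    weakest = min(scores.dimensions, key=scores.dimensions.get)
    return {"first_move": f"Close the {weakest} gap", "overall": scores.overall}


def run_pipeline(answers: dict, company: dict) -> dict:
    ctx = research_company(company)          # phase 1: research the footprint
    scores = score_readiness(answers, ctx)   # phase 2: cannot run without ctx
    return generate_roadmap(scores, ctx)     # phase 3: cannot run without scores
```

The point of the structure is visible in the signatures: each phase consumes the typed output of the one before it, so the roadmap literally cannot be produced without scores, and scores cannot be produced without research context.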

III. The Six Scoring Dimensions

Readiness is not a single number. It is a profile across six dimensions, each weighted by its practical impact on whether an AI programme succeeds in production. The weights reflect eight years of implementation experience across mid-market deployments.

Data Readiness (25%)

The most heavily weighted dimension because it is the most common failure point. Covers data quality, accessibility, labelling, lineage, and whether the data you have is actually the data the use case requires. A company with a great strategy and poor data will fail.

Governance Capacity (20%)

Covers decision-making authority over AI, policy existence and enforcement, risk appetite documentation, and whether there is a named owner for AI outcomes. Governance gaps are the second most common cause of production failure and the most expensive to fix retroactively.

Strategic Alignment (20%)

Assesses whether AI investment connects to a defined business problem with measurable outcomes, whether leadership has a coherent point of view on AI, and whether the organisation has a sequenced strategy rather than a collection of disconnected pilots.

People and Culture (15%)

Evaluates AI literacy across the organisation, the presence of internal champions, change management capability, and whether the culture supports the experimentation and iteration that AI development requires. Underweighted by most frameworks. Overrepresented in postmortems.

Execution Capability (15%)

Covers technical infrastructure, vendor management maturity, the ability to move from pilot to production, and whether the organisation has completed any prior AI or data projects. Prior execution experience is the strongest single predictor of future success.

Regulatory Position (5%)

Maps your jurisdiction and industry against applicable AI regulations, flags high-risk obligation areas, and assesses your current compliance posture. Weighted at 5% in the overall score but surfaced prominently in the report because non-compliance is a binary blocker, not a gradient risk.
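Taken together, the weights above compose into the single 0-to-100 score as a weighted average. A minimal sketch of that arithmetic, using the published weights; the dictionary keys and function name are illustrative, not the assessment's internal identifiers:

```python
# Weights as listed in the six-dimension model; they sum to 1.0.
WEIGHTS = {
    "data_readiness": 0.25,
    "governance_capacity": 0.20,
    "strategic_alignment": 0.20,
    "people_and_culture": 0.15,
    "execution_capability": 0.15,
    "regulatory_position": 0.05,
}


def overall_score(dimension_scores: dict) -> float:
    """Weighted average of per-dimension scores (each 0-100)."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return round(sum(WEIGHTS[d] * dimension_scores[d] for d in WEIGHTS), 1)
```

The weighting shows why data is the dominant term: a company scoring 80 on every dimension except a 40 on data readiness lands at 0.25 × 40 + 0.75 × 80 = 70 overall, a ten-point drag from a single weak dimension.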

IV. What You Get

The assessment produces six outputs, delivered instantly in the browser the moment you complete the final question. No email is required to see your results; an optional email address lets you receive a formatted PDF copy of the full report.
Score: Your overall AI maturity score, 0 to 100, with a dimension-level breakdown and peer context.
Gaps: A ranked list of critical gaps, ordered by their likely impact on your AI programme outcomes.
Roadmap: A sequenced 90-day action plan with specific first moves, dependencies, and quick wins.
Regs: Which AI regulations apply to your jurisdiction and industry, and where you have exposure.
Chat: A live AI advisor that has read your full responses and can answer questions about your specific results.
Call: An optional booking link to walk through your results with a Muuvment Labs implementation specialist.

V. How to Use It

The assessment is designed for mid-market companies: 50 to 2,000 employees, considering or actively investing in AI. It works best when completed by someone with a clear view across the organisation, typically a CEO, CTO, COO, or a senior leader with cross-functional visibility.

1. Enter your company details

Company name, website URL, industry, country, and number of employees. The website URL triggers the Perplexity research pass. The industry and country determine which regulatory questions and benchmarks apply.

2. Complete 19 questions

Questions are structured as multiple-choice with descriptive options: you select the statement that best describes your current state, not an aspirational position. Financial Services companies receive additional sector-specific questions covering model risk management and regulatory explainability. Honest answers produce a more useful report. The system flags internal inconsistencies.
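The inconsistency flagging mentioned above can be pictured as simple rules over answer pairs. A hypothetical sketch, in which the question keys, option values, and the two rules are invented for illustration and are not the assessment's actual checks:

```python
def flag_inconsistencies(answers: dict) -> list:
    """Return human-readable flags for contradictory answer pairs.

    Each rule pairs two question keys with a predicate that is True
    when the combination of answers looks contradictory.
    """
    rules = [
        ("data_maturity", "ml_in_production",
         lambda a, b: a == "no_central_data" and b == "yes",
         "Claims ML in production but reports no central data platform"),
        ("ai_policy", "ai_owner",
         lambda a, b: a == "enforced_policy" and b == "no_owner",
         "Reports an enforced AI policy but no named AI owner"),
    ]
    flags = []
    for key_a, key_b, contradicts, message in rules:
        if key_a in answers and key_b in answers:
            if contradicts(answers[key_a], answers[key_b]):
                flags.append(message)
    return flags
```

A rules-based pass like this complements model-based scoring: it is deterministic, so the same contradictory pair is flagged every time, regardless of how the model reads the rest of the responses.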

3. Review your results in the browser

Your full report appears immediately. Work through the dimension scores first, then the gap analysis, then the 90-day roadmap. The regulatory picture is surfaced separately because it requires a different kind of attention. Use the live AI advisor to ask follow-up questions about your specific results.

4. Act on the highest-priority gap

The roadmap identifies one high-leverage action to take first. Most companies benefit from starting there, building momentum and proof of capability before attempting anything more complex. If you want a second opinion on the roadmap or help executing the first step, book a call. There is no sales process on the other end, just a conversation about your results.

The assessment is not a lead generation tool. We built it because bad AI starts cause lasting damage, and the diagnostic phase is where that damage is most preventable.

If your results show you are not ready, that is useful information. It is better to know now than after the budget is spent.


The assessment is free, requires no account, and produces a full personalised report in the browser the moment you complete the final question. It takes 10 to 15 minutes.

It runs in production on the same multi-agent infrastructure we use to build AI systems for our clients. It is, in itself, a working demonstration of what a well-architected AI pipeline can do.