Most mid-market companies starting an AI programme make the same mistake: they skip the diagnostic. They pick a use case, select a vendor, and start spending. The readiness gaps surface later, expensively, once the pilot has already failed or the compliance team has raised its hand.
We built the Muuvment AI Readiness Assessment to solve that problem. It is a free, multi-agent diagnostic that researches your organisation, scores your AI readiness across six weighted dimensions, maps your regulatory obligations, and produces a sequenced 90-day roadmap. It takes 10 to 15 minutes. It requires no account, no sales call, and no obligation.
This article explains what the assessment does, why we built it this way, and what you get when you complete it.
Why We Built It
We wanted something genuinely useful: a tool that treats the question of AI readiness with the same rigour a good implementation partner would, but at zero cost and in a fraction of the time. The constraint forced discipline. Every question had to earn its place. Every output had to be actionable.
The assessment is also the foundation of how we work with clients. When a company approaches Muuvment Labs, the assessment gives us a shared, structured read on their position before the first conversation. It replaces weeks of intake with a common language and a set of scores we can both work from.
How It Works: The Multi-Agent Pipeline
The pipeline runs three phases in sequence.
Company research via Perplexity Sonar
Before scoring begins, Perplexity Sonar searches your company's public footprint: website, job postings, press releases, LinkedIn, regulatory filings where applicable. This gives the scoring model context it cannot derive from your answers alone, and it flags when your stated position differs meaningfully from your public signal.
Readiness scoring via Claude
Claude scores your responses across six dimensions using deterministic anchors. Each answer maps to a maturity level. Consistency checks flag contradictions. Industry-specific benchmarks adjust the score for your sector. Financial Services organisations receive additional questions covering model risk management, algorithmic fairness, and regulatory explainability obligations specific to that context.
Roadmap generation via Claude
A second Claude pass synthesises your scores, your regulatory picture, and your stated priorities into a sequenced 90-day roadmap. The roadmap identifies your highest-leverage first moves, the gaps that need to close before you can proceed, and the quick wins available within your current posture. It is personalised to your company name, jurisdiction, and industry.
The multi-agent design exists to prevent the most common failure mode of AI-generated diagnostics: plausible-sounding output that applies equally to every company. Each stage produces structured data passed to the next. The roadmap cannot be generated until the scores exist. The scores cannot be validated without the research context. The architecture enforces rigour.
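The three-phase structure can be sketched in code. This is an illustrative outline, not the production implementation: the type names, fields, and stubbed logic are all hypothetical, and the real research and scoring phases call Perplexity Sonar and Claude rather than running locally. The point it demonstrates is the one above: each phase returns structured data that the next phase requires, so the roadmap literally cannot be built before the scores exist.

```python
from dataclasses import dataclass

@dataclass
class ResearchContext:
    company: str
    signals: list[str]            # public-footprint findings

@dataclass
class Scores:
    by_dimension: dict[str, int]  # 0-100 per dimension
    flags: list[str]              # consistency warnings

@dataclass
class Roadmap:
    first_move: str
    steps: list[str]

def research(company: str, url: str) -> ResearchContext:
    # Phase 1: collect the public footprint (stubbed; the real pass
    # calls Perplexity Sonar).
    return ResearchContext(company, [f"website:{url}"])

def score_answers(answers: dict[str, int], ctx: ResearchContext) -> Scores:
    # Phase 2: map each answer to a maturity level (stubbed as a simple
    # scale) and flag gaps against the research context.
    by_dim = {dim: level * 25 for dim, level in answers.items()}
    flags = [] if ctx.signals else ["no public signal found"]
    return Scores(by_dim, flags)

def build_roadmap(scores: Scores, ctx: ResearchContext) -> Roadmap:
    # Phase 3: requires a Scores value -- the signature enforces ordering.
    weakest = min(scores.by_dimension, key=scores.by_dimension.get)
    return Roadmap(first_move=f"close the {weakest} gap",
                   steps=[f"address {d}" for d in sorted(scores.by_dimension)])

ctx = research("Acme Ltd", "https://acme.example")
scores = score_answers({"data": 1, "governance": 3}, ctx)
roadmap = build_roadmap(scores, ctx)
```

Because each function consumes the previous phase's output type, skipping a phase is a type error rather than a silent quality problem.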
The Six Scoring Dimensions
Data Readiness
The most heavily weighted dimension because it is the most common failure point. Covers data quality, accessibility, labelling, lineage, and whether the data you have is actually the data the use case requires. A company with a great strategy and poor data will fail.
Governance Capacity
Covers decision-making authority over AI, policy existence and enforcement, risk appetite documentation, and whether there is a named owner for AI outcomes. Governance gaps are the second most common cause of production failure and the most expensive to fix retroactively.
Strategic Alignment
Assesses whether AI investment connects to a defined business problem with measurable outcomes, whether leadership has a coherent point of view on AI, and whether the organisation has a sequenced strategy rather than a collection of disconnected pilots.
People and Culture
Evaluates AI literacy across the organisation, the presence of internal champions, change management capability, and whether the culture supports the experimentation and iteration that AI development requires. Underweighted by most frameworks. Overrepresented in postmortems.
Execution Capability
Covers technical infrastructure, vendor management maturity, the ability to move from pilot to production, and whether the organisation has completed any prior AI or data projects. Prior execution experience is the strongest single predictor of future success.
Regulatory Position
Maps your jurisdiction and industry against applicable AI regulations, flags high-risk obligation areas, and assesses your current compliance posture. Weighted at 5% in the overall score but surfaced prominently in the report because non-compliance is a binary blocker, not a gradient risk.
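The weighting scheme can be sketched as a simple weighted mean with the regulatory check pulled out separately. Two facts here come from the description above: Data Readiness carries the heaviest weight and Regulatory Position carries 5%. The remaining weights, the dimension keys, and the blocker threshold are illustrative assumptions, not the production values.

```python
# Illustrative weights. Data Readiness heaviest and Regulatory Position
# at 5% match the article; the other four are assumptions for this sketch.
WEIGHTS = {
    "data_readiness": 0.30,
    "governance_capacity": 0.20,
    "strategic_alignment": 0.15,
    "people_and_culture": 0.15,
    "execution_capability": 0.15,
    "regulatory_position": 0.05,
}

def overall_score(dimension_scores: dict[str, float]) -> float:
    # Weighted mean of the six dimension scores (each 0-100).
    return sum(WEIGHTS[d] * s for d, s in dimension_scores.items())

def regulatory_blocked(dimension_scores: dict[str, float],
                       threshold: float = 40.0) -> bool:
    # Non-compliance is a binary blocker, so it is surfaced on its own
    # rather than diluted into the weighted average.
    return dimension_scores["regulatory_position"] < threshold
```

Keeping the regulatory check outside the average reflects the design choice above: a strong score elsewhere must not be able to mask a compliance blocker.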
What You Get
A full personalised report in the browser: a score for each of the six dimensions, a gap analysis, the sequenced 90-day roadmap, a separately surfaced regulatory picture, and a live AI advisor for follow-up questions about your results.
How to Use It
Enter your company details
Company name, website URL, industry, country, and number of employees. The website URL triggers the Perplexity research pass. The industry and country determine which regulatory questions and benchmarks apply.
Complete 19 questions
Questions are structured as multiple-choice with descriptive options: you select the statement that best describes your current state, not an aspirational position. Financial Services companies receive additional sector-specific questions covering model risk management and regulatory explainability. Honest answers produce a more useful report. The system flags internal inconsistencies.
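The inconsistency flagging mentioned above can be illustrated with a small rule table. The real checks run inside the Claude scoring pass; this sketch, with entirely hypothetical answer keys and rules, only shows the idea of flagging contradictory answer pairs.

```python
# Hypothetical contradiction rules: (key A, value A, key B, value B, message).
RULES = [
    ("governance_maturity", "mature",
     "named_ai_owner", "none",
     "mature governance claimed but no named owner for AI outcomes"),
    ("data_quality", "excellent",
     "data_lineage", "untracked",
     "excellent data quality claimed but lineage is untracked"),
]

def consistency_flags(answers: dict[str, str]) -> list[str]:
    # Return a message for every rule whose two answers co-occur.
    flags = []
    for key_a, val_a, key_b, val_b, msg in RULES:
        if answers.get(key_a) == val_a and answers.get(key_b) == val_b:
            flags.append(msg)
    return flags
```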
Review your results in the browser
Your full report appears immediately. Work through the dimension scores first, then the gap analysis, then the 90-day roadmap. The regulatory picture is surfaced separately because it requires a different kind of attention. Use the live AI advisor to ask follow-up questions about your specific results.
Act on the highest-priority gap
The roadmap identifies one high-leverage action to take first. Most companies benefit from starting there, building momentum and proof of capability before attempting anything more complex. If you want a second opinion on the roadmap or help executing the first step, book a call. There is no sales process on the other end, just a conversation about your results.
The assessment is not a lead generation tool. We built it because a bad start to an AI programme causes lasting damage, and the diagnostic phase is where that damage is most preventable.
If your results show you are not ready, that is useful information. It is better to know now than after the budget is spent.
The assessment is free, requires no account, and produces a full personalised report in the browser the moment you complete the final question. It takes 10 to 15 minutes.
It is a production system, running on the same multi-agent infrastructure we use to build AI systems for our clients. It is, in itself, a working demonstration of what a well-architected AI pipeline can do.