AI Governance Toolkit

Eight practical templates to help mid-market companies implement AI with confidence, from policy frameworks to vendor assessments.

2026 Edition | For internal use

How to Use This Toolkit

Each template below is designed to be used independently or as part of a complete governance programme. Start with Template 1 (AI Use Policy) and Template 5 (Risk Assessment) as your foundation, then layer in the others as your AI maturity grows.

Every template includes the structure, key fields, and guidance notes. Copy them into your own documents, adapt the language to your organisation, and fill in the specifics for your context.

01 AI Acceptable Use Policy
02 AI Project Intake Form
03 Data Readiness Checklist
04 Model Selection Scorecard
05 AI Risk Assessment Matrix
06 Vendor / Third-Party AI Assessment
07 AI Ethics Review Checklist
08 AI Governance Committee Charter
Template 01
AI Acceptable Use Policy
Establishes organisation-wide rules for how employees may and may not use AI tools.

Policy sections

Section | What to include
Purpose and scope | Why this policy exists, who it applies to (all employees, contractors, vendors)
Approved AI tools | List of sanctioned tools (e.g. internal copilot, approved ChatGPT tier). Update quarterly.
Prohibited uses | No customer PII in public LLMs. No automated decisions on hiring, credit, or insurance without human review.
Data classification rules | Which data tiers can be used with which AI tools (public, internal, confidential, restricted)
Output review requirements | All AI-generated content must be reviewed by a qualified human before external use
Incident reporting | How to report AI misuse, data leaks, or unexpected outputs
Training requirements | Mandatory AI literacy training before access is granted
Review cadence | Policy reviewed every six months or when regulations change

Implementation tip: Start with a simple one-page version. A short policy that people actually read is better than a 30-page document nobody opens.
Template 02
AI Project Intake Form
Standardised form for evaluating new AI use cases before they receive investment or resources.

Intake fields

Field | Purpose
Project name and sponsor | Clear ownership and executive accountability
Business problem statement | What problem does this solve? What happens if we do nothing?
Proposed AI approach | Build vs buy vs partner. Model type. Integration points.
Data requirements | What data is needed? Where does it live? Is it clean?
Success metrics | How will we measure ROI? What does "good" look like at 30/60/90 days?
Risk classification | High/Medium/Low based on data sensitivity, customer impact, regulatory exposure
Regulatory considerations | Which regulations apply? Do we need legal review?
Resource requirements | People, compute, budget, timeline
Exit criteria | Under what conditions do we stop the project?

Implementation tip: Route all AI projects through this form, even "quick experiments." Shadow AI projects are the biggest governance risk for mid-market firms.
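The intake fields above can also be captured as a structured record, so every project, including "quick experiments," lands in the same register. A minimal sketch in Python; the class name, field names, and validation are illustrative, not part of the template:

```python
# Illustrative sketch of Template 02 as a structured record.
# Field names mirror the intake table; everything else is an assumption.
from dataclasses import dataclass, field

RISK_LEVELS = {"High", "Medium", "Low"}

@dataclass
class AIProjectIntake:
    project_name: str
    sponsor: str                      # executive accountability
    problem_statement: str            # what happens if we do nothing?
    proposed_approach: str            # build vs buy vs partner, model type
    data_requirements: str            # what data, where it lives, is it clean
    success_metrics: list[str] = field(default_factory=list)
    risk_classification: str = "Medium"
    regulatory_considerations: str = ""
    resource_requirements: str = ""
    exit_criteria: str = ""

    def __post_init__(self) -> None:
        # Enforce the High/Medium/Low classification from the template.
        if self.risk_classification not in RISK_LEVELS:
            raise ValueError(
                f"risk_classification must be one of {sorted(RISK_LEVELS)}"
            )
```

Storing intakes this way makes it easy to report to the governance committee on every active project and its risk class.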
Template 03
Data Readiness Checklist
Assess whether your data infrastructure can support the AI use case before committing resources.

Checklist

Implementation tip: Most AI projects fail because of data problems, not model problems. If more than three checklist items are unchecked, fix your data foundations before starting the AI project.
Template 04
Model Selection Scorecard
Structured comparison of AI model options (build, buy, open-source) against your specific requirements.

Scoring criteria

Criterion | Weight | Option A | Option B | Option C
Task performance accuracy | 25% | | |
Total cost of ownership (3-year) | 20% | | |
Data privacy and residency compliance | 15% | | |
Integration complexity | 10% | | |
Latency and throughput | 10% | | |
Vendor lock-in risk | 5% | | |
Explainability and auditability | 5% | | |
Team capability to maintain | 5% | | |
Scalability headroom | 5% | | |
Weighted total | 100% | | |

Implementation tip: Score each option 1-5 per criterion and multiply by the weight. But also ask: "Can our team actually operate this in production?" The best model is useless if nobody can maintain it.
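The score-times-weight arithmetic above can be sketched in a few lines. Criteria and weights mirror the scorecard table; the two example options and their 1-5 scores are hypothetical:

```python
# Weighted scorecard from Template 04: score each option 1-5 per
# criterion, multiply by the criterion weight, and sum (maximum 5.0).
CRITERIA = {
    "Task performance accuracy": 0.25,
    "Total cost of ownership (3-year)": 0.20,
    "Data privacy and residency compliance": 0.15,
    "Integration complexity": 0.10,
    "Latency and throughput": 0.10,
    "Vendor lock-in risk": 0.05,
    "Explainability and auditability": 0.05,
    "Team capability to maintain": 0.05,
    "Scalability headroom": 0.05,
}

def weighted_total(scores: dict[str, int]) -> float:
    """Scores are 1-5 per criterion; returns the weighted total."""
    assert abs(sum(CRITERIA.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(CRITERIA[c] * scores[c] for c in CRITERIA)

# Hypothetical comparison: a solid all-rounder vs a best-in-class model
# that the team would struggle to operate in production.
option_a = {c: 4 for c in CRITERIA}
option_b = {**{c: 5 for c in CRITERIA}, "Team capability to maintain": 1}

print(round(weighted_total(option_a), 2))  # 4.0
print(round(weighted_total(option_b), 2))  # 4.8
```

Note that option B still wins on the weighted total despite the team being unable to maintain it, which is exactly why the tip says the number alone should not make the decision.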
Template 05
AI Risk Assessment Matrix
Identify, classify, and plan mitigations for risks in each AI initiative.

Risk categories to assess

Risk category | Example risks | Typical severity
Data and privacy | PII exposure, consent gaps, cross-border transfer violations | High
Bias and fairness | Discriminatory outputs, underrepresented groups in training data | High
Security | Prompt injection, model theft, adversarial attacks, supply chain compromise | High
Operational | Model drift, hallucinations in production, dependency on a single vendor | Medium
Regulatory | Non-compliance with the EU AI Act, PIPEDA, sector-specific rules | High
Reputational | Public-facing AI errors, customer trust erosion | Medium
Financial | Cost overruns, failed ROI, unexpected compute scaling | Medium
People | Skills gaps, change resistance, key-person dependency | Low-Medium

For each identified risk, document

Field | Description
Risk description | Specific, concrete description of what could go wrong
Likelihood | High / Medium / Low
Impact | High / Medium / Low
Current controls | What mitigations are already in place?
Residual risk | Risk level after current controls
Additional mitigations | What else needs to be done?
Owner | Who is accountable for this risk?
Review date | When will this risk be reassessed?
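Combining the Likelihood and Impact fields into a single rating is usually done with a simple lookup. A minimal sketch, assuming a common 3x3 convention; the toolkit does not prescribe this particular mapping:

```python
# Combine High/Medium/Low likelihood and impact into one risk rating,
# as used for the Residual risk field in Template 05.
LEVELS = ("Low", "Medium", "High")

def risk_rating(likelihood: str, impact: str) -> str:
    """Sum the two level indices (0-2 each) and bucket the result."""
    score = LEVELS.index(likelihood) + LEVELS.index(impact)  # 0..4
    if score >= 3:
        return "High"
    if score >= 2:
        return "Medium"
    return "Low"

print(risk_rating("High", "High"))      # High
print(risk_rating("Medium", "Medium"))  # Medium
print(risk_rating("Low", "Medium"))     # Low
```

Run the same function twice, before and after listing current controls, and the difference between the two ratings is the value of the controls you already have.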
Template 06
Vendor / Third-Party AI Assessment
Due diligence questionnaire for evaluating AI vendors and third-party model providers.

Assessment areas

Area | Key questions
Data handling | Where is data processed? Is it used for model training? Can you opt out? What happens on contract termination?
Security posture | SOC 2 Type II? ISO 27001? Encryption at rest and in transit? Penetration testing cadence?
Model transparency | Can you audit model decisions? Is there documentation on training data sources? Bias testing results available?
Regulatory compliance | GDPR/PIPEDA compliance demonstrated? EU AI Act readiness? Sector-specific certifications?
SLA and reliability | Uptime guarantees? Latency commitments? What happens during outages? Disaster recovery plan?
Lock-in and portability | Can you export your data and fine-tuned models? What are the exit terms? Is there API compatibility with alternatives?
Liability and insurance | Who is liable for AI errors? Do they carry AI-specific insurance? Indemnification terms?
Roadmap alignment | Product roadmap shared? How do they handle breaking changes? Customer advisory board?

Implementation tip: Send this questionnaire before signing any contract. If a vendor cannot answer these questions clearly, that tells you something important about their maturity.
Template 07
AI Ethics Review Checklist
Structured review to ensure AI systems meet ethical standards before deployment.

Pre-deployment ethics review

Red lines (do not proceed if any apply)

Template 08
AI Governance Committee Charter
Establishes the structure, mandate, and operating rhythm for your AI governance body.

Charter sections

Section | Content
Mission | Ensure AI is deployed responsibly, ethically, and in alignment with business strategy and regulatory requirements
Scope | All AI and ML initiatives, including third-party AI tools used by any department
Membership | CTO/CIO (chair), Legal/Compliance, Data/Analytics lead, Business unit representative, HR, Risk, External advisor (optional)
Decision authority | Approve or reject high-risk AI projects. Mandate risk mitigations. Set policy. Escalate to the board.
Meeting cadence | Monthly for the first year, then quarterly. Emergency sessions as needed.
Reporting | Quarterly report to the executive team. Annual report to the board. Dashboard of active AI projects and risk status.
Budget | Governance tooling, external audits, training programmes, legal counsel
Review | Charter reviewed annually or when the regulatory landscape changes materially

Standing agenda items

  1. New AI project intake review (approve/reject/request more information)
  2. Active project risk status updates
  3. Incident review (any AI-related incidents since last meeting)
  4. Regulatory and landscape updates
  5. Policy review and updates
  6. Training and capability building progress
Implementation tip: For companies under 100 employees, this does not need to be a formal standing committee. A monthly 30-minute check-in between the CTO, a business leader, and your legal contact is sufficient to start.

Getting Started

You do not need to implement all 8 templates at once. Here is a recommended sequencing based on your AI maturity:

Stage | Templates to implement | Timeline
Foundation (start here) | 01 AI Use Policy, 05 Risk Assessment | Weeks 1-2
Structure | 02 Project Intake, 08 Governance Charter | Weeks 3-4
Depth | 03 Data Readiness, 04 Model Scorecard | Month 2
Maturity | 06 Vendor Assessment, 07 Ethics Review | Month 3
Need help implementing these? Muuvment Labs helps mid-market companies go from templates to working governance programmes. Start with our free AI Readiness Assessment at labs.muuvment.co, or book a strategy call to discuss your specific situation.