How to Use This Toolkit
Each template below is designed to be used independently or as part of a complete governance programme. Start with Template 01 (AI Acceptable Use Policy) and Template 05 (AI Risk Assessment Matrix) as your foundation, then layer in the others as your AI maturity grows.
Every template includes the structure, key fields, and guidance notes. Copy them into your own documents, adapt the language to your organisation, and fill in the specifics for your context.
Template 01
AI Acceptable Use Policy
Establishes organisation-wide rules for how employees may and may not use AI tools.
Policy sections
| Section | What to include |
| --- | --- |
| Purpose and scope | Why this policy exists, who it applies to (all employees, contractors, vendors) |
| Approved AI tools | List of sanctioned tools (e.g. internal copilot, approved ChatGPT tier). Update quarterly. |
| Prohibited uses | No customer PII in public LLMs. No automated decisions on hiring, credit, or insurance without human review. |
| Data classification rules | What data tiers can be used with which AI tools (public, internal, confidential, restricted) |
| Output review requirements | All AI-generated content must be reviewed by a qualified human before external use |
| Incident reporting | How to report AI misuse, data leaks, or unexpected outputs |
| Training requirements | Mandatory AI literacy training before access is granted |
| Review cadence | Policy reviewed every 6 months or when regulations change |
Implementation tip
Start with a simple one-page version. A short policy that people actually read is better than a 30-page document nobody opens.
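One way to make the "Data classification rules" section enforceable rather than aspirational is to express it as a machine-readable mapping that tooling (a proxy, gateway, or audit script) can check. The sketch below is a minimal illustration of that idea; the tool names and tier assignments are placeholders, not prescriptions from the template.

```python
# Hypothetical policy-as-code sketch: which data classification tiers
# may be used with which AI tools. Tool names and tier assignments
# are illustrative placeholders -- adapt them to your own policy.

ALLOWED_TIERS = {
    "internal-copilot": {"public", "internal", "confidential"},
    "approved-chatgpt": {"public", "internal"},
    "public-llm":       {"public"},
}

def is_use_permitted(tool: str, data_tier: str) -> bool:
    """Return True only if the policy permits this data tier with this tool."""
    # Unknown tools are denied by default -- a safe-by-default posture.
    return data_tier in ALLOWED_TIERS.get(tool, set())

print(is_use_permitted("public-llm", "confidential"))    # prohibited use
print(is_use_permitted("internal-copilot", "internal"))  # permitted
```

Encoding the rules this way also makes the quarterly "approved tools" update a one-line change instead of a document rewrite.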
Template 02
AI Project Intake Form
Standardised form for evaluating new AI use cases before they receive investment or resources.
Intake fields
| Field | Purpose |
| --- | --- |
| Project name and sponsor | Clear ownership and executive accountability |
| Business problem statement | What problem does this solve? What happens if we do nothing? |
| Proposed AI approach | Build vs buy vs partner. Model type. Integration points. |
| Data requirements | What data is needed? Where does it live? Is it clean? |
| Success metrics | How will we measure ROI? What does "good" look like at 30/60/90 days? |
| Risk classification | High/Medium/Low based on data sensitivity, customer impact, regulatory exposure |
| Regulatory considerations | Which regulations apply? Do we need legal review? |
| Resource requirements | People, compute, budget, timeline |
| Exit criteria | Under what conditions do we stop the project? |
Implementation tip
Route all AI projects through this form, even "quick experiments." Shadow AI projects are the biggest governance risk for mid-market firms.
Template 03
Data Readiness Checklist
Assess whether your data infrastructure can support the AI use case before committing resources.
Checklist
- Data sources identified and documented
- Data ownership and stewardship assigned
- Data quality assessed (completeness, accuracy, consistency, timeliness)
- Data lineage documented from source to consumption
- PII and sensitive data fields identified and classified
- Data access controls and permissions reviewed
- Data retention and deletion policies in place
- Historical data volume sufficient for training/fine-tuning (if applicable)
- Data pipeline reliability verified (uptime, latency, error rates)
- Bias assessment completed on training data
- Cross-border data transfer requirements reviewed
- Data sharing agreements in place with third parties
- Backup and recovery procedures tested
- Data catalogue or inventory up to date
Implementation tip
Most AI projects fail because of data problems, not model problems. If more than 3 items above are unchecked, fix data foundations before starting the AI project.
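The "more than 3 unchecked" rule above is easy to automate if you track the checklist in a structured form. This is an illustrative sketch only; the item names and the threshold of 3 simply mirror the checklist and tip above.

```python
# Illustrative sketch: represent the data-readiness checklist as a dict
# and apply the "more than 3 unchecked" rule from the implementation tip.

CHECKLIST = {
    "Data sources identified and documented": True,
    "Data ownership and stewardship assigned": True,
    "Data quality assessed": False,
    "Data lineage documented": False,
    "PII and sensitive fields classified": True,
    "Bias assessment completed": False,
    "Data catalogue up to date": False,
}

def ready_to_proceed(checklist: dict, max_unchecked: int = 3) -> bool:
    """Return True if the number of unchecked items is within tolerance."""
    unchecked = [item for item, done in checklist.items() if not done]
    for item in unchecked:
        print(f"UNCHECKED: {item}")
    return len(unchecked) <= max_unchecked

print("Proceed" if ready_to_proceed(CHECKLIST) else "Fix data foundations first")
```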
Template 04
Model Selection Scorecard
Structured comparison of AI model options (build, buy, open-source) against your specific requirements.
Scoring criteria
| Criterion | Weight | Option A | Option B | Option C |
| --- | --- | --- | --- | --- |
| Task performance accuracy | 25% | | | |
| Total cost of ownership (3-year) | 20% | | | |
| Data privacy and residency compliance | 15% | | | |
| Integration complexity | 10% | | | |
| Latency and throughput | 10% | | | |
| Vendor lock-in risk | 5% | | | |
| Explainability and auditability | 5% | | | |
| Team capability to maintain | 5% | | | |
| Scalability headroom | 5% | | | |
| Weighted total | 100% | | | |
Implementation tip
Score each option 1-5 per criterion, multiply each score by its weight, and sum to get the weighted total. But also ask: "Can our team actually operate this in production?" The best model is useless if nobody can maintain it.
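The scorecard arithmetic can be sketched in a few lines. The weights below are taken from the table above; the example scores for Option A are made up for illustration.

```python
# Weighted scorecard sketch: 1-5 scores per criterion, weights from the
# scorecard above (they sum to 100%), result is on the same 1-5 scale.

WEIGHTS = {
    "Task performance accuracy": 0.25,
    "Total cost of ownership (3-year)": 0.20,
    "Data privacy and residency compliance": 0.15,
    "Integration complexity": 0.10,
    "Latency and throughput": 0.10,
    "Vendor lock-in risk": 0.05,
    "Explainability and auditability": 0.05,
    "Team capability to maintain": 0.05,
    "Scalability headroom": 0.05,
}

def weighted_total(scores: dict) -> float:
    """Weighted sum of 1-5 scores across all criteria."""
    assert set(scores) == set(WEIGHTS), "score every criterion"
    return sum(WEIGHTS[c] * s for c, s in scores.items())

# Hypothetical Option A: strong everywhere except team capability.
option_a = {c: 4 for c in WEIGHTS}
option_a["Team capability to maintain"] = 2
print(f"Option A weighted total: {weighted_total(option_a):.2f}")
```

Note how a low score on a 5% criterion barely moves the total; that is exactly why the tip above says to sanity-check maintainability separately rather than trusting the weighted number alone.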
Template 05
AI Risk Assessment Matrix
Identify, classify, and plan mitigations for risks in each AI initiative.
Risk categories to assess
| Category | Example risks | Typical severity |
| --- | --- | --- |
| Data and privacy | PII exposure, consent gaps, cross-border transfer violations | High |
| Bias and fairness | Discriminatory outputs, underrepresented groups in training data | High |
| Security | Prompt injection, model theft, adversarial attacks, supply chain compromise | High |
| Operational | Model drift, hallucinations in production, dependency on single vendor | Medium |
| Regulatory | Non-compliance with EU AI Act, PIPEDA, sector-specific rules | High |
| Reputational | Public-facing AI errors, customer trust erosion | Medium |
| Financial | Cost overruns, failed ROI, unexpected compute scaling | Medium |
| People | Skills gaps, change resistance, key person dependency | Low-Medium |
For each identified risk, document
| Field | Description |
| --- | --- |
| Risk description | Specific, concrete description of what could go wrong |
| Likelihood | High / Medium / Low |
| Impact | High / Medium / Low |
| Current controls | What mitigations are already in place? |
| Residual risk | Risk level after current controls |
| Additional mitigations | What else needs to be done? |
| Owner | Who is accountable for this risk? |
| Review date | When will this risk be reassessed? |
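The Likelihood and Impact fields above combine into an overall rating in the standard risk-matrix fashion. The sketch below shows one common way to do that; the specific rating bands are an assumption for illustration, not something the template prescribes.

```python
# Illustrative risk-matrix sketch: combine Likelihood x Impact into an
# overall rating. The score bands below are an assumed convention.

LEVELS = {"Low": 1, "Medium": 2, "High": 3}

def risk_rating(likelihood: str, impact: str) -> str:
    """Map High/Medium/Low likelihood and impact to an overall rating."""
    score = LEVELS[likelihood] * LEVELS[impact]  # ranges from 1 to 9
    if score >= 6:
        return "High"
    if score >= 3:
        return "Medium"
    return "Low"

print(risk_rating("High", "Medium"))  # e.g. likely event, moderate damage
print(risk_rating("Low", "High"))     # e.g. rare event, severe damage
```

Run the same function twice per risk, once with current controls (residual risk) and once without, to populate both fields in the table consistently.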
Template 06
Vendor / Third-Party AI Assessment
Due diligence questionnaire for evaluating AI vendors and third-party model providers.
Assessment areas
| Area | Key questions |
| --- | --- |
| Data handling | Where is data processed? Is it used for model training? Can you opt out? What happens on contract termination? |
| Security posture | SOC 2 Type II? ISO 27001? Encryption at rest and in transit? Penetration testing cadence? |
| Model transparency | Can you audit model decisions? Is there documentation on training data sources? Bias testing results available? |
| Regulatory compliance | GDPR/PIPEDA compliance demonstrated? EU AI Act readiness? Sector-specific certifications? |
| SLA and reliability | Uptime guarantees? Latency commitments? What happens during outages? Disaster recovery plan? |
| Lock-in and portability | Can you export your data and fine-tuned models? What are exit terms? Is there API compatibility with alternatives? |
| Liability and insurance | Who is liable for AI errors? Do they carry AI-specific insurance? Indemnification terms? |
| Roadmap alignment | Product roadmap shared? How do they handle breaking changes? Customer advisory board? |
Implementation tip
Send this questionnaire before signing any contract. If a vendor cannot answer these questions clearly, that tells you something important about their maturity.
Template 07
AI Ethics Review Checklist
Structured review to ensure AI systems meet ethical standards before deployment.
Pre-deployment ethics review
- Intended use case clearly documented and bounded
- Potential for harm assessed across all affected groups
- Bias testing completed on representative data samples
- Fairness metrics defined and measured (demographic parity, equal opportunity, etc.)
- Transparency: users informed they are interacting with AI
- Explainability: decisions can be explained in plain language
- Human oversight: escalation path to human decision-maker exists
- Consent: data subjects informed and consent obtained where required
- Proportionality: AI use is proportionate to the task (not using a sledgehammer for a nail)
- Reversibility: AI decisions can be overridden or reversed
- Monitoring plan: ongoing performance and fairness monitoring in place
- Feedback mechanism: users can report concerns or errors
- Documentation: model card or system documentation completed
- Legal review: sign-off from legal/compliance on regulatory requirements
Red lines (do not proceed if any apply)
- System makes autonomous decisions on employment, credit, insurance, or criminal justice without human review
- Training data contains known biases that cannot be mitigated
- No mechanism exists to explain or audit decisions
- System processes children's data without specific safeguards
- Regulatory requirements are unmet or unclear
Template 08
AI Governance Committee Charter
Establishes the structure, mandate, and operating rhythm for your AI governance body.
Charter sections
| Section | Content |
| --- | --- |
| Mission | Ensure AI is deployed responsibly, ethically, and in alignment with business strategy and regulatory requirements |
| Scope | All AI and ML initiatives, including third-party AI tools used by any department |
| Membership | CTO/CIO (chair), Legal/Compliance, Data/Analytics lead, Business unit representative, HR, Risk, External advisor (optional) |
| Decision authority | Approve/reject high-risk AI projects. Mandate risk mitigations. Set policy. Escalate to board. |
| Meeting cadence | Monthly for the first year, then quarterly. Emergency sessions as needed. |
| Reporting | Quarterly report to executive team. Annual report to board. Dashboard of active AI projects and risk status. |
| Budget | Governance tooling, external audits, training programmes, legal counsel |
| Review | Charter reviewed annually or when regulatory landscape changes materially |
Standing agenda items
- New AI project intake review (approve/reject/request more information)
- Active project risk status updates
- Incident review (any AI-related incidents since last meeting)
- Regulatory and landscape updates
- Policy review and updates
- Training and capability building progress
Implementation tip
For companies under 100 employees, this does not need to be a formal standing committee. A monthly 30-minute check-in between the CTO, a business leader, and your legal contact is sufficient to start.
Getting Started
You do not need to implement all 8 templates at once. Here is a recommended sequencing based on your AI maturity:
| Stage | Templates to implement | Timeline |
| --- | --- | --- |
| Foundation (start here) | 01 - AI Use Policy, 05 - Risk Assessment | Week 1-2 |
| Structure | 02 - Project Intake, 08 - Governance Charter | Week 3-4 |
| Depth | 03 - Data Readiness, 04 - Model Scorecard | Month 2 |
| Maturity | 06 - Vendor Assessment, 07 - Ethics Review | Month 3 |
Need help implementing these?
Muuvment Labs helps mid-market companies go from templates to working governance programmes. Start with our free AI Readiness Assessment at labs.muuvment.co, or book a strategy call to discuss your specific situation.