
AI Governance Checklist

Six areas your organization needs to address before and after deploying AI. Use this as a pre-deployment review and an ongoing governance audit. No sign-up required.

How to use this checklist

Work through each section before any AI workflow goes live, and return to it quarterly as a governance audit. Items marked with a shield icon are high-priority items that should not be skipped regardless of scope or timeline. Items marked with a clock icon are ongoing responsibilities, not one-time tasks.

01

Access Controls

Who can access, configure, and operate AI tools and workflows

  • Role-based access is defined and enforced. Each AI tool has a documented list of who can use it, who can configure it, and who can access the outputs. Access is assigned by role, not by individual request.
  • Admin credentials are not shared. API keys, admin logins, and service account credentials are assigned to named individuals or service accounts, not shared across the team.
  • Offboarding removes AI access immediately. When a team member leaves or changes roles, their access to AI tools is revoked within 24 hours. This is part of the standard offboarding process, not an afterthought.
  • Access is reviewed quarterly. A recurring review identifies accounts whose access is no longer appropriate after role changes, team restructuring, or project completion.
  • Least-privilege principle is applied. Users have access to only the AI capabilities required for their specific role. General-purpose AI access is not granted by default to all employees.
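
The role-based, least-privilege model described above can be sketched as a simple deny-by-default lookup. The role names, tool names, and actions here are illustrative placeholders, not a recommended scheme:

```python
# Sketch of role-based access enforcement for AI tools.
# Roles, tools, and actions below are hypothetical examples.

ROLE_PERMISSIONS = {
    "analyst":        {"summarizer": {"use"}},
    "workflow_admin": {"summarizer": {"use", "configure"}},
    "auditor":        {"summarizer": {"read_outputs"}},
}

def is_allowed(role: str, tool: str, action: str) -> bool:
    """Least privilege: deny unless the role explicitly grants the action."""
    return action in ROLE_PERMISSIONS.get(role, {}).get(tool, set())

print(is_allowed("analyst", "summarizer", "use"))        # True
print(is_allowed("analyst", "summarizer", "configure"))  # False
```

Because unknown roles and tools fall through to an empty set, access that was never granted is automatically denied, which mirrors the "not granted by default" item above.
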

02

Audit and Logging

Records of what the AI did, when, and what was reviewed

  • AI outputs are logged before human review. Every AI-generated output is captured in a log before a human reviews or approves it. This creates an audit trail of what the AI produced versus what was ultimately used.
  • Human review decisions are recorded. When a human reviews AI output and approves, modifies, or rejects it, that decision is recorded. The reviewer's identity and timestamp are captured.
  • Logs are retained per your policy. Log retention follows the organization's document retention policy. For regulated industries, this is typically a minimum of three to seven years. Logs are stored in a system separate from the AI tool itself.
  • Anomalous outputs trigger review. There is a defined threshold for what constitutes an anomalous AI output (unusual length, flagged keywords, confidence below a threshold), and anomalous outputs are routed to additional review automatically.
  • Logs are sampled and reviewed monthly. A sample of AI outputs and review decisions is examined monthly by the workflow owner to identify quality degradation, pattern shifts, or review process breakdowns.
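
One minimal shape for this kind of audit trail: the output event is written before any review, and the review decision is appended as a separate event tied to the same workflow. Field names and the length-based anomaly threshold are assumptions for illustration:

```python
from datetime import datetime, timezone

# Illustrative audit-log sketch: the AI output is logged before review,
# and the reviewer's decision is recorded as a separate event.
# Field names and the anomaly threshold are assumptions, not a standard.

MAX_EXPECTED_CHARS = 4000  # example threshold for "unusual length"

def log_ai_output(log: list, workflow: str, output: str) -> dict:
    entry = {
        "event": "ai_output",
        "workflow": workflow,
        "output": output,
        "anomalous": len(output) > MAX_EXPECTED_CHARS,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    log.append(entry)
    return entry

def log_review(log: list, entry: dict, reviewer: str, decision: str) -> None:
    # decision: "approved", "modified", or "rejected"
    log.append({
        "event": "review",
        "workflow": entry["workflow"],
        "reviewer": reviewer,
        "decision": decision,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

audit_log: list = []
e = log_ai_output(audit_log, "invoice-summary", "Draft summary...")
log_review(audit_log, e, reviewer="jdoe", decision="modified")
```

In practice the log would be written to a store separate from the AI tool, as the retention item above requires; a list stands in for that store here.
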

03

Data Governance

What data flows into AI systems and what protection it receives

  • Data classification determines AI eligibility. Your organization has a data classification scheme (e.g., public, internal, confidential, restricted), and a policy exists defining which classifications may be processed by which AI tools.
  • PII and sensitive data are not sent to non-compliant tools. Personally identifiable information, protected health information, financial account data, and other sensitive categories are never submitted to AI tools that have not been approved for that data type.
  • Model training data exclusions are confirmed with each vendor. You have confirmed in writing (in the vendor contract or data processing agreement) that data submitted to the AI tool is not used to train or improve the model. This applies especially to consumer-tier tools with default training data opt-ins.
  • Data flows are documented and reviewed annually. A data flow diagram exists showing what data enters each AI workflow, where it is processed, where output is stored, and who has access at each stage. This is reviewed when workflows change and at minimum annually.
  • Cross-border data transfers are addressed for LATAM operations. For operations involving data from Latin American countries (Mexico, Colombia, Brazil, Chile, and others), the applicable data protection framework has been reviewed and transfer mechanisms are in place where required.
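
The classification-to-tool policy in the first two items can be expressed as data and checked before anything is submitted. The classification tiers match the example in the checklist; the tool names and their approvals are hypothetical:

```python
# A minimal sketch of a classification-to-tool eligibility check.
# Tool names and their approved tiers are illustrative, not a policy.

TOOL_APPROVED_FOR = {
    "public-chatbot": {"public"},
    "enterprise-llm": {"public", "internal", "confidential"},
    # no tool in this example is approved for "restricted" data
}

def may_process(tool: str, classification: str) -> bool:
    """Deny by default: unknown tools or classifications are never eligible."""
    return classification in TOOL_APPROVED_FOR.get(tool, set())

print(may_process("enterprise-llm", "confidential"))  # True
print(may_process("public-chatbot", "internal"))      # False
```

Keeping the policy as data rather than scattered conditionals also makes the annual data-flow review easier: the approval table is the artifact you audit.
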

04

Human Oversight

Ensuring humans remain in the decision loop

  • No AI output goes directly to an external party without review. AI-generated communications, documents, reports, or decisions that will be seen by customers, partners, regulators, or patients must pass through a human review and approval step before delivery.
  • Review assignments are explicit, not assumed. There is a named person or role responsible for reviewing each AI workflow's output. "Someone will catch it" is not a sufficient control. Review responsibilities are documented and included in job descriptions or workflow SOPs.
  • Review queue SLAs are monitored. If AI output requires human review before use, the time-in-queue is tracked. Outputs that exceed the SLA (e.g., more than 24 hours without review) generate an escalation alert.
  • Reviewers are trained, not just assigned. The people reviewing AI output have received training on what errors to look for, how to flag quality issues, and what to do when output falls outside expected parameters. This training is documented and renewed when workflows change.
  • Override rate is tracked as a quality signal. When reviewers modify or reject AI output, that action is counted. A rising override rate is an early warning of model degradation, prompt drift, or a changed business process that the AI workflow has not adapted to.
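
The two measurable signals above, time-in-queue against the review SLA and the reviewer override rate, can be computed from the review records directly. The 24-hour SLA matches the example in the checklist; the queue structure is an assumption:

```python
from datetime import datetime, timedelta, timezone

# Sketch of two oversight signals: items past the review SLA (escalate)
# and the share of reviews that overrode the AI output (quality trend).
# The queue item shape is illustrative.

REVIEW_SLA = timedelta(hours=24)

def overdue_items(queue: list[dict], now: datetime) -> list[dict]:
    """Items awaiting review longer than the SLA should trigger escalation."""
    return [i for i in queue if now - i["queued_at"] > REVIEW_SLA]

def override_rate(decisions: list[str]) -> float:
    """Share of reviews that modified or rejected the AI output."""
    if not decisions:
        return 0.0
    overridden = sum(d in ("modified", "rejected") for d in decisions)
    return overridden / len(decisions)

now = datetime(2025, 1, 2, tzinfo=timezone.utc)
queue = [
    {"id": 1, "queued_at": now - timedelta(hours=30)},  # past the SLA
    {"id": 2, "queued_at": now - timedelta(hours=2)},
]
print([i["id"] for i in overdue_items(queue, now)])  # [1]
print(override_rate(["approved", "modified", "rejected", "approved"]))  # 0.5
```

Trending the override rate over time, rather than reading it as a single number, is what surfaces the slow degradation the checklist warns about.
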

05

Incident Response

What happens when an AI workflow produces a harmful or incorrect output

  • An AI incident is defined. Your organization has a written definition of what constitutes an AI incident: harmful output, data exposure, significant error affecting a customer or business process, or regulatory-relevant failure. Without a definition, incidents cannot be reported or tracked.
  • There is a workflow owner for every deployed AI workflow. Each AI workflow has a named owner responsible for its performance. When an incident occurs, the owner is the first responder. Ownership is not shared between multiple people without a clear escalation path.
  • The pause-and-review protocol is documented. If an AI workflow produces an incident, the team knows how to pause it immediately, what to do with in-flight requests, how to notify affected parties, and what is required before the workflow is restarted.
  • Incidents are logged and reviewed, not just resolved. Every AI incident is logged with the date, workflow involved, nature of the error, parties affected, resolution taken, and root cause. These logs are reviewed at the quarterly governance meeting to identify systemic issues.
  • Regulatory notification requirements are understood. For regulated industries, you have confirmed whether an AI incident involving customer data, patient data, or financial data triggers a notification obligation to regulators or affected individuals. This confirmation exists before an incident occurs, not after.
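
An incident record that carries the fields the logging item requires, plus the pause-first step of the protocol, might look like this sketch. The record fields come from the checklist; the workflow name and the in-memory paused set are illustrative:

```python
from dataclasses import dataclass
from datetime import date

# Sketch of an incident record with the fields the checklist requires
# (date, workflow, nature, parties affected, resolution, root cause).
# The workflow name and in-memory structures are hypothetical.

@dataclass
class AIIncident:
    occurred: date
    workflow: str
    nature: str          # e.g. "harmful output", "data exposure"
    parties_affected: str
    resolution: str = "open"
    root_cause: str = "under investigation"

paused_workflows: set[str] = set()
incident_log: list[AIIncident] = []

def report_incident(incident: AIIncident) -> None:
    """First response: log the incident and pause the workflow immediately."""
    incident_log.append(incident)
    paused_workflows.add(incident.workflow)

report_incident(AIIncident(
    occurred=date(2025, 3, 14),
    workflow="claims-triage",
    nature="significant error affecting a customer",
    parties_affected="one customer",
))
```

Defaulting resolution and root cause to "open" and "under investigation" keeps incomplete incidents visible at the quarterly review rather than silently closed.
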

06

Vendor Assessment

Evaluating AI vendors before deployment and monitoring them after

  • Every AI vendor is formally assessed before deployment. No AI tool is deployed into a business workflow based on a demo or colleague recommendation alone. A formal assessment covers data handling, compliance certifications, incident history, API stability, and pricing structure.
  • BAAs are in place where required. For any AI tool processing protected health information, a Business Associate Agreement is signed before the tool is activated. BAA status is tracked in your vendor registry, not stored in email threads.
  • API deprecation and model update policies are understood. Before deploying a workflow that depends on a vendor API, you have confirmed the vendor's policy on deprecation notice periods, model update communication, and backward compatibility. You know how much notice you will receive before a breaking change.
  • Vendor risk is reviewed annually. The AI vendor list is re-examined at least once a year for changes in data handling policy, ownership (acquisitions), pricing model, and the vendor's own AI governance and security certifications.
  • Vendor changelog communications are monitored. Someone on your team is subscribed to each AI vendor's changelog, status page, and major announcement communications. Model updates and API changes do not arrive as surprises.
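
Two of the items above, BAA status tracked in a registry and annual reviews not slipping, lend themselves to an automated check over the registry itself. The vendor names, fields, and 365-day window here are illustrative assumptions:

```python
from datetime import date, timedelta

# Sketch of a vendor-registry check: BAA status lives in structured data
# (not email threads), and vendors whose last review is more than a year
# old are flagged. Vendor names and fields are hypothetical.

vendors = [
    {"name": "SummarizeCo", "handles_phi": True, "baa_signed": True,
     "last_reviewed": date(2024, 1, 10)},
    {"name": "DraftBot",    "handles_phi": True, "baa_signed": False,
     "last_reviewed": date(2025, 2, 1)},
]

def registry_findings(registry: list[dict], today: date) -> list[str]:
    findings = []
    for v in registry:
        if v["handles_phi"] and not v["baa_signed"]:
            findings.append(f"{v['name']}: PHI without a signed BAA")
        if today - v["last_reviewed"] > timedelta(days=365):
            findings.append(f"{v['name']}: annual review overdue")
    return findings

print(registry_findings(vendors, today=date(2025, 6, 1)))
```

Running a check like this on a schedule turns the annual-review item from a calendar reminder into a standing control.
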

Need help working through this checklist?

This checklist reflects the governance structure we build alongside every AI workflow we deploy. If you work through it and find gaps, that is a useful diagnostic. If you want help closing those gaps, that is what we do.

We offer a dedicated AI Governance Assessment engagement that reviews your current AI deployments against this framework and produces a prioritized remediation plan. Most teams complete the assessment in three to four weeks.

Book a Governance Assessment

See Our Governance Framework

AI governance is a design problem, not a policy problem

The organizations that get AI governance right build it into their workflows from day one. We help you design for it from the start, not retrofit it after something goes wrong.