When we conduct an AI governance audit at a mid-size organization, the findings follow a predictable pattern. Here are the six categories where gaps appear consistently, and what the fix looks like for each.
1. Data inventory
The finding: the organization cannot produce a complete list of where AI-processed data lives. Customer data enters ChatGPT through the support team. Financial data enters a reporting AI through the finance team. HR data enters an AI screening tool through recruitment. Nobody has mapped this.
The fix: a simple data flow map. Not an enterprise data governance platform. A document that lists: what AI tools are in use, what data enters each tool, where that data goes, and who is responsible. This can be created in a single afternoon with interviews across department heads.
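To make the shape of such a map concrete, here is a minimal sketch as structured data. The tool names, data types, and owners are illustrative assumptions, not a recommendation of specific vendors.

```python
# A hypothetical data flow map kept as plain structured data.
# Every tool, data type, and owner below is an illustrative example.
data_flow_map = [
    {"tool": "ChatGPT", "data_in": "customer support tickets",
     "destination": "vendor cloud", "owner": "Support lead"},
    {"tool": "Reporting AI", "data_in": "financial statements",
     "destination": "vendor cloud", "owner": "Finance lead"},
    {"tool": "Screening tool", "data_in": "candidate CVs",
     "destination": "vendor cloud", "owner": "Recruitment lead"},
]

# Print one line per flow: what goes where, and who answers for it.
for row in data_flow_map:
    print(f"{row['tool']}: {row['data_in']} -> "
          f"{row['destination']} (owner: {row['owner']})")
```

The same four columns work equally well as a shared spreadsheet; the point is that each row answers "what tool, what data, where, who" in one place.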
2. Tool approval
The finding: between 40 and 70 percent of AI tools in use at mid-size companies were never officially approved. Teams adopted them independently. The tools work well. Nobody asked whether they should be using them.
The fix: a lightweight approval process. Not a ban on unapproved tools. A simple form: what tool, what data enters it, what is the business justification, have the terms of service been reviewed. Approval takes a day, not a quarter. The goal is visibility, not prohibition.
3. Oversight documentation
The finding: nobody wrote down who reviews AI outputs before they are acted on. The marketing team uses AI for customer communications. Who reviews them before they are sent? The finance team uses AI for reporting. Who verifies the numbers before they go to the board? The answer is usually "the person who generated it," which is not oversight.
The fix: one sentence per workflow. "AI-generated customer emails are reviewed by the marketing manager before sending." "AI-generated financial reports are verified against source data by the finance lead before distribution." One sentence. Per workflow. Written down.
4. Training records
The finding: no evidence that staff using AI tools received any training on appropriate use. They figured it out on their own. They learned from YouTube videos, colleagues, or trial and error.
The fix: documented training. Not a four-hour course. A 30-minute orientation that covers: what the tool does, what data is appropriate to enter, what the review process is, and what to do when the output seems wrong. Document who completed it. Update it when tools or policies change.
5. Vendor agreements
The finding: business associate agreements (BAAs) are missing where they should exist. Terms of service were not reviewed for data-handling provisions. Nobody knows whether the AI vendor uses customer data for model training. The IT team assumed procurement handled it. Procurement assumed IT handled it.
The fix: a one-page vendor review checklist. Does the vendor store input data? For how long? Can data be deleted on request? Is data used for model training? Does the vendor's security certification cover AI-processed data? Is a BAA available and signed? One page. Completed before any AI tool is approved.
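The checklist above can double as a simple approval gate: a tool is only eligible for approval once every question has a recorded answer. A minimal sketch, with hypothetical question wording taken from the list above:

```python
# Hypothetical vendor review checklist as a simple approval gate.
# The questions mirror the one-page checklist described above.
VENDOR_CHECKLIST = [
    "Does the vendor store input data?",
    "For how long is input data retained?",
    "Can data be deleted on request?",
    "Is data used for model training?",
    "Does the vendor's security certification cover AI-processed data?",
    "Is a BAA available and signed?",
]

def review_complete(answers: dict) -> bool:
    """True only when every checklist question has a non-empty answer.

    Note: this checks that each question was *answered*, not that the
    answer is acceptable -- that judgment stays with the reviewer.
    """
    return all(answers.get(q) for q in VENDOR_CHECKLIST)
```

Usage: `review_complete({q: "yes" for q in VENDOR_CHECKLIST})` returns `True`; an empty or partial form returns `False`, so the tool stays unapproved until the page is filled in.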
6. Model documentation
The finding: no record of which AI models are in use, at which version, for what purpose. The engineering team is using GPT-4. Or maybe GPT-4o. Or maybe they switched to Claude last month. Nobody documented the change.
The fix: a model registry. It can be a spreadsheet. Columns: tool name, model version, use case, data types processed, date deployed, owner. Updated when models change. Reviewed quarterly.
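A registry with those columns is small enough to generate as a CSV in a few lines. The row below is a hypothetical example, not a record of any real deployment:

```python
import csv
import io

# Columns from the registry described above.
REGISTRY_COLUMNS = ["tool", "model_version", "use_case",
                    "data_types", "date_deployed", "owner"]

# One hypothetical entry; names and dates are illustrative.
rows = [
    ["Support assistant", "gpt-4o", "ticket drafting",
     "customer messages", "2025-01-15", "Support lead"],
]

# Write the registry as CSV text (a spreadsheet would open this directly).
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(REGISTRY_COLUMNS)
writer.writerows(rows)
print(buf.getvalue())
```

When a team swaps models, the fix is one edited cell plus a new `date_deployed`, which is exactly the change history the audit looks for.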
The pattern
None of these fixes require purchasing a platform. None of them require hiring a consultant for a six-month engagement. They require someone willing to spend thirty minutes on each category, write things down, and assign ownership.
The audit reveals the gaps. The fixes are administrative, not technical. The hardest part is doing it.
Book a governance sprint. Every governance sprint starts with exactly this audit.