I have read dozens of AI strategy documents. They are all the same. Forty pages of market analysis, capability assessments, maturity models, roadmaps with three phases, and governance frameworks that reference other governance frameworks.
The executive team spent six figures on a consulting firm to produce the document. The document was presented in a two-hour meeting. The executives nodded. The document was filed in SharePoint. Nothing happened for nine months.
Then a competitor launched something built with AI that actually worked. Panic. A new round of meetings. Someone pulls up the strategy document. "We already have a plan for this." But the plan describes a capability. It does not describe a tool. And capabilities do not ship. Tools do.
Why strategy documents fail
Strategy documents fail because they answer the wrong question. They answer "what could AI do for our organization?" The right question is "what specific problem are we going to solve first, and how will we know it worked?"
The first question produces a document. The second question produces a project.
A document that describes every possible application of AI across every department is comprehensive and useless. It gives leadership the feeling of progress without any actual progress. It creates the illusion that thinking about AI is the same as doing something with AI.
The companies I work with that succeed with AI do not start with strategy documents. They start with a problem. A specific, measurable, annoying problem that someone in the organization deals with every day.
What works instead
Here is the process that actually produces results.
Pick one problem. Not the biggest problem. Not the most strategic problem. The most specific problem. "Our customer service team spends four hours a day categorizing support tickets" is specific. "We need to transform our customer experience with AI" is not.
The problem should have three characteristics. First, someone can describe the current process step by step. Second, the data needed to solve it already exists somewhere in the organization. Third, the person who does this work every day can tell you whether the solution actually works.
Build the tool. Not a proof of concept. Not a prototype. A working tool that the person with the problem can use tomorrow. This takes days to weeks, not months. If it takes months, the scope is too large.
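To make "a working tool, not a prototype" concrete: a first version of the ticket categorizer can be embarrassingly simple. This sketch is not from any real engagement; the categories and keywords are invented, and a real tool might call an LLM or a trained classifier instead. The point is that something this small is shippable in days and immediately testable by the support team.

```python
# Minimal ticket categorizer sketch. Hypothetical categories and
# keywords throughout; a real tool might use an LLM or a classifier.
# Unmatched tickets fall through to a human, so the hard cases
# are never silently auto-filed.

KEYWORDS = {
    "billing": ["invoice", "charge", "refund", "payment"],
    "access":  ["password", "login", "locked out"],
    "bug":     ["error", "crash", "broken"],
}

def categorize(ticket_text: str) -> str:
    """Return the first category whose keywords appear in the ticket,
    else 'needs_review' so a human still sees it."""
    text = ticket_text.lower()
    for category, words in KEYWORDS.items():
        if any(w in text for w in words):
            return category
    return "needs_review"

# Run over a day's tickets and show where each one landed.
tickets = [
    "I was charged twice on my last invoice",
    "Can't login, password reset link is broken",
    "General question about your roadmap",
]
for t in tickets:
    print(f"{categorize(t):>12}  {t}")
```

The support team lead corrects the misfiles, the keyword lists grow from those corrections, and you learn more about your ticket data in a week than a capability assessment would tell you in a quarter.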
Measure it. Compare the time spent, the error rate, the output quality, or whatever metric matters for this specific problem. Before and after. Numbers, not opinions.
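The measurement needs nothing fancier than two rows of numbers. The figures below are invented for illustration; substitute whatever single metric matters for your problem.

```python
# Before/after measurement sketch -- the numbers are made up.
# Track the metric that matters and compare honestly.

before = {"hours_per_day": 4.0, "error_rate": 0.12}  # manual categorization
after  = {"hours_per_day": 0.5, "error_rate": 0.08}  # tool + human review

hours_saved = before["hours_per_day"] - after["hours_per_day"]
error_delta = after["error_rate"] - before["error_rate"]

print(f"Hours saved per day: {hours_saved:.1f}")
print(f"Error rate change:   {error_delta:+.2%}")
```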
Decide what to do next based on what you learned. If the tool worked, build the next one. If it did not work, understand why and either fix it or pick a different problem. Either way, you have learned something concrete about how AI works in your specific organization with your specific data and your specific people.
The strategy emerges
After you have built three or four tools that solve real problems, something interesting happens. You have an AI strategy, but you did not write it in a document. It emerged from the work.
You know which types of problems AI solves well in your organization. You know what your data quality issues are and where they matter. You know which teams adopt AI tools quickly and which resist. You know what governance controls work in practice, not in theory.
This knowledge is worth more than any strategy document because it is grounded in what actually happened, not what a consulting firm predicted would happen based on what happened at other companies.
The maturity model trap
AI maturity models rank organizations on a scale from "AI aware" to "AI native." They suggest that you must progress through each stage sequentially, building capabilities in a specific order, establishing governance before experimentation, and aligning stakeholders before building anything.
This is wrong. You do not need organizational AI maturity to build a tool that categorizes support tickets. You need someone who understands the support ticket problem and someone who can build the tool. That is it.
Maturity models exist because consulting firms need something to assess and something to sell a remediation plan for. They create work for consultants. They do not create working tools for organizations.
Build a tool. Use it. Learn from it. Build the next one. After ten tools, you are more "AI mature" than any maturity model could have made you, because your maturity is built on real experience with real tools solving real problems.
The governance question
"But we need governance before we can build anything." No. You need governance around the thing you build. You cannot govern a theoretical AI application. You can govern a specific tool that processes specific data for a specific purpose.
When you build a tool that categorizes support tickets, the governance is concrete. What data does the tool see? Customer names and message content. What can it do with that data? Categorize it. What can it not do? Access payment information, modify customer records, or send responses. Where is the data stored? In the existing ticketing system. Who reviews the categorizations? The support team lead, daily.
That is governance. It is specific to the tool. It took ten minutes to define because the tool is specific enough to govern. Try governing "enterprise AI capabilities" in ten minutes. You cannot, because the scope is infinite.
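Those five answers are concrete enough to live next to the tool as data the team can review. This is a sketch, not a standard; the field names are illustrative, and the substance is just the answers from the paragraph above.

```python
# Governance for one specific tool, written down as reviewable data.
# Field names are illustrative, not a framework.

TICKET_CATEGORIZER_GOVERNANCE = {
    "data_seen":       ["customer_name", "message_content"],
    "allowed_actions": ["categorize"],
    "forbidden":       ["access_payment_info", "modify_records", "send_responses"],
    "storage":         "existing ticketing system",
    "reviewer":        "support team lead",
    "review_cadence":  "daily",
}

def check_action(spec: dict, action: str) -> bool:
    """Permit only actions the spec explicitly allows."""
    return action in spec["allowed_actions"]
```

A call like `check_action(TICKET_CATEGORIZER_GOVERNANCE, "send_responses")` returns `False`: anything not explicitly allowed is denied, which is the ten-minute version of governance the text describes.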
What to do Monday morning
Cancel the next AI strategy meeting. Instead, spend that hour talking to three people in your organization who do repetitive, data-heavy work. Ask each of them: what do you spend the most time on that you wish a computer could handle?
You will get three specific problems. Pick the one that is most clearly defined, has the most accessible data, and has an owner most willing to test a solution.
Build it. This week. Not a plan to build it. The actual tool. If you do not have someone who can build it, hire someone for the project or call us. But do not hire a consulting firm to write another strategy document about it.
In four weeks, you will have a working tool, a real measurement of its impact, and more practical AI knowledge than any strategy document could give you. In twelve weeks, you will have three or four tools and an emerging strategy based on reality.
That is how AI adoption actually works. Not top-down strategy. Bottom-up problem solving, one tool at a time.