I sit in on AI budget presentations. Not as a board member — as the person who built the tools being discussed, or as an advisor to the team presenting. I have watched executives present AI investments to boards, and I have watched boards approve budgets that should not have been approved while questioning budgets that should have been obvious approvals.
The pattern is always the same. The presenting team shows impressive capability demos. They reference market trends. They cite competitor activity. They show a projected ROI that assumes everything goes perfectly. The board asks a few questions, mostly about timeline and headcount, and approves.
Twelve months later, the project has spent 80% of its budget and delivered 20% of its projected value. Nobody is surprised except the board.
Here are the three questions that would change the outcome.
Question 1: What specific problem does this solve, and how do we measure whether it is solved?
This sounds obvious. It is not asked often enough, and when it is asked, the answers are usually too vague to be useful.
Bad answer: "This AI initiative will improve our customer experience." That is a goal, not a problem. You cannot measure "improved customer experience" against a specific investment.
Better answer: "Our customer service team currently takes an average of 47 minutes to resolve a technical support ticket. We believe AI-assisted triage and response suggestion can reduce that to 20 minutes."
The better answer gives the board three things: a specific metric (resolution time), a current baseline (47 minutes), and a target (20 minutes). In six months, the board can look at the resolution time metric and determine whether the investment is working.
The board should push until the answer includes a number that exists today and a number that should exist after the investment. If the presenting team cannot identify the current metric, they do not understand the problem well enough to solve it. If they cannot commit to a target, they do not have enough confidence in the solution to deserve the budget.
This question also eliminates the most wasteful category of AI spending: projects that are solutions looking for problems. "We should have an AI strategy" is not a problem statement. "Our claims processing takes 5 days and our competitor does it in 2" is a problem statement. Boards should fund problem statements, not strategy statements.
Question 2: What is the smallest version of this that we can build and test in 30 days?
AI projects fail most often because of scope, not technology. The presenting team describes a comprehensive platform that will take 18 months and $2 million. The board approves the $2 million based on the comprehensive vision. At month 6, the team discovers that a foundational assumption was wrong, but they have already built infrastructure for the comprehensive version and cannot easily pivot.
The 30-day question forces the presenting team to identify the core value proposition and separate it from the surrounding complexity. If the project is "AI-powered claims processing," the 30-day version might be "AI-assisted document classification for the three most common claim types."
The 30-day version costs a fraction of the full budget. It produces a real result that can be measured. It reveals whether the underlying approach works with the organization's actual data, actual processes, and actual users. If it works, the board approves the next phase with confidence based on evidence. If it does not work, the board has lost a small investment instead of a large one.
Every AI project I have seen succeed started small and expanded based on results. Every AI project I have seen fail started large and tried to deliver everything at once.
The board should be skeptical of any AI proposal that cannot identify a meaningful 30-day deliverable. If the team says "the architecture requires 6 months before we can show any results," the architecture is wrong. Modern AI tools can produce working solutions in days, not months. A 6-month runway before visible results is not a technology constraint. It is a planning failure.
Question 3: Who specifically is accountable for the outcome, and what happens if the target is not met?
AI budgets often have shared ownership, which means no ownership. "The AI initiative is a partnership between IT, operations, and the innovation team." Translation: if it fails, everyone will point to someone else.
The board should require a single accountable person — by name, not by title — who owns the outcome metric identified in Question 1. That person's performance evaluation should include the success or failure of the AI investment. Not their effort. Not the number of models deployed. The outcome.
This question changes behavior immediately. When nobody is personally accountable for an AI project's outcome, the project optimizes for activity — meetings held, models trained, dashboards built, presentations delivered. When one person is accountable for the outcome, the project optimizes for the metric — did resolution time drop from 47 minutes to 20 minutes?
The board should also ask what happens if the target is not met at the 30-day checkpoint. The answer should not be "we will reassess." The answer should be specific: "If document classification accuracy is below 85% at 30 days, we will either retrain with a larger dataset or pivot to a semi-automated approach. The decision will be made by [name] within one week of the checkpoint."
This is not about punishing failure. It is about ensuring that failure is detected early and responded to quickly instead of being hidden in status reports that emphasize activity over outcomes.
What these questions actually do
These three questions do not require the board to understand AI technology. They require the board to apply the same discipline to AI investments that they apply to every other investment: specificity, incremental proof, and accountability.
A board member does not need to know the difference between a large language model and a random forest to ask "what specific metric will this change?" They do not need to understand neural network architecture to ask "what can you show me in 30 days?" They do not need a computer science degree to ask "who is accountable?"
The technology changes every six months. The questions do not change. Any investment — AI or otherwise — should be able to answer what problem it solves, what the smallest proof of value looks like, and who owns the result.
The investment that should not have been approved
I watched a board approve a $1.4 million "AI transformation" budget for a manufacturing company. The presentation was polished. The market analysis was thorough. The vendor's demo was impressive. The board approved it unanimously.
Eighteen months later, the company had an AI-powered dashboard showing manufacturing metrics that were already available in their existing ERP system. The metrics were displayed with nicer charts. The AI component was a natural language query feature that 4 of 300 employees used. The other 296 used the same Excel exports they had always used.
Total value delivered: approximately $20,000 per year in time savings for those 4 employees. Total cost: $1.4 million plus $180,000 per year in licensing. The board never asked the three questions. If they had, the presenting team would have struggled to identify a specific metric, could not have proposed a 30-day proof, and would have been unable to name a single person accountable for the outcome.
The investment that should have been larger
At the same company, a production supervisor had built, on his own initiative, a tool that predicted machine maintenance needs from vibration sensor data. He spent $8,000 on the tool. It reduced unplanned downtime by 23% in his department, saving approximately $340,000 per year.
The board did not know about this tool until I mentioned it in a follow-up meeting. The production supervisor had not asked for budget because $8,000 came out of his department's discretionary spending. The $340,000 in savings was documented in his quarterly operations report, but nobody connected it to the AI tool because it was not part of the official AI initiative.
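For contrast, the same arithmetic applied to the supervisor's tool. Again a minimal sketch built on the rounded figures above, assuming the savings accrue evenly through the year:

```python
# Payback arithmetic for the supervisor's $8,000 tool.
# Figures are from the case above; even accrual of savings across the
# year is an assumption for illustration.

tool_cost = 8_000
annual_savings = 340_000

payback_days = tool_cost / annual_savings * 365          # ~9 days
first_year_return = (annual_savings - tool_cost) / tool_cost

print(f"Payback period: {payback_days:.0f} days")
print(f"First-year return: {first_year_return:.0%}")     # ~4150%
```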
The three questions work in both directions. They prevent overinvestment in vague initiatives. They also surface underinvestment in specific tools that are already proving their value.