AI Strategy · Small Business · Process · Data Quality

Three Things That Need to Be True Before You Start Any AI Project. None of Them Are Technical.

The reason most AI projects fail has nothing to do with the model, the data pipeline, or the prompt. It has everything to do with three questions nobody bothered to answer before the project started.

Admin User
March 20, 2026
8 min read

Every week I talk to a business owner who wants to bring AI into their operation. They've seen the demos. They've read the case studies. They're ready to move.

And almost every time, within the first thirty minutes of conversation, I find the same three gaps. Not technical gaps. Organizational ones. The kind that no model and no amount of engineering can fix.

These aren't the problems that show up in the pitch deck. They're the ones that show up six weeks into the project when everything stalls and nobody can explain why.

So before you spend a dollar on AI, before you hire a vendor or spin up a pilot, answer these three questions honestly. If you can't, that's your real starting point. And honestly, that might be the most valuable work you do all year.

Your Process Lives in Somebody's Head

The first thing that needs to be true: the process you want to automate is documented.

Not perfectly documented. Not a hundred-page operations manual with flowcharts and swimlane diagrams. Just documented enough that someone other than the person who does it every day can understand how it works.

Here's what I see constantly. A company wants to automate their client onboarding process. Great use case. Real time savings. Clear ROI. We start mapping it out, and within ten minutes it becomes obvious that the "process" is actually a set of habits that three people developed independently over five years. Maria handles it one way. James handles it differently. And when Sarah covers for them, she does a third thing entirely.

None of this is written down. The variations aren't bugs in the process. They are the process. And nobody realized it because the humans doing the work absorbed those variations unconsciously. They adapted. They made judgment calls without thinking about them. They knew when to skip a step and when to add one based on context that existed nowhere except in their heads.

AI doesn't absorb variations unconsciously. AI needs rules. It needs a defined path with defined exceptions. If you tell an AI agent to "handle onboarding the way Maria does it," you need to be able to describe what Maria does. In detail. Including the parts Maria doesn't even realize she's doing.

This is why the documentation step is non-negotiable. And here's the part nobody talks about: sometimes the best outcome of an AI exploration is that you fix the process before AI is even involved. The act of documenting a workflow, of forcing it out of someone's head and onto paper, often reveals redundancies, contradictions, and wasted steps that have been invisible for years.

I've worked with companies where documenting the process saved more time than automating it would have. The AI project became unnecessary because the problem wasn't inefficiency that needed automation. It was chaos that needed structure.

"Reasonably documented with known variations" is the bar. You don't need perfection. You need enough clarity that a smart new hire could follow it after a week of training. If you can't get there, you're not ready for AI. You might not even be ready for a new employee.

Fix the process first. Everything else gets easier after that.

You Don't Know What You Actually Have

The second thing that needs to be true: you're honest about your data.

I cannot tell you how many AI projects I've seen kick off with a cheerful "we have tons of data" followed, three months later, by the quiet realization that most of it is unusable.

Data problems come in flavors, and it matters which ones you have because they require completely different responses.

Flavor one: your data is scattered across systems. Your customer information lives in a CRM, your financial data lives in QuickBooks, your project notes live in Google Docs, and your communication history lives in email and Slack. None of these systems talk to each other. This is an integration problem. It's real work and it costs real money, but it's solvable. You budget for it and you build the connections. This is not a reason to abandon an AI project.
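What "building the connections" looks like in practice is usually unglamorous: pulling records out of each system and joining them on some shared key. Here's a minimal sketch in Python, assuming hypothetical exports from a CRM and an accounting system matched on email address. All field names and records here are made up for illustration.

```python
def normalize_email(email):
    """Lowercase and trim so the same person matches across systems."""
    return email.strip().lower()

# Hypothetical export from the CRM.
crm_records = [
    {"email": "Maria@Example.com ", "name": "Maria Lopez", "segment": "retail"},
    {"email": "james@example.com", "name": "James Chen", "segment": "wholesale"},
]

# Hypothetical export from the accounting system.
accounting_records = [
    {"email": "maria@example.com", "balance_due": 1250.00},
    {"email": "sarah@example.com", "balance_due": 0.00},
]

# Index the accounting data by normalized email, then attach it to CRM records.
balances = {normalize_email(r["email"]): r["balance_due"] for r in accounting_records}

merged = []
for record in crm_records:
    key = normalize_email(record["email"])
    merged.append({**record, "balance_due": balances.get(key)})  # None = no match

# Records that didn't match reveal the gaps between the two systems.
unmatched = [r["name"] for r in merged if r["balance_due"] is None]
```

Even a toy version like this earns its keep: the `unmatched` list tells you exactly where the two systems disagree about who your customers are, which is the real cost of scattered data.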

Flavor two: the data you need doesn't exist yet. You want to build a system that predicts customer churn, but you've never tracked the signals that indicate churn. You want to automate quality checks, but you've never recorded what a quality failure looks like in a structured way. This changes your scope and timeline dramatically, but it's also not a deal-breaker. It just means your AI project starts with a data collection phase, and you need to set expectations accordingly.

Flavor three, and this is the deal-breaker: you know the data is bad and you're pretending it isn't.

Duplicate records that have been accumulating for years. Fields that were supposed to be mandatory but half the entries are blank. Revenue numbers that mean different things depending on who entered them and when. Dates in three different formats. Categories that were renamed twice and nobody cleaned up the old entries.

This isn't a data quality problem you can fix with a cleanup sprint before the AI project starts. This is a cultural problem. It means your organization has been tolerating messy data because humans are good at working around it. The person reading the report knows that when the revenue field says zero, it actually means "not entered yet" rather than "no revenue." The person sending the invoice knows that the contact listed is outdated and uses the email they have in their own address book instead.

AI doesn't work around bad data. AI trusts it. And it acts on it with the same confidence whether the data is correct or garbage.

Being honest about your data means answering these questions without flinching. How much of our data is actually complete and current? What percentage of records would we trust enough to let software act on them automatically? If we pulled a random sample of a hundred records, how many would have errors that could lead to a wrong decision?
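You don't need a vendor to answer those questions. A few dozen lines of script against an export of your own records will do it. Here's a rough sketch, assuming records exported as a list of dicts with hypothetical field names (`email`, `revenue`, `last_updated`) — adapt the checks to whatever "complete" and "current" mean in your business.

```python
import random
from datetime import date, timedelta

def audit(records, required_fields, stale_after_days=365):
    """Count records that are complete, current, and unique by email."""
    today = date.today()
    seen_emails = set()
    complete = current = duplicates = 0
    for r in records:
        # Complete: every required field is filled in (0 counts as "not entered").
        if all(r.get(f) not in (None, "", 0) for f in required_fields):
            complete += 1
        # Current: updated within the staleness window.
        updated = r.get("last_updated")
        if updated and (today - updated).days <= stale_after_days:
            current += 1
        # Duplicate: same normalized email seen before.
        email = (r.get("email") or "").strip().lower()
        if email in seen_emails:
            duplicates += 1
        seen_emails.add(email)
    n = len(records)
    return {
        "complete_pct": round(100 * complete / n, 1),
        "current_pct": round(100 * current / n, 1),
        "duplicate_count": duplicates,
    }

def sample_for_review(records, k=100, seed=42):
    """Pull a random sample of records for a manual spot check."""
    return random.Random(seed).sample(records, min(k, len(records)))

# Tiny illustrative dataset: one good record, one stale duplicate, one empty.
example_records = [
    {"email": "a@x.com", "revenue": 100, "last_updated": date.today()},
    {"email": "A@x.com ", "revenue": 0,
     "last_updated": date.today() - timedelta(days=400)},
    {"email": "b@x.com", "revenue": None, "last_updated": None},
]
report = audit(example_records, ["email", "revenue"])
```

The percentages this produces are the honest answers to the questions above. And `sample_for_review` covers the hundred-record spot check: no software can tell you whether a revenue figure is *meaningful*, so a human still has to look.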

If those answers make you uncomfortable, that's useful information. It means your first project isn't an AI project. It's a data project. And that's okay. But pretending your data is better than it is will burn more money and more time than any other mistake you can make.

Nobody Owns It

The third thing that needs to be true: someone specific in the business owns the AI initiative. Not IT. Not "the team." A person with a name, a title, and the authority to make decisions.

This is the one that kills more projects than anything else, and it's the one that gets the least attention.

Here's what happens when nobody owns it. The AI project gets proposed in a leadership meeting. Everyone agrees it's a good idea. It gets assigned to IT because "it's a technology thing." IT starts building. Three weeks in, they need a business decision. Should we optimize for accuracy or speed in this particular workflow? Do we handle edge cases now or flag them for human review? What's the acceptable error rate?

IT can't answer those questions. Those aren't technical decisions. They're business decisions that require someone who understands the workflow, feels the pain of the current process, and has the authority to say "good enough, ship it" or "no, this needs to be better."

So IT sends an email to the department head. The department head is busy. A week passes. IT moves on to another part of the build. The department head finally responds with a question that reveals they don't fully understand what was being asked. Another round of emails. Another week.

Multiply this by every decision point in the project. That's how a three-month project becomes a nine-month project that eventually gets shelved.

The pattern is always the same. An AI initiative that belongs to everyone belongs to no one. And when it belongs to no one, nobody is accountable for its success, nobody has the authority to make the trade-off decisions that every project requires, and nobody notices when it starts drifting until it's too late to course-correct.

Every successful AI project I've worked on has a business owner. Not a technical lead. A business stakeholder who understands the problem domain, who will use the output, and who has enough authority to make quick decisions about scope, trade-offs, and priorities.

This person doesn't need to be technical. They need to care about the outcome. They need to be the person whose daily work gets better when the AI works and worse when it doesn't. That's the motivation that drives decisions forward instead of letting them sit in someone's inbox for a week.

If your AI initiative is an orphan bouncing between departments with no single person accountable for its success, stop everything else and solve that problem first. It's more important than the technology, the data, and the vendor selection combined.

The Real Starting Point

None of these three things are technical. None of them require AI expertise. None of them require a budget, a vendor, or a model.

They require honesty. About how organized your processes actually are. About how clean your data actually is. About whether anyone actually owns this initiative or whether it's just a good idea floating in the space between departments.

The companies that succeed with AI aren't the ones with the best models or the biggest budgets. They're the ones that did the unglamorous work first. They documented their processes. They cleaned their data. They put someone in charge who had both the context and the authority to drive it forward.

That work isn't exciting. It doesn't make for a good LinkedIn post. Nobody's going to invite you to speak at a conference because you fixed your CRM data and wrote down how your invoicing process actually works.

But it's the work that determines whether your AI project ships or stalls. And it's the work that has value even if you never build the AI at all. Better processes, cleaner data, and clearer ownership make your business better regardless of what technology you layer on top.

So before you start your next AI initiative, ask yourself three questions.

Is the process documented well enough that someone new could follow it?

Am I being honest about the state of our data?

Does someone specific own this, with the authority to make decisions and the motivation to see it through?

If all three answers are yes, you're ready. Build with confidence.

If any of them are no, you just found the most important work you can do before the AI conversation even starts.
