AI Agents · Data Governance · AI Governance · Small Business

Your Data Isn't Ready for AI Agents. Neither Is Anyone Else's.

MIT Technology Review reports that only 1 in 10 companies can actually scale AI agents. The bottleneck isn't the models. It's the data underneath them. Here's what that means if you run a small business.

Admin User
March 19, 2026
7 min read

MIT Technology Review published a piece this month that should be required reading for every business owner thinking about AI agents. The headline was about data infrastructure, but the real story is simpler and harder to hear.

Almost nobody's data is ready for what's coming next.

The numbers are stark. Nearly two-thirds of companies were experimenting with AI agents by late 2025. But only one in ten actually scaled them into production. Not because the models failed. Because the data underneath those models wasn't fit for purpose.

And if that's true for enterprises with dedicated data teams and seven-figure budgets, what does it mean for a small business running on spreadsheets, email threads, and a CRM that hasn't been cleaned since 2022?

It means we need to have an honest conversation.

What AI Agents Actually Need

An AI agent isn't a chatbot. It's not a tool you prompt and wait for a response. An AI agent is software that acts on your behalf. It reads your data, makes decisions based on that data, and executes actions in the real world. It sends emails. It updates records. It triggers workflows. It makes judgment calls that used to require a human.

For that to work, the agent needs to trust the data it's reading. And more importantly, you need to trust the data you're feeding it.

Here's what that requires in practice. Clean, consistent records. No duplicate contacts with three different spellings of the same company name. No revenue fields where some entries are monthly and some are annual and nobody documented which is which. No customer addresses that haven't been updated since the business moved two years ago.
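To make "clean, consistent records" concrete, here is a minimal sketch of how you might catch duplicate contacts hiding behind different spellings of the same company name. The field names and suffix list are illustrative assumptions, not a reference to any particular CRM's export format:

```python
import re

def normalize_company(name: str) -> str:
    """Lowercase, strip punctuation and common legal suffixes so
    'Acme Ltd.', 'ACME Limited', and 'acme ltd' compare equal."""
    name = re.sub(r"[^\w\s]", "", name.lower())
    for suffix in ("limited", "ltd", "llc", "inc", "co"):
        name = re.sub(rf"\b{suffix}\b", "", name)
    return " ".join(name.split())

def find_duplicates(contacts):
    """Group contact records whose normalized company names collide."""
    groups = {}
    for contact in contacts:
        key = normalize_company(contact["company"])
        groups.setdefault(key, []).append(contact)
    return {k: v for k, v in groups.items() if len(v) > 1}

contacts = [
    {"name": "Jo",  "company": "Acme Ltd."},
    {"name": "Sam", "company": "ACME Limited"},
    {"name": "Ana", "company": "Bright Studio"},
]
dupes = find_duplicates(contacts)  # both Acme spellings land in one group
```

A real cleanup would need fuzzy matching and a human reviewing each merge, but even this crude normalization surfaces most of the duplicates a hand-typed CRM accumulates.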

It requires data that has context. A number in a spreadsheet means nothing without knowing what it represents, when it was captured, and whether it's still current. Enterprise data teams call this "semantic context." Small businesses call it "knowing what the spreadsheet columns actually mean." Same problem, different vocabulary.

And it requires data that's connected. Your CRM knows about your contacts. Your invoicing tool knows about your payments. Your email knows about your conversations. But if those three systems don't talk to each other, your AI agent is working with one-third of the picture and making decisions based on incomplete information.

Most small businesses have data scattered across a dozen tools that were never designed to work together. That's the real infrastructure gap.

The Governance Problem Nobody Wants to Talk About

Here's where it gets uncomfortable. Even if your data is clean and connected, there's a harder question: who's in charge of it?

Data governance sounds like something only banks and hospitals need to worry about. It's not. The moment you let an AI agent access your business data, governance becomes your problem whether you planned for it or not.

More than three-quarters of companies admit their AI governance doesn't keep pace with how employees are actually using AI. Only one in five has a mature model for governing autonomous AI agents. And these are companies with compliance departments and chief data officers. Small businesses are flying completely blind.

Data governance means knowing the answers to questions like these. Who can access what data? What happens when a record is wrong and the AI acts on it anyway? How do you trace back a bad decision to the data that caused it? If a customer asks you to delete their information, can you actually find everywhere it lives?

If you can't answer those questions today, you're not ready for AI agents. And the risk isn't theoretical. When an AI agent sends a follow-up email to a customer who already cancelled because your CRM wasn't updated, that's a governance failure. When an agent quotes the wrong price because it pulled from an outdated spreadsheet, that's a governance failure. When an agent processes sensitive customer data through a tool you didn't realize was storing it overseas, that might be a legal failure.

The stakes go up the moment the AI starts acting instead of just answering.

The AI Governance Wave Is Coming

The EU AI Act becomes fully enforceable in August 2026. If you sell to European customers, or if you use AI tools built by companies that do, this affects you.

The Act requires documentation of AI systems, risk classification, transparency about when AI is making decisions, and data quality standards for anything classified as high-risk. Over half of organizations don't even have a complete inventory of what AI systems they're currently using. Without knowing what AI exists in your business, compliance planning is impossible.

For enterprises, the documentation burden alone is massive. Technical documentation, risk assessments, testing records, data governance materials. Most companies don't maintain this level of documentation for traditional software, let alone for AI agents that are making autonomous decisions across business processes.

For small businesses, the picture is even more challenging. You probably don't have a compliance team. You might not know which of your tools use AI under the hood. You almost certainly haven't classified your AI usage by risk level.

The good news is that the EU AI Act includes provisions for SME support and lighter requirements for lower-risk applications. The bad news is that "I didn't know I was using high-risk AI" is not a defense. And the line between low-risk and high-risk is thinner than most people realize.

If an AI agent is making decisions about who gets contacted, who gets quoted what price, or how customer complaints are prioritized, you're closer to the high-risk line than you think.

What Small Businesses Actually Need to Do

I'm not going to tell you to hire a chief data officer or build an enterprise data warehouse. That's not realistic and it's not necessary. But there are concrete steps that matter right now.

First, audit your data. Not a fancy audit. A simple one. Where does your business data actually live? How many tools and spreadsheets contain customer information? When was the last time someone checked whether the data in your CRM matches reality? You can't govern what you can't see.

Second, pick a source of truth. For every important category of data (contacts, deals, invoices, projects), there should be one system that's authoritative. Everything else is a copy. When there's a conflict, the source of truth wins. This single decision eliminates half the data quality problems most small businesses have.
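The source-of-truth rule can be expressed in a few lines of code. The system names and fields below are hypothetical placeholders for whatever tools you actually run:

```python
# Hypothetical setup: the CRM is designated authoritative for contact
# data; the invoicing tool holds a copy that may have drifted.
SOURCE_OF_TRUTH = "crm"

def resolve(field: str, records: dict) -> str:
    """When two systems disagree about a field, the designated
    source of truth wins. No debate, no case-by-case judgment."""
    return records[SOURCE_OF_TRUTH][field]

records = {
    "crm":       {"email": "jo@newdomain.com"},
    "invoicing": {"email": "jo@olddomain.com"},
}
current = resolve("email", records)  # the CRM value wins
```

The point isn't the code, it's the policy: conflicts get resolved by a rule you wrote down once, not by whoever happens to notice the mismatch.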

Third, clean before you automate. Every AI agent you deploy will amplify the state of your data. Good data becomes good decisions at scale. Bad data becomes bad decisions at scale, faster than a human would make them, and harder to catch because you've stopped checking every output.

Fourth, write down the rules. Not a hundred-page policy document. A one-page document that answers: what data can the AI access? What can it do with that data? What requires human approval before action? What happens when something goes wrong? That one page is your governance framework. It doesn't need to be perfect. It needs to exist.
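That one-page rulebook can even be encoded directly, so the agent's actions are checked against it mechanically. The rule contents below are illustrative assumptions, not a recommended policy:

```python
# A minimal sketch of the one-page governance framework as code:
# what the agent may read, what it may do alone, and what needs a human.
RULES = {
    "read":  {"contacts", "invoices"},           # data the agent may access
    "auto":  {"draft_email", "update_note"},     # actions it may take alone
    "human": {"send_email", "issue_refund"},     # actions needing approval
}

def authorize(action: str, approved_by_human: bool = False) -> bool:
    """Allow an action only if the written rules permit it."""
    if action in RULES["auto"]:
        return True
    if action in RULES["human"]:
        return approved_by_human
    return False  # anything not written down is denied by default

authorize("draft_email")             # allowed autonomously
authorize("send_email")              # blocked until a human signs off
authorize("send_email", True)        # allowed with approval
authorize("delete_all_records")      # denied: not in the rules at all
```

The deny-by-default at the bottom is the part that matters most: an action nobody thought to write a rule for should stop and ask, not proceed.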

Fifth, know what AI you're using. Make a list. Your email tool might have AI features. Your CRM might use AI for lead scoring. Your scheduling tool might use AI for optimization. If you can't list every AI system touching your business data, you have a governance gap.

And sixth, start small. The MIT Technology Review piece makes this point clearly. Companies that try to scale AI agents across the entire business before getting the data foundation right are the ones that fail. Start with one process. Get the data clean for that one process. Let the agent prove itself in a controlled environment. Then expand.

The Real Opportunity

Here's the thing that the doomsday framing misses. This is actually good news for small businesses that take it seriously.

If only one in ten companies can scale AI agents because of data problems, then the businesses that solve their data problems first have a massive advantage. Not a theoretical future advantage. A right-now advantage. Because while your competitors are still running pilots that never graduate, you'll have agents that actually work.

Small businesses have an advantage that enterprises don't. Your data is simpler. You have fewer systems. You have fewer stakeholders who need to agree on definitions and policies. What takes an enterprise two years of committee meetings, a small business can do in a weekend.

Clean up your CRM. Connect your tools. Write down the rules. That's not a million-dollar data infrastructure project. That's a Saturday.

The companies that will win with AI agents aren't the ones with the biggest budgets or the fanciest models. They're the ones with clean data, clear rules, and the discipline to get the foundation right before they start building on top of it.

The models are ready. The agents are ready. The question is whether your data is ready to be trusted by software that's going to act on it without asking you first.

Get the foundation right. Everything else follows.
