These questions are for board meetings, not pitch meetings. They are designed for investors who already have portfolio companies deploying AI and want to know whether that deployment is an asset or a liability.
1. "Show me a tool your team built with AI, not bought."
Good answer: a live demo. The CEO opens a browser, navigates to an internal application, and shows you something the team built. They can explain what it does and how it works.
Bad answer: a list of SaaS subscriptions. "We use Jasper for content, Copilot for code, and Notion AI for documentation." That is consumption, not capability.
2. "Who owns your AI governance policy?"
Good answer: a name. "Sarah Chen, our Head of Operations. Here is the document. It was last updated six weeks ago."
Bad answer: "Our CTO handles that" with no written document to show. If there is no document, there is no policy; there is only one person's unwritten opinion.
3. "What happens to your workflows if your primary AI provider changes pricing by 3x?"
Good answer: "We have thought about this. Our core tools are built on open-source models and our own infrastructure. For the vendor tools we use, we have identified alternatives and estimated migration time."
Bad answer: silence, followed by "that probably will not happen." It has already happened. Multiple times.
4. "How do you know when an AI output is wrong?"
Good answer: specific review processes. "Every AI-generated customer communication is reviewed by a team lead before sending. Financial reports are verified against source data. We have caught three errors in the last quarter through this process."
Bad answer: "We trust the model" or "the AI is very accurate." Models are wrong routinely. The question is whether anyone is catching it.
5. "What AI training has your non-technical leadership received?"
Good answer: documented training with specific outcomes. "Our VP of Sales completed a two-day program and built a proposal generator for the team. Our Head of Operations built a workflow tracker."
Bad answer: "They have all used ChatGPT" or "we sent everyone a link to a webinar." Using ChatGPT is not training. Watching a webinar is not capability.
6. "What data enters your AI tools and where does it go?"
Good answer: a data flow map. "Customer data enters our CRM AI for lead scoring. It is processed locally and not sent to external APIs. Our support team uses Claude for ticket drafting, which is covered under our enterprise agreement with data deletion policies."
Bad answer: "I would have to check with the team." If the CEO does not know what data enters the AI tools, nobody is governing it.
7. "Can someone on your team modify the AI tools you depend on?"
Good answer: "Yes. Our operations team was trained to maintain the tools they use. When we needed to add a new field to the reporting dashboard last month, they did it themselves."
Bad answer: "Our vendor handles updates" or "we would need to hire a developer." If the team cannot modify the tools they depend on, the tools are dependencies, not capabilities.
Why these questions matter
None of these questions require technical expertise to ask. They require caring about whether AI use at your portfolio companies is governed, capable, and defensible.
Companies that answer well are structurally more valuable. Companies that do not are carrying hidden risk.
Talk to us about portfolio AI readiness — we run AI readiness assessments for portfolio companies. The output is a scorecard your board can act on.