The healthcare COOs who succeed with AI follow a specific sequence. The ones who struggle skip steps. Here is the sequence.
Days 1-30: Governance audit
Before anything else, answer four questions.
What data do you actually have and where does it live? Most healthcare organizations cannot produce a complete inventory of their data systems in under a week. Patient records in the EHR. Billing data in a separate system. Scheduling data in a third. Staff records in HR software. Research data in department-specific databases. The data map comes first because every AI tool you deploy will interact with this data.
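A data inventory does not need special software to get started; a structured list is enough to make the first version concrete. Here is a minimal sketch in Python, where every system name, location, and owner is illustrative, not a recommendation:

```python
from dataclasses import dataclass

@dataclass
class DataSystem:
    name: str           # system name (illustrative)
    location: str       # where the data lives
    contains_phi: bool  # does it hold protected health information?
    owner: str          # who is accountable for it

# Example inventory -- placeholder entries mirroring the systems named above
inventory = [
    DataSystem("EHR", "vendor-hosted", contains_phi=True, owner="Clinical IT"),
    DataSystem("Billing", "on-prem SQL", contains_phi=True, owner="Revenue Cycle"),
    DataSystem("Scheduling", "SaaS", contains_phi=True, owner="Operations"),
    DataSystem("HR records", "SaaS", contains_phi=False, owner="HR"),
]

# Any AI tool you later evaluate gets checked against this list first
phi_systems = [s.name for s in inventory if s.contains_phi]
print(phi_systems)  # -> ['EHR', 'Billing', 'Scheduling']
```

Even in this toy form, the inventory answers the two questions that matter for every AI deployment decision: which systems hold PHI, and who owns each one.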
What AI tools are already in use that nobody approved? Survey your department heads. Ask directly: "What AI tools are you or your team using right now?" You will find ChatGPT in marketing, AI transcription in clinical notes, AI scheduling assistants, and probably three or four others. None of them were formally approved. None of them have business associate agreements (BAAs) in place.
What is your current HIPAA exposure if an AI tool mishandles PHI? Take the tools you just discovered and ask: "What patient data enters this tool? Where does that data go? Is there a BAA?" If the answer to the third question is no for any tool processing PHI, you have an active compliance gap.
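The gap assessment itself is a simple filter over the tool inventory: any tool that processes PHI without a BAA is a gap. A sketch of that logic, with tool names taken from the examples above and the PHI/BAA flags assumed for illustration:

```python
# Each entry: (tool name, processes_phi, baa_in_place) -- all values illustrative
tools = [
    ("AI transcription (clinical notes)", True, False),
    ("ChatGPT (marketing)", False, False),
    ("AI scheduling assistant", True, True),
]

# A compliance gap is any tool that touches PHI with no BAA in place
gaps = [name for name, phi, baa in tools if phi and not baa]
for name in gaps:
    print(f"COMPLIANCE GAP: {name} processes PHI with no BAA")
```

The point is not the code but the discipline: the question "PHI in, BAA missing?" gets asked of every tool on the list, not just the ones that feel risky.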
What is your written AI policy? If it does not exist, write one. It does not need to be long. One page that says: what tools are approved, what data can enter them, who approves new tools, and what happens when something goes wrong.
Output of the first 30 days: a data inventory, a tool inventory, a gap assessment, and a written policy. This is governance infrastructure. Everything else builds on it.
Days 31-60: Team capability baseline
Now that governance is in place, find out who can do what.
Which department heads can build a simple workflow tool if given training and time? Which ones cannot? The answer is not correlated with title or seniority. It is correlated with willingness to learn and comfort with directing AI tools.
The leaders who cannot build cannot effectively govern. They cannot evaluate whether an AI tool's output is correct. They cannot assess whether a vendor's claims are realistic. They cannot make informed decisions about AI deployment in their departments. That gap is the training priority.
The training plan should be scoped to role, not title. A department head does not need to become a developer. They need to be able to: describe a workflow clearly enough to direct an AI build, evaluate whether the output matches the description, and identify when something looks wrong.
Output of the second 30 days: a capability assessment by department and a role-specific training plan.
Days 61-90: Selective tool deployment
Now you deploy. But selectively.
Only in departments that have governance in place. Only where the department head has completed training. Only with review processes defined before the tool goes live.
Start with one or two tools. Not ten pilots across the organization. A single scheduling optimization tool in one department. A single reporting dashboard for one compliance workflow. Measure the outcome. Document the process. Learn from it before expanding.
Output of the third 30 days: one or two deployed tools with governance, training, and review processes in place. Not ten pilots with no follow-through.
Why sequence matters
The COOs who deploy first and govern later spend months cleaning up. The AI tool in radiology that was processing images without a validated workflow. The chatbot in patient services that was storing conversation logs with PHI. The reporting tool that three departments adopted independently with three different data handling practices.
The COOs who govern first, train second, and deploy third move slower in month one and faster in month six. By the end of the year, they have a governed, capable organization. The deploy-first COOs have a list of incidents.