A solo practitioner in family law reviews roughly 40 contracts a month. Prenuptial agreements, separation agreements, custody arrangements, property settlements. Each one is 15 to 60 pages. Each one contains clauses that could cost her client six figures if missed.
She was spending 90 minutes per contract on first review. Not drafting. Not advising. Just reading, highlighting, and noting the clauses that needed attention. With 40 contracts a month, that is 60 hours of reading before any legal work begins.
She asked if AI could help. Not replace her review. Help her find what matters faster.
What the tool does
The tool reads a contract and produces a structured summary organized by risk category. It identifies:
Unusual clauses: anything that deviates from standard language for that contract type. A non-compete buried in a custody agreement. An arbitration clause with a specific venue requirement. A waiver of rights that is broader than typical.
Financial exposure: any clause that creates an uncapped obligation, a sliding scale payment, a penalty for non-compliance, or a conditional trigger that changes the financial terms. The tool highlights these with the specific dollar amounts or percentage thresholds mentioned.
Missing standard protections: clauses that are typically present in this type of agreement but are absent. A prenuptial agreement without a sunset clause. A property settlement without a dispute resolution mechanism. The absence of something is harder to catch than the presence of something unusual.
Defined terms: every term that is defined in the agreement, cross-referenced with where it is used, flagged if a defined term is used inconsistently or if an important term is used but never defined.
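The four categories above suggest a simple structured output. The source does not describe the tool's actual schema, but a minimal sketch of what each flagged item might carry could look like this (all names are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class Flag:
    category: str    # "unusual_clause", "financial_exposure", "missing_protection", "defined_term"
    clause_ref: str  # where in the document, e.g. "Section 7.2, page 47"
    excerpt: str     # the flagged language, quoted verbatim
    note: str        # descriptive context only -- what was matched, never an evaluation

@dataclass
class ContractSummary:
    contract_type: str
    flags: list[Flag] = field(default_factory=list)

    def by_category(self, category: str) -> list[Flag]:
        """Group flags so the lawyer can read one risk category at a time."""
        return [f for f in self.flags if f.category == category]
```

The point of a structure like this is that every flag stays traceable: each one points back to a specific location and a verbatim excerpt, which is what makes the summary a map rather than a paraphrase.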
What the tool does NOT do
It does not provide legal advice. It does not say whether a clause is good or bad. It does not recommend changes. It does not draft alternative language.
This distinction matters legally and practically. The tool is a research assistant, not a junior associate. It reads and organizes. The lawyer interprets and advises.
The CLAUDE.md file that governs this tool contains an explicit instruction: "Never state that a clause is problematic, unfair, or should be changed. Identify and categorize. Do not evaluate." This rule exists because the moment the tool evaluates, it crosses from assistance into practice, and that creates liability the lawyer did not sign up for.
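The rule lives in the prompt, but a cautious builder might back it with a post-processing check that scans output for evaluative language before it reaches the lawyer. This is a hypothetical safeguard, not something the source describes; the word list is illustrative:

```python
# Terms that signal evaluation rather than identification.
# Illustrative list only -- a real deployment would tune this.
EVALUATIVE_TERMS = [
    "problematic", "unfair", "should be changed",
    "we recommend", "we advise", "in our opinion",
]

def contains_evaluation(summary_text: str) -> bool:
    """Return True if the summary crosses from identifying into evaluating."""
    lowered = summary_text.lower()
    return any(term in lowered for term in EVALUATIVE_TERMS)
```

A check like this is crude, but it catches the obvious failure mode: a summary that drifts from "Section 4 contains an arbitration clause" into "this arbitration clause is unfair."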
The time impact
First review dropped from 90 minutes to 25 minutes. The lawyer still reads the entire contract. But instead of reading blind, she reads with a map. She knows before she starts where the unusual clauses are, what financial triggers exist, and what standard protections are missing.
The 25 minutes is the full review. She reads the AI summary in 3 minutes, then reads the contract with targeted attention. The total time is less than a third of what it was, and she catches more issues because the tool surfaces things that a tired lawyer at 6 PM on a Friday might miss.
What this required from the lawyer
She had to define what "unusual" means for each contract type she handles. The tool does not know that a non-compete in a custody agreement is unusual unless you tell it that non-competes are not standard in custody agreements.
She spent two sessions building what she calls her "pattern library" — a set of rules for each contract type that defines what is expected, what is optional, and what is a red flag. This pattern library is the real asset. The AI tool is just the engine that applies it at scale.
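A pattern library of this shape can be sketched as plain data: per contract type, what is expected, what is optional, and what is a red flag. The entries below are invented for illustration; the lawyer's actual rules are her own:

```python
# Hypothetical pattern-library entries -- clause names are illustrative.
PATTERN_LIBRARY = {
    "prenuptial_agreement": {
        "expected": ["sunset clause", "full financial disclosure",
                     "independent counsel acknowledgment"],
        "optional": ["pet custody", "social media clause"],
        "red_flags": ["uncapped indemnification", "unilateral amendment right"],
    },
    "custody_agreement": {
        "expected": ["physical custody schedule", "dispute resolution mechanism"],
        "optional": ["relocation notice period"],
        "red_flags": ["non-compete", "broad waiver of rights"],
    },
}

def missing_protections(contract_type: str, clauses_found: set[str]) -> list[str]:
    """Return the expected clauses that are absent from this document --
    the hardest category for a human reader to catch."""
    rules = PATTERN_LIBRARY[contract_type]
    return [c for c in rules["expected"] if c not in clauses_found]
```

Note that the absence check falls straight out of the data: once "expected" is written down, a missing sunset clause is a set difference, not an act of vigilance.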
This is the part most people skip. They want the AI to "just know" what matters. It does not. It knows what you teach it. The lawyers who get the most value from AI tools are the ones who invest the time to encode their expertise into the system.
The ethical framework
Every summary the tool generates includes a header: "AI-assisted document analysis — not legal advice — attorney review required." This is not just a disclaimer. It is built into the template so it cannot be removed.
The tool logs every document it processes with a timestamp, the model version used, and the rules applied. If a client ever questions whether something was missed, there is a complete audit trail showing exactly what the tool flagged and what the lawyer decided to do about it.
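An audit record like the one described needs only a few fields. A minimal sketch, assuming an append-only JSON Lines file (the field names and format are assumptions, not the tool's actual log):

```python
import json
from datetime import datetime, timezone

def log_review(path: str, document_id: str, model_version: str,
               rules_applied: list[str], flags_found: int) -> dict:
    """Append one audit record per processed document (hypothetical format)."""
    record = {
        "document_id": document_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "rules_applied": rules_applied,
        "flags_found": flags_found,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Append-only logging is the natural choice here: the value of the trail is that it was written at the time of review and never edited after the fact.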
The lawyer does not share the tool's output with clients. She uses it internally to guide her own review. The advice the client receives comes from the lawyer, informed by the tool, but filtered through professional judgment.
The competitive advantage
This lawyer now handles 40 contracts a month in the time it used to take her to handle 25. She did not hire a paralegal. She did not raise her rates. She increased her capacity by 60% and improved her catch rate at the same time.
Her competitors are still reading contracts cold. They are competent lawyers who will eventually catch everything the tool catches. But "eventually" takes 90 minutes instead of 25, and "eventually" sometimes means the Friday afternoon review misses the unusual arbitration venue clause on page 47.
The tool does not make the lawyer unnecessary. It makes the lawyer faster, more thorough, and more dangerous in negotiations because she walks into every meeting having already found the three clauses the other side was hoping she would miss.
The broader pattern
Every profession that involves reviewing documents — contracts, medical records, financial statements, compliance filings, insurance claims — has this same opportunity. The pattern is always the same: define what you are looking for, build a tool that finds it, review with a map instead of reading blind.
The lawyers who build these tools first will handle more clients, catch more issues, and deliver better outcomes. The lawyers who wait will compete against lawyers with a 60% capacity advantage who never miss a clause.