Six months ago I stopped reading AI news. No more "GPT-5 is coming" articles. No more "this AI can now do X" threads. No more benchmark comparisons. No more industry analyst predictions about which company will win the AI race.
I did not stop because the news was wrong. I stopped because it was irrelevant to the work I do every day.
The consumption trap
For two years, I read everything. Every model release. Every benchmark paper. Every think piece about artificial general intelligence. Every prediction about which jobs would be automated first. I could tell you the difference between Llama 3 and Mistral Large on the MMLU benchmark. I could explain the architectural differences between mixture-of-experts and dense transformer models. I knew which companies were hiring and which were laying off their AI teams.
None of this knowledge helped me build anything.
I was consuming information about AI without producing anything with AI. I was an expert spectator. I could discuss AI at a dinner party. I could not build a tool that solved a problem.
The consumption trap works like this: every article gives you the feeling of learning without the substance of learning. You finish reading and you know a new fact. But facts about AI capability do not translate into ability to use AI. Knowing that a model scores 89% on a reasoning benchmark does not help you build a compliance dashboard.
What I do instead
I build something every day. Not a big project. Not a production system. Something small that solves a specific problem or tests a specific idea.
Monday I might build a tool that summarizes the previous week's git commits into a status report. Tuesday I might build a script that compares two CSV files and highlights the differences in a readable format. Wednesday I might rebuild something I built last month to see if I can do it better.
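To make the scale of these builds concrete, here is a sketch of the Tuesday-style CSV comparison, in Python. It assumes both files share a key column (called "id" here); the column name and file paths are illustrative, not from any specific project.

```python
import csv

def load_rows(path, key="id"):
    """Read a CSV into a dict of rows keyed by the given column."""
    with open(path, newline="") as f:
        return {row[key]: row for row in csv.DictReader(f)}

def diff_csv(old_path, new_path, key="id"):
    """Summarize added, removed, and changed rows between two CSVs."""
    old, new = load_rows(old_path, key), load_rows(new_path, key)
    changed = {
        k: sorted(c for c in new[k] if old[k].get(c) != new[k][c])
        for k in old.keys() & new.keys()
        if any(old[k].get(c) != new[k][c] for c in new[k])
    }
    return {
        "added": sorted(new.keys() - old.keys()),
        "removed": sorted(old.keys() - new.keys()),
        "changed": changed,  # key -> list of columns that differ
    }
```

That is the whole tool: two functions, one afternoon coffee's worth of work, and it answers "what changed between last week's export and this week's?" in a way a side-by-side eyeball check never will.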
Each build takes 30 minutes to 2 hours. Each one teaches me something specific about how AI tools work in practice. Not in theory. Not according to a benchmark. In the actual experience of directing an AI to produce something useful.
In six months of daily building, I have learned more about what AI can and cannot do than in two years of reading about what AI can and cannot do.
What daily building teaches you
It teaches you where AI is reliable and where it is not. After building 150 small projects, I have a detailed mental map of what kinds of tasks Claude Code handles well on the first attempt and what kinds require multiple iterations. That map is personal — it reflects my communication style, my domain knowledge, and the types of problems I encounter. No article could give me that map because it is built from my experience, not someone else's.
It teaches you how to communicate with AI. My instructions on day 1 were vague and produced vague results. My instructions on day 150 are specific, structured, and produce accurate results on the first or second attempt. This skill improved through practice, not through reading about prompt engineering techniques.
It teaches you what is worth building and what is not. Some problems are genuinely better solved with a simple script than with an AI tool. Some problems are not worth solving at all — the manual process takes 5 minutes and an automated one would take an hour to build and maintain. Daily building gives you the judgment to tell the difference.
It teaches you speed. My first small project took three hours. Now, most take under an hour. The speed comes from pattern recognition — I have seen enough similar problems that I know where to start, what to specify, and what to let the AI decide. This is the same skill progression that happens in any craft. The first table a woodworker builds takes a week. The fiftieth takes a day. The wood did not get easier. The woodworker got better.
The news you actually need
I do not ignore all AI information. I pay attention to three things:
Model releases that I can use. When Claude gets a new capability that changes my workflow, I notice because I use Claude every day. I do not need a news article to tell me the model improved. I experience the improvement in my work.
Breaking changes in tools I use. If Claude Code changes its interface, its command structure, or its behavior on a type of task I rely on, that matters. I follow the changelog and the release notes for the tools in my stack. Those are operational documents, not news.
Regulatory changes that affect my clients. The EU AI Act, HIPAA guidance on AI in healthcare, SEC statements on AI in financial services — these change what I can build and how I build it. I read the primary sources, not the news articles about the primary sources.
Everything else — the speculation, the benchmarks, the corporate posturing, the investor narratives — is noise. It is interesting the way sports commentary is interesting. It does not make you a better player.
The social media problem
The hardest part of quitting AI news was social media. My Twitter feed was 80% AI content. My LinkedIn was full of AI thought leaders posting AI hot takes. Every scroll showed me someone's opinion about the future of AI.
I did not delete the accounts. I unfollowed the AI commentators and followed people who build things. Developers who post about what they built today. Business owners who share how they solved a specific problem. Teachers who explain concepts through examples, not predictions.
My feed went from "AI will transform everything" to "here is a tool I built that saves my team two hours a week." The second version is useful. The first version is entertainment.
The uncomfortable truth
If you are reading AI news every day and not building with AI every day, you are preparing for a future that does not require preparation. It requires practice.
You do not get better at AI by reading about AI. You get better at AI by using AI. Every day. On real problems. Making mistakes. Fixing them. Building something that works. Building something that does not work and understanding why.
The person who has built 100 small AI tools knows more about AI's practical capabilities than the person who has read 1,000 articles about AI's theoretical capabilities. That gap is not closing. It is widening every day, because the builder learns from doing and the reader learns from someone else's description of doing.
Start building. Today. Something small. Something that solves a problem you have right now. It does not matter if it is trivial. The first build is not about the output. It is about starting the practice.