AI Apocalypse Burnout, and Why You’re Not as Behind as You Think
I’ll be honest. A few months ago, I started dreading opening LinkedIn.
Not because of the job market. Not because of the economy. Because every single day my feed was, and still is, drowning in AI influencer content designed to make me feel like I was failing.
New model drops. New tool launches. Copilot embedded in everything. OpenClaw (formerly Moltbot, formerly Clawdbot) exploding to 145,000 GitHub stars overnight, with breathless posts about desktop agents that can “run entire businesses solo.” And in between, a steady drip of AI-generated articles about how Sarah, a former PA, now makes £15k a month just from prompting.
I’m a Technology & Security Director with over 20 years in legal IT. I hold certifications across cybersecurity and AI. I lead AI strategy at my firm. I’m doing a Level 7 AI & Data apprenticeship. I hold multiple AI subscriptions. I set up my own AI lab at home to experiment with local models on top of my day job. And I still felt like I was falling behind.
That feeling isn’t a personal failing. It’s by design.
The AI influencer content cycle runs on urgency. “The window closes fast.” “Before it’s too late.” “By end of 2026 this will be table stakes.” Every week brings a new model that will apparently render last week’s skills obsolete, a new agent framework that changes everything, and a new story about someone who went from zero to six figures in 90 days with nothing but prompts and a Zapier account.
Most of it is noise. A lot of it is fiction. Almost all of it is selling something.
Then there are the CEOs of AI companies, the ones with the most to gain from us believing all the hype, confidently predicting that all knowledge work will be automated within a few years. That narrative is everywhere. What gets far less airtime is what the research actually shows.
The Remote Labor Index recently tested leading AI models on real paid freelance work: product design, game development, data analysis, scientific writing. The kind of work we’re told AI will replace imminently. The best-performing model failed over 96% of tasks. Not because of wrong answers, but because of practical delivery failures: corrupt files, incomplete projects, not following the brief. The last mile of professional work, the bit clients actually pay for, is exactly where models consistently fall apart.
A separate study by Scale AI and the Center for AI Safety tested models on real-world freelance projects. The best performer had just 2.5% of its work judged acceptable by a panel of 40 independent reviewers. Another leading model managed 0.8%.
These are the same models scoring near the ceiling on the benchmarks AI companies put in their press releases.
MIT research suggests around 95% of AI projects aren’t delivering measurable returns. Are the models still improving? Of course. However, the gap between what’s being promised and what’s actually working in real organisations is vast, and that gap never goes viral.
To be clear: I’m not saying AI isn’t a transformative technology, it is. I use multiple models every day, for work, for study, and in personal AI projects. For data analysis, interacting with complex datasets, note-taking, brainstorming, document analysis and comparison, content production, certain types of coding, and automation, AI is a genuinely incredible tool. I’ve seen and used many excellent products built specifically to help professionals accelerate their work and unlock insights that would otherwise take weeks. That’s not in question. What I’m saying is that how we use it matters enormously, and right now, the conversation around it is badly out of shape.
The flood of solo AI agency millionaire stories deserves a direct response, because these stories follow an identical template and they’re everywhere.
Ask yourself: what serious company is going to hand critical business workflows to a one-person operation with no history, no professional indemnity insurance, no business continuity plan, and no ability to pass a vendor due-diligence questionnaire? None, at least none with a procurement function and a legal team. The people supposedly paying £3–10k a month for a stranger’s prompting services simply don’t exist at that scale. The real business model in these articles is almost always the article itself: building an audience to eventually sell a course.
Here’s what concerns me more than the hype itself. I see it in conversations with colleagues, in professional communities, and in the wider discourse.
Experienced professionals who are genuinely skilled at their jobs feel worried, threatened and inadequate. Technologists who have spent careers building real expertise wonder if any of it counts anymore. And children (this is the part that I think should stop us cold) are starting to question why they should bother learning anything at all if AI will do it for them.
That’s the real cost of the hype cycle. The corrosion of confidence, and in young people especially, the motivation to develop deep knowledge in the first place.
There’s something deeper at stake that I don’t think we talk about enough.
Building genuine expertise isn’t just professionally essential, it’s integral to what it means to be human. The years spent mastering a craft, the hard-won judgment that comes from failure and iteration, the satisfaction of producing something truly excellent, these aren’t inefficiencies waiting to be automated. They’re how we grow. They’re how we find meaning.
AI can generate high-quality text, music, video, and images. But there is a profound difference between generated and crafted. When a musician finds a note that says what words can’t, when a writer chooses the perfect word, that is something different in kind, not just degree. It carries the weight of a human mind and human experience. An AI can produce an output that resembles it, but it cannot produce the thing itself.
The people creating real value are applying AI to domains where they already have deep expertise. A solicitor who understands contract law and uses AI to accelerate document review. An engineer who knows the codebase and uses it to cut down repetitive code writing. A CISO who understands risk and uses it to draft policy faster. Expertise comes first. AI amplifies it.
The models and the tools are improving. The technology is real. But by the industry’s own research, they still fail at over 96% of real professional work. The hype wants you anxious, distracted, and buying courses. I think the better response is to keep learning and keep building: your knowledge, your judgment, your craft.
We should be deeply wary of a culture that teaches people, especially young people, that learning is pointless because AI will do it for them.
Next week I’ll be looking at what the AI hype cycle is doing to the next generation, and why that conversation is the most important one we’re not having.


