The AI Pixie Dust Problem & What the Hype Cycle Is Doing to Our Minds
Last week I promised to look at what the hype cycle is doing to the next generation. I’m going to make good on that, but I want to start with something that genuinely unsettled me: an online AI skills course I attended recently.
I had seen it advertised multiple times online: a free weekend course to learn AI. I was curious. The headline use cases were ways to make money with AI tools without needing to understand any subject. They showed an automated workflow that ingests viral articles, then generates and posts AI-produced videos on the same subjects in the hope that they go viral. AI feeding on AI content to produce more AI slop, at scale, automatically. Nobody in the room seemed to find that troubling. Then came developers proudly showing code AI had written for them, and just as proudly declaring they didn’t understand any of it and didn’t need to.
I use AI every day. Multiple models, multiple contexts. This is not an anti-AI argument, or even an anti-AI-code argument; AI is a phenomenal coding tool. But code you don’t understand is a liability you can’t assess, a risk you can’t manage, and a skill you’ll never build. That’s not acceleration; that’s abdication. And if we’re doing it as professionals, what exactly are we modelling for the next generation?
There’s a substantial and growing body of peer-reviewed research showing that heavy, unstructured AI use measurably erodes critical thinking, and that younger users are the most affected.
A 2025 study of 666 participants found a significant negative correlation between frequent AI use and critical thinking ability. The mechanism is cognitive offloading: delegating mental work to an external system. Younger participants showed the highest AI dependence and the lowest critical thinking scores. A 2025 MIT preprint found preliminary evidence of what its authors termed “cognitive debt”: decreased neural engagement over time in heavy AI users, and a reduced capacity to generate original ideas independently. While the researchers stress these are early findings, the direction of travel is consistent with the broader published literature.
The brain, like a muscle, atrophies without use. Unlike a muscle, you don’t always notice it happening.
What makes this particularly concerning for children and adolescents is that the developmental window is real. Adolescence is when executive functions such as planning, analytical reasoning, and self-regulation are being formed. What happens during that window has lasting consequences. Critical thinking is not innate; it has to be built through effort and struggle. Remove the productive struggle and you remove the learning. You’re left with something that looks like knowledge from the outside and is hollow on the inside.
This is not a new pattern, either. When laptops arrived in classrooms, our understanding of how to use them well never developed at the same pace as the devices were distributed. EdTech has been here before. AI is just faster, more capable, and therefore more concerning when used without thought.
The same dynamic plays out with entry-level workers, and the consequences are structural.
Entry-level work has historically been the ladder. It’s where people learn professional judgement, develop subject knowledge, and build the cognitive architecture that makes them valuable. The junior lawyer reading a thousand contracts before drafting one. The analyst who spent months building reports before they understood what the numbers actually meant. The graduate sitting in meetings absorbing how decisions got made. None of that felt like training at the time. It was.
Deloitte’s 2025 Human Capital Trends found that two-thirds of hiring managers believe entry-level hires are already under-prepared. At the same time, AI is automating exactly those entry-level tasks: the drafting, the research summaries, the note-taking, the first-pass analysis that have always been how organisations quietly built junior talent. Remove that scaffolding and you don’t just cut jobs. You pull up the ladder.
I’m not saying we should stop. We won’t, and we shouldn’t have to. The efficiency gains are real, the cost savings are genuine, and automating low-value repetitive work is an obvious win. But here’s the question nobody seems to be asking: if we automate the work that used to teach people, what replaces the teaching?
Because the learning didn’t come from the tasks themselves. It came from the friction. The moment a junior analyst got a number wrong and had to explain it to a partner. The first time a trainee’s draft came back covered in track changes. The slow accumulation of judgement that only comes from doing things imperfectly, under real conditions. That’s what produced capable professionals. Organisations need to think long term about this, asking not just “what can AI do?” but “what do we now need to do deliberately that used to happen by accident?” That has to become an intentional act, built into how we structure work, how we mentor, and how we design roles. Not assumed. Designed.
This is what I’d call the cognitive mobility problem. We talk endlessly about social mobility, but I think the AI era is quietly redefining it: your ability to move through an economy is increasingly determined by how well you can think, and by whether you use AI as an extension of that thinking or a substitute for it. The IMF has flagged this explicitly: AI doesn’t equalise skill requirements, it amplifies existing differences in cognitive approach. The divide isn’t about who has access to the tools. It’s about what you bring to them.
The calculator analogy is overused, but it’s usually invoked wrongly. We didn’t stop teaching algebra when calculators arrived. We offloaded the arithmetic so the human mind could go further into the maths. That’s the model here. Not “here’s a tool that does the thinking, so you don’t need to learn how it works.” The pixie dust isn’t the problem. Believing it does the work for you is.
Used well, AI is genuinely extraordinary. It can compress years of research into hours, surface patterns no human would find, and give capable people an almost unfair advantage. The key word there is capable. The technology amplifies what you bring to it. Which means the most important investment any of us can make, for ourselves, for the people we lead, and for the next generation, is still the same one it’s always been. Learn deeply. Think carefully. Build judgement that’s actually yours. AI will take care of the rest.
Sources & Further Reading
Gerlich, M. (2025) — AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking mdpi.com/2075-4698/15/1/6
Gerlich, M. (2025) — AI and the Rise of Societal Bifurcation: Cognitive Dependency, Inequality and Democratic Pressure mdpi.com/2075-4698/16/3/82
Kosmyna, N. et al. (2025) — Your Brain on ChatGPT: Accumulation of Cognitive Debt (MIT Media Lab preprint, not yet peer-reviewed) arxiv.org/abs/2506.08872
Brookings Institution (2026) — AI’s Future for Students Is in Our Hands brookings.edu/articles/ais-future-for-students-is-in-our-hands/
Jose et al. (2025) — The Cognitive Paradox of AI in Education: Between Enhancement and Erosion pmc.ncbi.nlm.nih.gov/articles/PMC12036037/
Deloitte (2025) — AI, Demographic Shifts, and Agility: Preparing for the Next Workforce Evolution deloitte.com/us/en/insights/topics/talent/strategies-for-workforce-evolution.html
IMF (2024) — Gen-AI: Artificial Intelligence and the Future of Work imf.org/en/publications/staff-discussion-notes/issues/2024/01/14/gen-ai-artificial-intelligence-and-the-future-of-work-542379
PNAS Nexus (2024) — The Impact of Generative AI on Socioeconomic Inequalities and Policy Making academic.oup.com/pnasnexus/article/3/6/pgae191/7689236
Originally published at https://www.linkedin.com.


