Who's Running the Company in Ten Years?
AI is automating the pipeline that produces the people your AI strategy depends on
It is a Friday afternoon. A critical system is down. The deadline is Monday morning and the fix depends on a hardware component that needs to come from somewhere, fast. You do not open a procurement portal. You call your account manager. Not because they are the only person who could technically help. Because they know you, they know what is at stake, and they will move things that the process cannot.
That call gets answered because of a relationship built over years. Small interactions. Remembered context. The accumulated trust that comes from a vendor who showed up when it mattered before. It is not a complex enterprise software deal. It is a medium-sized business buying hardware from a value-added reseller. The commercial stakes are real regardless. And when something goes wrong at the wrong moment, the relationship is the infrastructure.
That account manager started somewhere. They learned by doing. They developed judgment by navigating real clients, real problems, and real consequences. If the pipeline that produces them does not exist, neither does the relationship. And right now, a significant number of organisations are making decisions that quietly remove that pipeline, for reasons that look entirely rational on a spreadsheet.
The efficiency logic is straightforward. AI can handle data entry, first-pass research, report drafting, basic query resolution. Entry-level roles that once performed those tasks cost money and take time to onboard. The business case for replacing them with automation writes itself, and across the technology sector in particular, it is being written at scale. A SignalFire analysis of major public tech firms found a 50% decline in new role starts by people with less than one year of post-graduate experience between 2019 and 2024, consistent across sales, marketing, engineering, operations, finance, and legal. A survey of over a thousand enterprises found 91% reporting that roles are already changing or disappearing due to AI, with 66% expecting to slow entry-level hiring further.
Looked at in isolation, each of those decisions is defensible. Looked at over a decade, they describe an organisation that has quietly stopped producing its own future leaders.
The tasks being automated were never just tasks. They were the mechanism. The junior analyst who got a number wrong and had to explain it to a partner was not learning spreadsheets. They were learning accountability, professional consequence, and how an organisation responds when something goes wrong. The trainee whose draft came back covered in track changes was not learning to write. They were learning what good looked like in their field, from someone whose judgment had been built the same way. That is what produced capable professionals. Not the work itself. The friction of doing it imperfectly, under real conditions, alongside people who knew more than they did.
MIT’s Andrew McAfee put the question directly: how else are people going to learn to do the job except via on-the-job learning? That is how you learn to do difficult knowledge work, by helping somebody who is good at it with the routine stuff. Remove the routine stuff, and you do not just remove the cost. You remove the learning contract that the routine stuff made possible.
There is a term in the research for what gets lost: tacit knowledge. The practical understanding that cannot be fully written down, that lives in the judgment of experienced people, that transfers through proximity and repeated interaction rather than documentation. A project manager who adjusts a rollout plan based on team dynamics rather than just the timeline. A senior lawyer who knows when a clause that looks standard is actually a risk. A technology director who recognises a failure pattern before the diagnostics confirm it.
This knowledge is not built by a training programme. It accumulates through years of exposure to problems that do not have obvious answers, in environments where the consequences of getting it wrong are real. A peer-reviewed economics paper published earlier this year modelled exactly this dynamic, finding that AI-driven entry-level automation increases output on impact but can reduce long-run growth and welfare, precisely because novices acquire tacit knowledge by working alongside experts. Interrupt that transmission, and the knowledge does not transfer. It simply stops.
The contradiction sits in plain sight in almost every AI governance framework being written right now. Human in the loop. Subject matter expert review. Senior sign-off before the output is acted on. These are not optional clauses; they are the load-bearing assumption that makes responsible AI deployment possible. The policy says a qualified person will catch what the model gets wrong. The hiring plan says we are no longer developing qualified people at the beginning of their careers. Both documents exist in the same organisation. Rarely in the same conversation.
You cannot mandate expert oversight and simultaneously defund the pipeline that produces experts. The subject matter experts available for review today were junior employees a decade ago. The ones you will need in ten years are, right now, either starting their careers somewhere or not starting them at all. An AI governance framework that does not ask where its future reviewers are coming from is not a governance framework. It is an assumption dressed up as a policy.
The seniority cliff, as some researchers have termed it, is not about age. It is about the accumulation of thousands of solved problems, crises navigated, and decisions made under pressure. Stop hiring the people who would accumulate that experience, and in ten years you have senior job titles with nothing underneath them. The AI can surface the options. It cannot own the decision. And the person who needs to own it has to have learned how somewhere.
This is where the relationship capital argument and the pipeline argument converge. The account manager who picks up on a Friday afternoon exists because someone, years earlier, decided that developing junior commercial talent was worth the investment. The senior partner who can read a client well enough to know when the meeting is going badly before anyone has said so carries knowledge that no model can infer, because the model was never in the room when it was being built.
Research on trust in business-to-business relationships is consistent on this point: human touchpoints enable adaptation and long-term value creation that is unattainable when relationships are constrained to transactional efficiency. Buyers still spend the majority of their purchasing journey in self-directed research. The fraction of time they spend in direct contact with a vendor is where trust is either built or isn’t. That contact depends on a human being on the other end with enough accumulated judgment to make it worth having.
None of this is an argument against automation. The efficiency gains are real, and automating genuinely low-value repetitive work is rational. The argument is narrower than that. It is that the second-order cost of removing the developmental pipeline is not appearing in the business case. The saving is visible immediately. The deficit surfaces in a decade, when the organisation looks around for the senior people who should be running things and finds that the ladder they would have climbed no longer exists.
The organisations that will navigate the next decade well are not the ones that automate the most. They are the ones that are deliberate about what the automation changes, and intentional about replacing what it removes. That means asking, when you redesign a role around AI capability, what the role was also doing that is now missing. What mentorship was embedded in it. What judgment was being transferred. What relationships were being built.
AI can do a great deal. It can compress research, accelerate drafting, surface patterns, and handle queries at a scale no team could match. What it cannot do is pick up the phone on a Friday afternoon because it knows what is at stake and has the history to make the call matter.
That capability has to come from somewhere. Right now, a lot of organisations are making decisions that quietly ensure it will come from nowhere.
I write about AI, cybersecurity, and technology every Friday. Subscribe to get it in your inbox.
Sources & Further Reading
SignalFire (2025) - Entry-level hiring decline analysis. Reported by CNBC, September 2025 - cnbc.com/2025/09/07/ai-entry-level-jobs-hiring-careers.html
IDC/Deel Survey (2025) - Enterprise entry-level hiring and pipeline data. ITPro, November 2025 - itpro.com/business/careers-and-training/enterprises-are-cutting-back-on-entry-level-roles-for-ai
Andrew McAfee, MIT (2026) - Talent pipelines and entry-level automation. Fortune, May 2026 - fortune.com/2026/05/01/automating-gen-z-entry-level-jobs-could-backfire-mit-ai-researcher-andrew-mcafee-talent-pipelines-at-risk/
Ide, E. (2026) - Automation, AI, and the Intergenerational Transmission of Knowledge. arXiv:2507.16078 - arxiv.org/pdf/2507.16078
Journal of Business & Industrial Marketing (2026) - AI and trust in B2B relationships. DOI: 10.1108/JBIM-12-2024-0936 - doi.org/10.1108/JBIM-12-2024-0936
California Management Review (2026) - Tacit Knowledge Is Your Next Competitive Moat - cmr.berkeley.edu/2026/03/tacit-knowledge-is-your-next-competitive-moat/
World Economic Forum (2025) - Future of Jobs Report 2025 - reports.weforum.org/docs/WEF_Future_of_Jobs_Report_2025.pdf