The Workaround Was the Warning. AI Is the Megaphone
The ban never worked. Here's what does.
There is a story IT departments have been telling themselves for twenty years. It goes like this: employees use unauthorised tools because they don’t understand the risks. If we communicate the policy more clearly, enforce it more consistently, and block enough stuff, the problem will go away.
The data has never supported that story.
Gartner found that 41% of enterprise employees were already working outside IT oversight in 2022, and projects that figure will reach 75% by 2027. Shadow IT didn’t grow despite tighter controls. It grew alongside them. That’s not a compliance failure. That’s a signal, and most organisations spent two decades responding to it with the wrong answer.
The signal was simple: the tools we’re providing aren’t good enough for the work people are actually trying to do. The employee who used personal Dropbox wasn’t trying to undermine information security. They were trying to share a file with a client when the VPN was down and the deadline wasn’t moving. The WhatsApp group handling client updates wasn’t a governance failure. It was a faster answer to a problem the approved toolset couldn’t solve.
The wrong response to shadow IT is blanket prohibition. Locking everything down frustrates good employees, drives workarounds underground rather than eliminating them, and signals that the IT function exists to slow the business down rather than support it. Most organisations chose prohibition anyway. And now, with Shadow AI arriving at a scale that makes the Dropbox era look quaint, we are at serious risk of making the same mistake again.
A colleague told me recently that they’d read my writing and assumed I was an AI sceptic. My inner nerd was genuinely shocked: had they not read about my home AI lab, my experiments with multiple models, my apprenticeship in a technology I find genuinely extraordinary? My inner director, on the other hand, felt quietly vindicated and grown-up, because asking hard questions about data handling, governance, and risk isn’t scepticism, it’s the job. There’s a version of the AI conversation that treats those questions as obstacles. I think that version is the riskier one.
98% of organisations now have employees using unsanctioned AI tools. 78% of workers bring their own AI tools to work. This isn’t a niche behaviour. It’s near-universal. The question isn’t how to stop it. It’s what it’s telling us about where the friction is, and what we’re failing to provide.
The arrival of genuine citizen development tools has changed the calculus in ways that make the old orthodoxy untenable. The traditional IT position (always buy, never build, because you can’t support what you didn’t commission) made sense when building meant developers, procurement cycles, and maintenance contracts. That world has genuinely shifted. A Forrester study found organisations using Microsoft Power Platform achieved 206% ROI over three years, with high-impact users saving up to 250 hours annually and app development time cut by 50%. These are not marginal gains. The tools have earned a place at the table.
But here is where the conversation gets uncomfortable. There are voices in the AI industry who have taken the legitimate case for citizen development and extended it into an argument for removing governance entirely. Get IT out of the way, move fast, procurement is just friction. The implicit message is that due diligence is timidity, and that professionals who ask hard questions about data handling and compliance are obstacles to progress rather than people doing their jobs. I’ve seen both failure modes up close: IT teams that used process as a moat, guarding their function more carefully than the data they were supposed to secure, and organisations pressured into rushing deployments that later surfaced serious problems. Gatekeeping is real, and so is the cost of absent governance. The answer isn’t to pick a side. It’s to ask what governance should actually look like when the tools have changed.
The starting point has to be following the data, not categorising the tool.
Consider two AI agents that do identical things. They join a meeting, generate a summary, draft follow-up actions, and distribute them to attendees. In a product planning session, the risk profile is manageable. In a meeting discussing vulnerable adults or children, the questions change entirely. Not just whether a human reviews the output, but what data is being processed, where it’s transmitted, under what data processing agreement, and whether the organisation has a lawful basis for sending that information to an external model at all. Anyone who has watched production software go live without proper scrutiny knows how this ends. The risk doesn’t disappear when you skip that conversation. It just becomes invisible until it isn’t.
The tool is identical. The data context changes everything. Governance has to follow the data, not the technology.
IBM’s 2025 Cost of a Data Breach Report found that organisations with high shadow AI exposure faced an additional average breach cost of $670,000, with 65% of incidents involving personally identifiable information. The Samsung case is instructive here, not because Samsung was careless, but because the incident illustrates how quickly well-intentioned employees can expose sensitive data when the approved route doesn’t exist and the unsanctioned one does. In three separate incidents within weeks of the company lifting a prior ban, employees submitted proprietary source code and internal meeting recordings to ChatGPT. The response, reimposing the ban, missed the point entirely. Security experts noted that banning specific tools one by one becomes whack-a-mole as new ones proliferate. The only sustainable answer is a sanctioned route that’s faster and safer than the shadow one.
Which brings us to the other failure mode: governance so slow it defeats itself.
IDC research, undertaken with Lenovo, found that 88% of AI proofs of concept never reach production: for every 33 pilots launched, only four go live. IDC’s own researchers acknowledged that many of these pilots are “highly underfunded” and lack a strong business case from the start, which means the problem isn’t just governance, it’s launching without clear purpose. But slow, undefined governance compounds it. Getting stuck in pilot purgatory is what happens when nobody defined what success looked like before the pilot started. The review runs indefinitely because there’s no decision to make, only a process to continue.
Gartner predicts 30% of GenAI projects will be abandoned after proof of concept, citing poor data quality, inadequate risk controls, and escalating costs. The pattern is consistent: organisations launch with enthusiasm and stall at the point where unglamorous structural work is required. That stalling recreates exactly the problem shadow IT diagnosed. If the sanctioned route takes eighteen months and produces no answer, people find another route. They always have.
The fix isn’t faster approval. It’s defined exit criteria before the pilot begins. Not “we’ll review in three months” but “here is what this project needs to demonstrate, here are the data questions it needs to answer, and here is the date by which we will decide.” That’s a decision process. What most organisations run instead is a review process, and review processes don’t end, they just lose momentum until something else takes priority.
Those exit criteria need the right people in the room: IT, the business owner, and whoever owns the data risk. Depending on the data context, legal or compliance too. That conversation, held before anything is built, is the governance model. Not a committee and not a checklist, but a conversation with accountability attached.
The IT teams that will navigate this well are not the ones that said yes to everything, or the ones that built walls around their function and called it risk management. They’re the ones that got curious about why their users kept going around them, and built something worth coming back to.
A KPMG survey found 73% of organisations adopting low-code platforms had not yet defined governance rules. That gap is where shadow AI lives. Close it not with prohibition but with a sanctioned environment that actually works: risk-proportionate governance, fast and transparent pathways from experiment to production, and a clear signal to the organisation that IT is a partner in building things, not a gatekeeper deciding who gets to try.
Shadow IT was never really about the tools. It was about unmet need meeting inadequate response. Shadow AI is the same conversation, with higher stakes and less time to get it right. The writing has always been on the wall. The question is whether we’re finally ready to read it.
This post is part of an ongoing series on AI, technology, and the gap between what we are promised and what we are building.
I write about AI, cybersecurity, and technology every Friday. Subscribe to get it in your inbox.
References
Gartner (2022), Shadow IT and Employee Technology Use gartner.com
Microsoft / LinkedIn Work Trend Index (2024), AI at Work microsoft.com
Forrester Consulting (2024), Total Economic Impact of Microsoft Power Apps forrester.com
IDC / Lenovo (2024), AI Proof of Concept to Production Research idc.com
IBM (2025), Cost of a Data Breach Report ibm.com/security/data-breach
Gartner (2025), GenAI Project Abandonment Predictions gartner.com
KPMG (2023), Shaping Digital Transformation with Low-Code Platforms assets.kpmg.com/content/dam/kpmg/ie/pdf/2023/07/ie-shaping-digital-transformation-with-low-code-platforms.pdf
Dark Reading / Gizmodo / TechCrunch (2023), Samsung ChatGPT Data Leak darkreading.com