Politicians Discovered the Internet. This Is Going Badly.
Everyone's identity. Unregulated third parties. What could go wrong.
Something curious has happened in legislatures across the world. Politicians appear to have recently discovered that the internet is not always a nice place. There is a sudden urgency, a wave of bills, consultations, frameworks and mandates, all radiating the energy of people who have just encountered a problem for the first time. The problem, of course, has been there for twenty years. The question worth asking is not why they are acting now. It is why the answer they have all landed on is the same one: make everyone prove who they are.
The goal is framed as child protection, and I want to be clear that protecting children online is a genuinely important objective. But good intentions do not make bad policy good. And the policy being built right now is not designed around what actually harms children online. It is designed around what is easiest to legislate, cheapest to implement, and most convenient for everyone involved, except the people who actually use the internet.
The proposals follow a consistent pattern across jurisdictions: passport uploads, credit card checks, facial age estimation, government ID verification through third-party commercial providers. They vary in detail but share a fundamental flaw. Every one of them creates a linkable record. A commercial verification provider now knows that a specific individual accessed a specific type of content at a specific time. That data has commercial value. Under the right legal circumstances, it has governmental value. We are being asked to trade privacy for access, and told it is for the children.
The breach risk is real and well-documented: identity verification providers have suffered widely reported incidents involving exposed credentials, government ID images and personal records, sometimes at a scale of hundreds of millions of records. But focusing on breach risk actually understates the problem. Even a perfectly secure age verification system, one that never leaked a single record, would still be the wrong approach. Because the issue is not whether the data is kept safe. The issue is that the data exists at all.
Every record created is a surveillance artefact: a log of who accessed what, when, verified against a government identity. That infrastructure, once built, does not stay limited to its original purpose. It gets used as evidence in proceedings its creators never envisioned. It gets sold. It gets repurposed by the next administration, or the one after that. History is consistent on this point. You do not build surveillance infrastructure in a democracy and find that it only ever gets used for its stated purpose.
There is a pattern worth naming in how these laws are written. They mandate that age verification must be effective. They specify that checks must be robust, that compliance must be demonstrable, that outcomes must be measurable. What they rarely, if ever, mandate is that the systems doing this work must be secure.
The UK’s own GOV.UK One Login, the flagship government digital identity system now underpinning the digital driving licence, backed by over £300 million of public money, illustrates this precisely. A whistleblower raised serious security concerns in July 2022, shortly after the system went live. According to reporting by Computer Weekly and The Telegraph, those concerns included: development work outsourced to Romania without the knowledge or approval of the then-GDS chief executive, and without consultation with the National Cyber Security Centre; over 10,000 vulnerabilities rated critical or high severity; staff without the required security clearance accessing the live production environment over 6,000 times in a single month. The whistleblower was subsequently informed he faced disciplinary action for raising these concerns.
The government’s response to Parliament omitted any mention of the NCSC warnings or the Cabinet Office Data Protection Officer’s demand, made in November 2022, that the system be suspended. One Login lost its Digital Identity Trust Framework certification. It remains operational and is being expanded.
This is what it looks like when governments mandate identity infrastructure without understanding what they are building. The political pressure is to announce. The pressure to make it actually secure is absent, underfunded, or actively suppressed when it becomes inconvenient.
There is a better technical approach, and it has existed as a proven cryptographic concept for years: zero-knowledge proofs.
The idea is more straightforward than the name suggests. A website that needs to verify your age redirects you to a government authentication platform. You prove your identity there. The platform returns a single token, nothing more than “over 18: yes.” In a well-designed system, the government platform never learns which website you visited, and the website never learns who you are. No profile. No record. No artefact.
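The core property behind that flow, proving a fact without revealing the underlying secret, can be illustrated with the simplest building block in this family: a Schnorr proof of knowledge, made non-interactive via the Fiat-Shamir heuristic. The sketch below is a toy: the group parameters are deliberately tiny and the mapping to age attestation is an illustrative assumption, since real schemes layer credentials and blind issuance on top of proofs like this. But it shows the essential point: the verifier confirms the prover knows a secret without ever seeing it.

```python
import hashlib
import secrets

# Toy parameters: P is a safe prime (P = 2Q + 1) and G generates the
# subgroup of prime order Q. Production systems use standardised
# elliptic-curve groups; these tiny numbers are for illustration only.
P, Q, G = 23, 11, 2

def challenge(y: int, t: int) -> int:
    # Fiat-Shamir: derive the challenge by hashing the transcript,
    # turning the interactive protocol into a single message.
    data = f"{G}:{y}:{t}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def prove(x: int) -> tuple[int, int, int]:
    """Prove knowledge of x with y = g^x mod p, without revealing x."""
    y = pow(G, x, P)
    r = secrets.randbelow(Q - 1) + 1   # ephemeral nonce
    t = pow(G, r, P)                   # commitment
    c = challenge(y, t)
    s = (r + c * x) % Q                # response
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Accept iff g^s == t * y^c (mod p), which holds iff the prover knew x."""
    c = challenge(y, t)
    return pow(G, s, P) == (t * pow(y, c, P)) % P

# The "authentication platform" role produces a proof; the website
# only checks it, and never learns the secret behind it.
y, t, s = prove(x=7)
print(verify(y, t, s))           # True: valid proof accepted
print(verify(y, t, (s + 1) % Q)) # False: tampered proof rejected
```

Swap the toy group for a standardised curve and bind the proof to a signed "over 18" attribute, and you have the rough shape of the token flow described above: the website learns one bit, and nothing links it back to an identity.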
This is not science fiction. The technology is mature. Estonia has been running sophisticated digital identity infrastructure on similar principles for over two decades. The technical barrier is not the problem.
So why hasn’t this approach been seriously pursued? There are two honest explanations, and both deserve consideration.
The first rests on observation. Governments have spent years in legal conflict with technology companies over end-to-end encryption, consistently arguing that law enforcement needs access. The pattern of mandating identity verification systems that generate linkable records, while simultaneously resisting privacy-preserving alternatives, is consistent with an interest in knowing what citizens do online. That may sound conspiratorial. I am not claiming it is the dominant motive. But it is a coherent explanation for why the architecture being built looks the way it does.
The second explanation is simpler, and I think more likely: cost. Zero-knowledge proof infrastructure is harder to build and more expensive to implement than outsourcing verification to a commercial third party. The commercial identity verification industry exists, is available now, and is willing to absorb the implementation complexity in exchange for access to the data. Governments get a policy outcome they can announce. Industry gets a new mandated market and a commercially valuable data asset. The cost, the erosion of everyone’s privacy, is paid by citizens who had no say in the arrangement.
Peer-reviewed research published in 2024 found that age verification practices are inadequate precisely because official mandates lack technical guidance. Governments are specifying what must happen and leaving the how to an industry with a strong financial interest in building systems that collect data. The industry most incentivised to build surveillance-based verification is also the one being handed the brief.
Nothing illustrates this mindset more clearly than what has happened around VPNs.
A YouGov survey cited as evidence of public appetite for action asked whether under-18s should be restricted from using VPNs. The question defined a VPN as “a tool that hides a user’s internet activity and location, often used for privacy or bypassing content restrictions.” A neutral definition might have read: “a tool that encrypts your internet connection, widely used by businesses, journalists, and academics.” Same technology. Entirely different mental image. The demographic data compounds the problem: the age groups most likely to support restrictions were statistically the same groups least likely to know what a VPN is. A separate YouGov survey found that only 47% of those aged 55 and over know what the acronym stands for, and a quarter of respondents declined to answer at all.
The result, 55% in favour of restrictions, has since been used to justify a policy direction that extends well beyond the original question. In Wisconsin, a bill requiring websites to block all VPN users passed the State Assembly 69 votes to 22 before the provision was stripped following public backlash. In Michigan, a proposal would require ISPs to actively monitor and block VPN connections entirely. In the UK, the Children’s Commissioner has called VPNs “a loophole that needs closing” and the Prime Minister has confirmed the government is considering restrictions following consultation.
VPNs are not a loophole. They are standard security infrastructure used by businesses, university students and academics, journalists, and domestic abuse survivors hiding their location. The question the legislation never asks is why a content restriction approach is so inadequate that a freely available privacy tool defeats it entirely.
There is also a practical problem that appears not to have been considered. VPN legislation does not respect borders any more than VPNs do. A website cannot determine where a VPN connection originates; that is the point. Any site seeking to comply with the most restrictive law in any jurisdiction has no practical option but to block all VPN traffic, everywhere. A single state legislature in Wisconsin was, inadvertently, drafting policy with global consequences for every VPN user on the planet.
It is easy for governments to announce bans, restrict privacy tools and legislate against security infrastructure, and to frame anyone who objects as someone who does not want to protect children. That framing is not only unfair. It entirely misses the point.
Protecting children online matters. The harms are real and documented. But children are not damaged by platforms knowing their age. They are damaged by algorithmic amplification of harmful content, by design patterns engineered for compulsive engagement, and by inadequate moderation of abuse. Those are engineering and governance problems. Age verification does not address any of them. It is cheaper and easier to mandate than the solutions that would actually work, so that is what gets mandated.
We have better tools. Zero-knowledge proofs exist. Privacy-preserving digital identity infrastructure exists and has been deployed at national scale. What is missing is not the technology. It is the political will to spend the money, do the harder work, and resist the commercial interests that profit from the current approach.
The chilling effect of surveillance infrastructure on legal behaviour is well-evidenced: research going back to the Snowden revelations documents how people self-censor, stop searching for sensitive information, and withdraw from online discourse when they believe they are being watched. Once you establish the principle that identity must be produced to access online content, that boundary moves in one direction. The open internet, where anyone could seek information without declaring who they are, has been one of the most democratising forces in modern life. Dismantling it in the name of child protection, using systems that do not protect children, while the people raising security concerns get disciplined for doing so, is not a policy. It is an abdication of one.
The question nobody seems to be asking is the obvious one: how many people have to hand their identity to unregulated third parties, and how many breaches have to happen, before someone admits that this approach is neither child protection nor data protection, and never was?
I write about AI, cybersecurity, and technology every Friday. Subscribe to get it in your inbox.
Sources & Further Reading
Computer Weekly (2025), Government faces claims of serious security and data protection problems in One Login digital ID, computerweekly.com/news/366622533
The Telegraph / Yahoo News (2025), Government digital ID system put citizens’ data at risk, yahoo.com/news/government-digital-id-system-put-114120788.html
Datamation (2025), UK Digital ID Card Launch Gets Hostile Reception, datamation.com/security/uk-digital-id-cards
Electronic Frontier Foundation (2024), Hack of Age Verification Company Shows Privacy Danger of Social Media Laws, eff.org/deeplinks/2024/06/hack-age-verification-company-shows-privacy-danger-social-media-laws
Electronic Frontier Foundation (2025), The Year States Chose Surveillance Over Safety, eff.org/deeplinks/2025/12/year-states-chose-surveillance-over-safety-2025-review
Electronic Frontier Foundation (2025), Lawmakers Want to Ban VPNs, And They Have No Idea What They’re Doing, eff.org/deeplinks/2025/11/lawmakers-want-ban-vpns-and-they-have-no-idea-what-theyre-doing
Electronic Frontier Foundation (2026), EFF to Wisconsin Legislature: VPN Bans Are Still a Terrible Idea, eff.org/deeplinks/2026/02/eff-wisconsin-legislature-vpn-bans-are-still-terrible-idea
TechRadar (2026), Wisconsin scraps VPN ban from age verification bill following backlash, techradar.com/vpn/vpn-privacy-security/wisconsin-scraps-vpn-ban-from-age-verification-bill-following-backlash
Renaud, K. et al. (2024), Online Age Verification: Government Legislation, Supplier Responsibilization, and Public Perceptions, PMC/MDPI, pmc.ncbi.nlm.nih.gov/articles/PMC11429505
Internet Society (2024), Age Verification Law Weakens Internet Privacy and Security, internetsociety.org/blog/2024/09/texas-mandatory-age-verification-law-will-weaken-privacy-and-security-on-the-internet
Büchi, M., Festic, N. & Latzer, M. (2022), The Chilling Effects of Digital Dataveillance, journals.sagepub.com/doi/10.1177/20539517211065368
YouGov (2024), VPN Awareness Survey, yougov.co.uk
YouGov (2025), Under-18 VPN Restriction Survey, yougov.co.uk
The Conversation (2025), Online age checking is creating a treasure trove of data for hackers, theconversation.com/online-age-checking-is-creating-a-treasure-trove-of-data-for-hackers-268586


