<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Jonathan Freedman]]></title><description><![CDATA[AI, cybersecurity, and tech without the hype]]></description><link>https://www.jonathanfreedman.me</link><image><url>https://substackcdn.com/image/fetch/$s_!isPQ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40f7bba1-ffea-4f2e-a11b-6448efd02b9b_1280x1280.png</url><title>Jonathan Freedman</title><link>https://www.jonathanfreedman.me</link></image><generator>Substack</generator><lastBuildDate>Wed, 13 May 2026 19:25:30 GMT</lastBuildDate><atom:link href="https://www.jonathanfreedman.me/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Jonathan Freedman]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[jonathanfreedmanme@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[jonathanfreedmanme@substack.com]]></itunes:email><itunes:name><![CDATA[Jonathan Freedman]]></itunes:name></itunes:owner><itunes:author><![CDATA[Jonathan Freedman]]></itunes:author><googleplay:owner><![CDATA[jonathanfreedmanme@substack.com]]></googleplay:owner><googleplay:email><![CDATA[jonathanfreedmanme@substack.com]]></googleplay:email><googleplay:author><![CDATA[Jonathan Freedman]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Who's Running the Company in Ten Years?]]></title><description><![CDATA[AI is automating the pipeline that produces the people your AI strategy depends on]]></description><link>https://www.jonathanfreedman.me/p/whos-running-the-company-in-ten-years</link><guid isPermaLink="false">https://www.jonathanfreedman.me/p/whos-running-the-company-in-ten-years</guid><dc:creator><![CDATA[Jonathan Freedman]]></dc:creator><pubDate>Fri, 08 May 2026 12:01:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!gqek!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71492c25-80a3-4eb4-9e50-f49bfef93696_2816x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!gqek!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71492c25-80a3-4eb4-9e50-f49bfef93696_2816x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!gqek!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71492c25-80a3-4eb4-9e50-f49bfef93696_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!gqek!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71492c25-80a3-4eb4-9e50-f49bfef93696_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!gqek!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71492c25-80a3-4eb4-9e50-f49bfef93696_2816x1536.png 1272w, 
https://substackcdn.com/image/fetch/$s_!gqek!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71492c25-80a3-4eb4-9e50-f49bfef93696_2816x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!gqek!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71492c25-80a3-4eb4-9e50-f49bfef93696_2816x1536.png" width="1456" height="794" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/71492c25-80a3-4eb4-9e50-f49bfef93696_2816x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7255937,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.jonathanfreedman.me/i/196883051?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71492c25-80a3-4eb4-9e50-f49bfef93696_2816x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!gqek!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71492c25-80a3-4eb4-9e50-f49bfef93696_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!gqek!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71492c25-80a3-4eb4-9e50-f49bfef93696_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!gqek!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71492c25-80a3-4eb4-9e50-f49bfef93696_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!gqek!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71492c25-80a3-4eb4-9e50-f49bfef93696_2816x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>It is a Friday afternoon. A critical system is down. The deadline is Monday morning and the fix depends on a hardware component that needs to come from somewhere, fast. You do not open a procurement portal. You call your account manager. Not because they are the only person who could technically help. Because they know you, they know what is at stake, and they will move things that the process cannot.</p><p>That call gets answered because of a relationship built over years. Small interactions. Remembered context. The accumulated trust that comes from a vendor who showed up when it mattered before. It is not a complex enterprise software deal. It is a medium-sized business buying hardware from a value added reseller. The commercial stakes are real regardless. And when something goes wrong at the wrong moment, the relationship is the infrastructure.</p><p>That account manager started somewhere. They learned by doing. They developed judgment by navigating real clients, real problems, and real consequences. If the pipeline that produces them does not exist, neither does the relationship. And right now, a significant number of organisations are making decisions that quietly remove that pipeline, for reasons that look entirely rational on a spreadsheet.</p><div><hr></div><p>The efficiency logic is straightforward. AI can handle data entry, first-pass research, report drafting, basic query resolution. Entry-level roles that once performed those tasks cost money and take time to onboard. The business case for replacing them with automation writes itself, and across the technology sector in particular, it is being written at scale. A SignalFire analysis of major public tech firms found a 50% decline in new role starts by people with less than one year of post-graduate experience between 2019 and 2024, consistent across sales, marketing, engineering, operations, finance, and legal. A survey of over a thousand enterprises found 91% reporting that roles are already changing or disappearing due to AI, with 66% expecting to slow entry-level hiring further.</p><p>Looked at in isolation, each of those decisions is defensible. Looked at over a decade, they describe an organisation that has quietly stopped producing its own future leaders.</p><p>The tasks being automated were never just tasks. They were the mechanism. The junior analyst who got a number wrong and had to explain it to a partner was not learning spreadsheets. They were learning accountability, professional consequence, and how an organisation responds when something goes wrong. The trainee whose draft came back covered in track changes was not learning to write. They were learning what good looked like in their field, from someone whose judgment had been built the same way. That is what produced capable professionals. Not the work itself. The friction of doing it imperfectly, under real conditions, alongside people who knew more than they did.</p><p>MIT&#8217;s Andrew McAfee put the question directly: how else are people going to learn to do the job except via on-the-job learning? That is how you learn to do difficult knowledge work, by helping somebody who is good at it with the routine stuff. Remove the routine stuff, and you do not just remove the cost. You remove the learning contract that the routine stuff made possible.</p><div><hr></div><p>There is a term in the research for what gets lost: tacit knowledge. 
The practical understanding that cannot be fully written down, that lives in the judgment of experienced people, that transfers through proximity and repeated interaction rather than documentation. A project manager who adjusts a rollout plan based on team dynamics rather than just the timeline. A senior lawyer who knows when a clause that looks standard is actually a risk. A technology director who recognises a failure pattern before the diagnostics confirm it.</p><p>This knowledge is not built from a training programme. It accumulates through years of exposure to problems that do not have obvious answers, in environments where the consequences of getting it wrong are real. A peer-reviewed economics paper published earlier this year modelled exactly this dynamic, finding that AI-driven entry-level automation increases output on impact but can reduce long-run growth and welfare, precisely because novices acquire tacit knowledge by working alongside experts. Interrupt that transmission, and the knowledge does not transfer. It simply stops.</p><p>The contradiction sits in plain sight in almost every AI governance framework being written right now. Human in the loop. Subject matter expert review. Senior sign-off before the output is acted on. These are not optional clauses; they are the load-bearing assumption that makes responsible AI deployment possible. The policy says a qualified person will catch what the model gets wrong. The hiring plan says we are no longer developing qualified people at the beginning of their careers. Both documents exist in the same organisation. Rarely in the same conversation.</p><p>You cannot mandate expert oversight and simultaneously defund the pipeline that produces experts. The subject matter experts available for review today were junior employees a decade ago. The ones you will need in ten years are, right now, either starting their careers somewhere or not starting them at all. An AI governance framework that does not ask where its future reviewers are coming from is not a governance framework. It is an assumption dressed up as a policy.</p><p>The seniority cliff, as some researchers have termed it, is not about age. It is about the accumulation of thousands of solved problems, crises navigated, and decisions made under pressure. Stop hiring the people who would accumulate that experience, and in ten years you have senior job titles with nothing underneath them. The AI can surface the options. It cannot own the decision. And the person who needs to own it has to have learned how somewhere.</p><div><hr></div><p>This is where the relationship capital argument and the pipeline argument converge. The account manager who picks up on a Friday afternoon exists because someone, years earlier, decided that developing junior commercial talent was worth the investment. The senior partner who can read a client well enough to know when the meeting is going badly before anyone has said so carries knowledge that no model can infer, because the model was never in the room when it was being built.</p><p>Research on trust in business-to-business relationships is consistent on this point: human touchpoints enable adaptation and long-term value creation that is unattainable when relationships are constrained to transactional efficiency. Buyers still spend the majority of their purchasing journey in self-directed research. The fraction of time they spend in direct contact with a vendor is where trust is either built or isn&#8217;t.
That contact depends on a human being on the other end with enough accumulated judgment to make it worth having.</p><p>None of this is an argument against automation. The efficiency gains are real, and automating genuinely low-value repetitive work is rational. The argument is narrower than that. It is that the second-order cost of removing the developmental pipeline is not appearing in the business case. The saving is visible immediately. The deficit surfaces in a decade, when the organisation looks around for the senior people who should be running things and finds that the ladder they would have climbed no longer exists.</p><div><hr></div><p>The organisations that will navigate the next decade well are not the ones that automate the most. They are the ones that are deliberate about what the automation changes, and intentional about replacing what it removes. That means asking, when you redesign a role around AI capability, what the role was also doing that is now missing. What mentorship was embedded in it. What judgment was being transferred. What relationships were being built.</p><p>AI can do a great deal. It can compress research, accelerate drafting, surface patterns, and handle queries at a scale no team could match. What it cannot do is pick up the phone on a Friday afternoon because it knows what is at stake and has the history to make the call matter.</p><p>That capability has to come from somewhere. Right now, a lot of organisations are making decisions that quietly ensure it will come from nowhere.</p><div><hr></div><p><em>I write about AI, cybersecurity, and technology every Friday. Subscribe to get it in your inbox.</em></p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.jonathanfreedman.me/p/whos-running-the-company-in-ten-years?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.jonathanfreedman.me/p/whos-running-the-company-in-ten-years?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.jonathanfreedman.me/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.jonathanfreedman.me/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p><strong>Sources &amp; Further Reading</strong></p><p>SignalFire (2025) - Entry-level hiring decline analysis. Reported CNBC, September 2025 - <a href="https://cnbc.com/2025/09/07/ai-entry-level-jobs-hiring-careers.html">cnbc.com/2025/09/07/ai-entry-level-jobs-hiring-careers.html</a></p><p>IDC/Deel Survey (2025) - Enterprise entry-level hiring and pipeline data. ITPro, November 2025 - <a href="https://itpro.com/business/careers-and-training/enterprises-are-cutting-back-on-entry-level-roles-for-ai">itpro.com/business/careers-and-training/enterprises-are-cutting-back-on-entry-level-roles-for-ai</a></p><p>Andrew McAfee, MIT (2026) - Talent pipelines and entry-level automation. 
Fortune, May 2026 - <a href="https://fortune.com/2026/05/01/automating-gen-z-entry-level-jobs-could-backfire-mit-ai-researcher-andrew-mcafee-talent-pipelines-at-risk/">fortune.com/2026/05/01/automating-gen-z-entry-level-jobs-could-backfire-mit-ai-researcher-andrew-mcafee-talent-pipelines-at-risk/</a></p><p>Ide, E. (2026) - Automation, AI, and the Intergenerational Transmission of Knowledge. arXiv:2507.16078 - <a href="https://arxiv.org/pdf/2507.16078">arxiv.org/pdf/2507.16078</a></p><p>Journal of Business &amp; Industrial Marketing (2026) - AI and trust in B2B relationships. DOI: 10.1108/JBIM-12-2024-0936 - <a href="https://doi.org/10.1108/JBIM-12-2024-0936">doi.org/10.1108/JBIM-12-2024-0936</a></p><p>California Management Review (2026) - Tacit Knowledge Is Your Next Competitive Moat - <a href="https://cmr.berkeley.edu/2026/03/tacit-knowledge-is-your-next-competitive-moat/">cmr.berkeley.edu/2026/03/tacit-knowledge-is-your-next-competitive-moat/</a></p><p>World Economic Forum (2025) - Future of Jobs Report 2025 - <a href="https://reports.weforum.org/docs/WEF_Future_of_Jobs_Report_2025.pdf">reports.weforum.org/docs/WEF_Future_of_Jobs_Report_2025.pdf</a></p>]]></content:encoded></item><item><title><![CDATA[Phonics for AI]]></title><description><![CDATA[Knowing which buttons to press is not a skill. It's a starting point.]]></description><link>https://www.jonathanfreedman.me/p/phonics-for-ai</link><guid isPermaLink="false">https://www.jonathanfreedman.me/p/phonics-for-ai</guid><dc:creator><![CDATA[Jonathan Freedman]]></dc:creator><pubDate>Tue, 05 May 2026 07:50:31 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!8H3W!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F424f3ebe-e9ba-45c2-8331-2669067a44fa_1408x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!8H3W!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F424f3ebe-e9ba-45c2-8331-2669067a44fa_1408x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!8H3W!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F424f3ebe-e9ba-45c2-8331-2669067a44fa_1408x768.png 424w, https://substackcdn.com/image/fetch/$s_!8H3W!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F424f3ebe-e9ba-45c2-8331-2669067a44fa_1408x768.png 848w, https://substackcdn.com/image/fetch/$s_!8H3W!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F424f3ebe-e9ba-45c2-8331-2669067a44fa_1408x768.png 1272w, https://substackcdn.com/image/fetch/$s_!8H3W!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F424f3ebe-e9ba-45c2-8331-2669067a44fa_1408x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!8H3W!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F424f3ebe-e9ba-45c2-8331-2669067a44fa_1408x768.png" width="1408" height="768" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/424f3ebe-e9ba-45c2-8331-2669067a44fa_1408x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1330927,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.jonathanfreedman.me/i/196391529?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F424f3ebe-e9ba-45c2-8331-2669067a44fa_1408x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!8H3W!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F424f3ebe-e9ba-45c2-8331-2669067a44fa_1408x768.png 424w, https://substackcdn.com/image/fetch/$s_!8H3W!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F424f3ebe-e9ba-45c2-8331-2669067a44fa_1408x768.png 848w, https://substackcdn.com/image/fetch/$s_!8H3W!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F424f3ebe-e9ba-45c2-8331-2669067a44fa_1408x768.png 1272w, https://substackcdn.com/image/fetch/$s_!8H3W!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F424f3ebe-e9ba-45c2-8331-2669067a44fa_1408x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>There is a number that should give anyone building an AI training programme pause. In England, 6.6 million working-age adults have very poor literacy skills. Not illiteracy in the absolute sense. People in this category can read and they can follow familiar text. 
What they struggle with is everything that comes after decoding the words: inference, evaluation, spotting what is missing, reading something and asking whether it is actually true.</p><p>This is the result of decades of literacy instruction that measured only one thing. We taught phonics and we measured whether people could decode text. We built the floor and called it the ceiling. The result is a growing category of people who can technically read but are significantly less equipped to do what reading is actually for.</p><p>We are about to make the same mistake with AI, and we are making it right now.</p><div><hr></div><p>Ask most organisations what AI literacy means and they will describe something that amounts to a single question: can your employees use the tool? That is what most corporate training programmes measure. It is also, almost exactly, the equivalent of teaching someone to sound out words and calling them literate.</p><p>The evidence for the gap is not hard to find. A survey of over 500 enterprise leaders, conducted by YouGov earlier this year, found that 88% consider data and AI literacy important or very important for day-to-day work. Only 42% provide structured training for it. That is a significant gap. But more telling is what leaders identify as missing when they describe the problem. It is not prompting skill. It is the ability to turn information into decisions. The ability to evaluate what AI produces rather than simply accept it. The ability to know when the output is wrong.</p><p>That is not a training gap. That is a judgment gap. And it sits at a completely different level of capability than knowing how to open Copilot.</p><div><hr></div><p>Microsoft's researchers gave this problem a precise name. They describe a shift in how people work with AI as a move from "thinking by doing" to "choosing from outputs." Writing a document is thinking by doing. Prompting AI to write it and selecting from what comes back is choosing from outputs. The first builds judgment. The second, if it becomes habitual without the right supporting skills, erodes it.</p><p>The World Economic Forum's Future of Jobs report, drawing on data from over a thousand companies across 55 economies, identified analytical thinking as the top skill employers consider essential, with seven out of ten companies citing it. Roles that explicitly require AI skills are nearly twice as likely to also require analytical thinking, resilience, and digital literacy. The market is not rewarding prompting. It is rewarding the judgment you bring to what the prompting produces.</p><p>A 2025 Microsoft Research and Carnegie Mellon University study of 319 knowledge workers found that higher confidence in AI was associated with less critical thinking, while higher self-confidence was associated with more. Trust the tool more, scrutinise the output less. It is not a loop that closes in your favour.</p><div><hr></div><p>The tools themselves are beginning to make that measurement even more redundant. AI platforms across professional sectors now ship with built-in prompt improvers: the product generates a well-formed prompt from your rough instruction, so you never need to write one yourself. Prompting is being automated away. But this does not reduce the need for judgment; it adds to it. You now need to evaluate whether the generated prompt actually captures what you needed to ask, and then whether the output it produced is accurate, contextually appropriate, and safe to act on.
Two evaluation steps where there used to be one. The prompt improver handles the syntax. It has no opinion on whether you asked the right question.</p><p>Which brings me to something I have been thinking about as a way of mapping where most organisations actually are, and where they need to be. There are four levels of AI capability that actually matter. They are not a framework to certify or a ladder to sell. They are a lens for seeing what is missing.</p><p></p><p><strong>Level 1: Can you get an answer?</strong></p><p>You can use the tool. You can construct a prompt that returns something useful. You know which interface suits which task. This is where almost all current AI training stops. It is necessary. It is not sufficient. It is phonics.</p><p></p><p><strong>Level 2: Can you tell if the answer is any good?</strong></p><p>You can evaluate what came back. You can identify when an output is plausible but wrong, when the confidence of the response does not match its reliability, when something is absent that should be present. You know enough about the domain to ask the question the AI did not anticipate. This is where domain expertise becomes the multiplier. You cannot evaluate an output in a field you do not understand. This is also, precisely, the level that the enterprise leaders above are describing when they say their people cannot turn information into decisions. They do not lack Level 1. They lack Level 2.</p><p></p><p><strong>Level 3: Can you build on it?</strong></p><p>You can take AI output and synthesise it with your own knowledge, your contextual judgment, and the things the model cannot know. You produce something neither you nor the AI could have produced alone. You understand where the model's competence ends and yours begins, and you work at that edge deliberately. This is the level where AI genuinely amplifies rather than substitutes. The solicitor who understands contract law and uses AI to accelerate document review. The analyst who surfaces patterns with AI and then interrogates them with domain knowledge. Expertise first. AI as the multiplier.</p><p></p><p><strong>Level 4: Do you know when to put it down?</strong></p><p>You can identify the tasks where AI involvement produces confident-sounding error rather than useful output. Where the cost of a plausible-but-wrong answer exceeds the benefit of speed. Where the process of working through something yourself is the point, not an inefficiency to be engineered away. Where the decision requires a human who is genuinely accountable rather than a human who chose from outputs.</p><p>This is the level no vendor will put in their training programme. It is also the level that makes everything else honest. A maturity model that stops at Level 3 is a competency ladder any platform can sell. Level 4 is the reason this one is not.</p><p>But the model only works if you are building something to bring to it. Levels 2, 3 and 4 are not skills you acquire once and carry forward. They are capacities that have to be actively maintained, through continued learning in your field, through exposure to hard problems, through the kind of work that does not have an obvious answer and cannot be resolved by asking a tool. Domain expertise is not the precondition for using AI well. It is the ongoing condition. The moment you stop developing it, the levels above Level 1 start to erode, regardless of how fluent your prompting becomes.</p><p>This is the part of the conversation that the AI skills industry has the least interest in having. 
A training platform can sell you a course on prompting. It cannot sell you the ten years of professional judgment that makes the prompting worth anything. That judgment is built the way it has always been built, through work, through failure, through the slow accumulation of knowing what good looks like in your field. AI does not replace that process. For anyone who stops doing it, AI does not replace what is lost either.</p><p>Recently I was using an AI assistant to diagnose a firewall permissions error. Its suggested fix was to allow all traffic through the firewall. When I pointed out the security flaw, it responded: "Good catch, that would have been a major vulnerability." The tool required my judgment to save it from itself, and then congratulated me for doing so. That is not a tool supporting your expertise. That is a tool that depends on it.</p><div><hr></div><p>The reading parallel runs deeper than it might seem. The National Literacy Trust notes that adults with poor literacy are significantly less likely to report good health, civic participation, and life satisfaction. I believe we will see a similar divide between those with analytical skills and critical judgment and those without. The consequences of stopping at decoding are not confined to the workplace; they compound across a life.</p><p>I have written elsewhere about what the cognitive research shows happens when that judgment stops being exercised. The direction of travel is consistent and it is not encouraging. The floor is being built at the same time as the foundation beneath it is being quietly removed.</p><div><hr></div><p>I want to be precise about what I am and am not arguing here.</p><p>AI matters, and prompting matters. Learning to use these tools well is genuinely valuable and the organisations that do it badly will be at a disadvantage. I use multiple AI models every day across multiple contexts and the capability difference between someone who can work with these tools and someone who cannot is real and growing.</p><p>However, prompting is to AI what reading is to knowledge work. It is the entry point, not the destination.</p><p>The question worth asking is not whether your people can get an answer. It is whether they can tell if the answer is any good. Whether they can build something better from it. And whether they know, when the stakes are high enough, to put it down and think for themselves.</p><div><hr></div><p>I write about AI, cybersecurity, and technology every Friday.
Subscribe to get it in your inbox.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.jonathanfreedman.me/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.jonathanfreedman.me/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.jonathanfreedman.me/p/phonics-for-ai?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.jonathanfreedman.me/p/phonics-for-ai?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><div><hr></div><p><strong>Sources &amp; Further Reading</strong></p><p></p><p>National Literacy Trust (2024) &#8212; Adult Literacy Rates in the UK. literacytrust.org.uk/parents-and-families/adult-literacy</p><p>OECD PIAAC (2023) &#8212; Survey of Adult Skills: England (United Kingdom). oecd.org/en/publications/survey-of-adults-skills-2023-country-notes</p><p>Microsoft Research (2025) &#8212; New Future of Work Report 2025. microsoft.com/en-us/research/publication/new-future-of-work-report-2025</p><p>Lee, H-P. et al. (2025) &#8212; The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects from a Survey of Knowledge Workers. Microsoft Research and Carnegie Mellon University. CHI '25, ACM. microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf</p><p>World Economic Forum (2025) &#8212; Future of Jobs Report 2025. reports.weforum.org/docs/WEF_Future_of_Jobs_Report_2025.pdf</p><p>DataCamp / YouGov (2026) &#8212; The 2026 State of Data and AI Literacy Report. datacamp.com/blog/the-state-of-data-and-ai-literacy-in-2026 [Note: DataCamp is a training platform with a commercial interest in the findings. The YouGov fieldwork methodology is disclosed. Statistics used directionally.]</p><p></p>
What&#8217;s changed is the job they now have to do.]]></description><link>https://www.jonathanfreedman.me/p/ai-attacks-move-at-machine-speed</link><guid isPermaLink="false">https://www.jonathanfreedman.me/p/ai-attacks-move-at-machine-speed</guid><dc:creator><![CDATA[Jonathan Freedman]]></dc:creator><pubDate>Fri, 24 Apr 2026 09:44:53 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VGUe!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6bca6fd2-9821-48b3-af33-673a3cc72ba5_2816x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!VGUe!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6bca6fd2-9821-48b3-af33-673a3cc72ba5_2816x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!VGUe!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6bca6fd2-9821-48b3-af33-673a3cc72ba5_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!VGUe!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6bca6fd2-9821-48b3-af33-673a3cc72ba5_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!VGUe!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6bca6fd2-9821-48b3-af33-673a3cc72ba5_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!VGUe!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6bca6fd2-9821-48b3-af33-673a3cc72ba5_2816x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!VGUe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6bca6fd2-9821-48b3-af33-673a3cc72ba5_2816x1536.png" width="1456" height="794" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6bca6fd2-9821-48b3-af33-673a3cc72ba5_2816x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:5678968,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.jonathanfreedman.me/i/195329968?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6bca6fd2-9821-48b3-af33-673a3cc72ba5_2816x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!VGUe!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6bca6fd2-9821-48b3-af33-673a3cc72ba5_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!VGUe!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6bca6fd2-9821-48b3-af33-673a3cc72ba5_2816x1536.png 848w, 
https://substackcdn.com/image/fetch/$s_!VGUe!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6bca6fd2-9821-48b3-af33-673a3cc72ba5_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!VGUe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6bca6fd2-9821-48b3-af33-673a3cc72ba5_2816x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The headlines about Claude Mythos have been striking. An AI that can complete a 32-step corporate network attack simulation end-to-end. A model finding new software vulnerabilities in codebases that survived decades of expert review. If you work in security, or just follow the technology closely, it would be easy to read those stories and conclude that it&#8217;s game over. That the attackers have won. That nothing we build can hold.</p><p>That conclusion is wrong. And I think it&#8217;s worth being direct about that, because the doomer framing is both inaccurate and genuinely harmful. It leads organisations to fatalism when what the moment actually calls for is action.</p><p>What frontier AI changes is speed. Not the fundamental nature of how network attacks work, but the velocity at which they happen. A human attacker moving manually through a network, probing systems, identifying high-value targets, escalating access, used to take hours. CrowdStrike&#8217;s 2026 Global Threat Report records the average time between initial access and an attacker moving to high-value targets elsewhere in the network at 29 minutes. The fastest recorded case in 2025 was 27 seconds.</p><p>The security controls we already have haven&#8217;t stopped working. What&#8217;s changed is the job they&#8217;re now being asked to do.</p><p>When the attacker was human, layered security controls were designed to create enough friction that they&#8217;d just give up and move to an easier target. An isolated network segment, an account with limited privileges, a system requiring explicit authentication: none of these are impenetrable, but together they made the effort not worth it. 
That logic still applies. Against an autonomous AI agent, the same controls serve a different purpose: not to make the attacker give up, but to slow a machine-speed attack down enough that human defenders have a chance to respond.</p><p>That shift in purpose, from deterrence to delay, is the entire argument of this article. The long-term answer to machine-speed attacks is machine-speed defence, and that tooling is developing. In the meantime, the architecture we already know how to build is more important than it has ever been.</p><div><hr></div><p>Zero Trust is not a new concept. &#8220;Never trust, always verify&#8221;, the idea that no user, device, or system should be implicitly trusted just because it&#8217;s inside the network, has been the theoretical gold standard for enterprise security for years. Microsegmentation, application control, privileged access management, replacing legacy VPN with more granular access tools: these have been on roadmaps, in strategy documents, and in conference presentations for most of the last decade.</p><p>They&#8217;re also genuinely hard to implement. That&#8217;s not a criticism of anyone. Legacy infrastructure makes this difficult, with application dependencies that are complex and often poorly documented. Microsegmentation projects, which divide a network into smaller isolated zones so a breach in one can&#8217;t spread freely to others, require buy-in across teams that don&#8217;t always collaborate: network teams, application owners, security, operations. Privileged access management done properly touches every system in the estate. Replacing a VPN means retiring infrastructure that works, in favour of something new, with all the business friction that entails.</p><p>Gartner&#8217;s 2025 Market Guide estimates that fewer than 5% of enterprises pursuing Zero Trust have implemented microsegmentation. That&#8217;s not negligence. It&#8217;s a rational response to a cost-benefit calculation that, until recently, made the complexity hard to justify.</p><p>Those barriers haven&#8217;t disappeared. But the risk of not acting is moving into a different category. When the threat model assumed a human attacker moving at human speed, a detect-and-respond model could work: you had time, and good monitoring could compensate for imperfect architecture. When the attacker is an autonomous AI agent, the enforcement has to be built in. Detection and response are still essential, but they need something to buy them time.</p><div><hr></div><p>These controls won&#8217;t stop a determined AI-powered attack indefinitely, but that&#8217;s not the job. They slow machine-speed attacks down to something human defenders can detect and respond to, by removing the paths of least resistance that autonomous agents depend on.</p><p><strong>Microsegmentation</strong></p><p>82% of intrusions in 2025 required no malware at all: attackers moved through networks using stolen credentials and legitimate tools, exploiting the fact that most enterprise networks let a compromised system reach adjacent ones freely. Microsegmentation removes that open floor plan: the network is divided into isolated zones, and every connection between them requires explicit authorisation.
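</p><p>To make the mechanism concrete, here is a minimal sketch in Python of the default-deny logic a segmentation policy expresses. The zone names, ports, and rules are illustrative assumptions rather than a reference to any particular product; real enforcement lives in firewalls, the network fabric, or an overlay platform, but the property that matters is the same: nothing crosses a zone boundary unless a rule explicitly names it.</p><pre><code># Minimal illustration of default-deny microsegmentation logic.
# Zone names, ports, and rules are hypothetical examples; real
# enforcement happens in firewalls, fabric, or an overlay platform.
from dataclasses import dataclass


@dataclass(frozen=True)
class Rule:
    src_zone: str   # zone the connection originates from
    dst_zone: str   # zone it is trying to reach
    port: int       # destination port explicitly permitted


# The only cross-zone traffic permitted is what appears here.
ALLOW_RULES = {
    Rule("web-tier", "app-tier", 8443),
    Rule("app-tier", "db-tier", 5432),
}


def is_allowed(src_zone, dst_zone, port):
    """Default deny: cross-zone traffic needs an explicit rule."""
    if src_zone == dst_zone:
        return True  # intra-zone traffic, still subject to host controls
    return Rule(src_zone, dst_zone, port) in ALLOW_RULES


# A compromised web server can still reach the app tier it depends on...
assert is_allowed("web-tier", "app-tier", 8443)
# ...but cannot reach the database directly, so lateral movement stalls.
assert not is_allowed("web-tier", "db-tier", 5432)</code></pre><p>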
An agent that breaches one endpoint finds itself contained, unable to reach the next system without a policy that specifically permits it.</p><p><strong>Privileged Access Management</strong></p><p>Palo Alto&#8217;s 2025 research found that 66% of social engineering attacks specifically target privileged accounts, the admin credentials that can access any system, because an attacker who obtains them has, in practical terms, already won. PAM changes that by eliminating standing privileges: elevated access is granted just-in-time for a specific task and expires, so a stolen credential carries no power until someone explicitly requests it. The same principle needs to extend to machine identities (service accounts, API keys, automation scripts), which now outnumber human identities 82 to 1 and are almost entirely unmanaged.</p><p><strong>Managed Endpoints and Session Hygiene</strong></p><p>Infostealers processed 51.7 million packages of stolen credentials in 2025, up 72% year on year, and what makes them particularly dangerous is that they capture live session tokens (the authenticated state a browser holds to keep you logged in), which allows an attacker to bypass two-factor authentication entirely without ever knowing your password. The primary source of exposure is unmanaged devices: personal laptops and AI assistants running in the same browser session as authenticated work applications, invisible to IT and ungoverned by any security policy. Managed browser profiles, conditional access policies, and short session lifetimes won&#8217;t eliminate the risk, but they shrink the window of usefulness for any token that is stolen.</p><p><strong>Replacing Legacy VPN</strong></p><p>A traditional VPN grants network-level access on authentication: once through the tunnel, an attacker has a ticket to the internal network that an autonomous agent will explore at machine speed. ZTNA (Zero Trust Network Access) replaces that model: instead of connecting to the network, you connect to specific applications for specific sessions, with every request evaluated in real time against identity, device posture, and context. The broader network is never exposed, which removes the open terrain that lateral movement depends on.</p><div><hr></div><p>Everything above addresses AI as an external attacker. There&#8217;s a second threat that&#8217;s emerging faster than most organisations have absorbed.</p><p>Gartner projects that 40% of enterprise applications will incorporate AI agents by the end of 2026. Meeting recorders. Document processors. Automated research tools. Copilot integrations accessing your files, emails, and calendars. These agents are trusted, always on, and increasingly have access to sensitive internal data.</p><p>The attack is called prompt injection: an adversary embeds malicious instructions inside content the agent will process (an email, a shared document, a webpage it&#8217;s asked to summarise), and the agent acts on them as if they were legitimate. A meeting recorder becomes a surveillance tool; a document assistant becomes an exfiltration channel; and because the agent is trusted and its actions appear authorised, the security architecture designed for human users doesn&#8217;t catch it.</p><p>Extending least-privilege principles to AI agents, giving them access only to what they specifically need, for only as long as they need it, with full audit trails, is the control that most organisations haven&#8217;t implemented, and most haven&#8217;t even formally defined.
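</p><p>As a rough sketch of what that control could look like, again in Python and with entirely hypothetical scope names and structure (this is not the API of any specific agent platform): access is granted against named scopes, expires on its own, and every request is written to an audit trail whether it was allowed or not.</p><pre><code># Illustrative sketch of least-privilege access for an AI agent: a
# short-lived grant that names exactly what the agent may touch,
# expires automatically, and records every request it receives.
# Scope names and structure are hypothetical, not a specific framework.
import time
from dataclasses import dataclass, field


@dataclass
class AgentGrant:
    agent_id: str
    allowed_scopes: frozenset   # e.g. {"calendar:read"}
    expires_at: float           # epoch seconds; no standing access
    audit_log: list = field(default_factory=list)

    def request(self, scope):
        """Check one access request and log it, allowed or not."""
        permitted = scope in self.allowed_scopes and time.time() &lt; self.expires_at
        self.audit_log.append({
            "ts": time.time(),
            "agent": self.agent_id,
            "scope": scope,
            "permitted": permitted,
        })
        return permitted


# Grant a meeting-notes agent read access to the calendar for 15 minutes.
grant = AgentGrant(
    agent_id="meeting-notes-bot",
    allowed_scopes=frozenset({"calendar:read"}),
    expires_at=time.time() + 15 * 60,
)

assert grant.request("calendar:read")        # in scope: allowed, and logged
assert not grant.request("mailbox:export")   # out of scope: denied, and logged</code></pre><p>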
It&#8217;s also where I think the next significant wave of enterprise breaches is going to originate. I&#8217;ll be writing about this in more depth soon.</p><div><hr></div><p>The long-term answer to machine-speed attacks is machine-speed defence, AI-assisted detection, automated containment, continuous verification at a pace no human team can match. That tooling is coming, but it isn&#8217;t here yet.</p><p>Which means the question for every security leader right now isn&#8217;t whether to pursue Zero Trust. It&#8217;s which controls to prioritise first, in which order, starting this quarter. PAM and Microsegmentation deliver the most containment value against autonomous lateral movement. Start there. The complexity hasn&#8217;t disappeared, but the calculus has shifted for good.</p><p>Zero Trust was always the right architecture. It just took autonomous AI to make the argument impossible to defer.</p><div><hr></div><p><em>I write about AI, cybersecurity, and technology every Friday. Subscribe to get it in your inbox.</em></p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.jonathanfreedman.me/p/ai-attacks-move-at-machine-speed?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.jonathanfreedman.me/p/ai-attacks-move-at-machine-speed?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.jonathanfreedman.me/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.jonathanfreedman.me/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p><strong>Sources &amp; Further Reading</strong></p><p>AISI (UK AI Security Institute) &#8212; Our evaluation of Claude Mythos Preview&#8217;s cyber capabilities. aisi.gov.uk</p><p>CrowdStrike &#8212; 2026 Global Threat Report. crowdstrike.com</p><p>Palo Alto Networks &#8212; Unit 42 2025 Global Incident Response Report. paloaltonetworks.com</p><p>Constella Intelligence &#8212; 2026 Identity Breach Report. constella.ai</p><p>HP Wolf Security &#8212; Tracing the Rise of Breaches Involving Session Cookie Theft. threatresearch.ext.hp.com (December 2025)</p><p>Anthropic &#8212; Misuse reporting / AWS Security Blog, February 2026. anthropic.com</p><p>Gartner &#8212; Market Guide for Network Security Microsegmentation, 2025. gartner.com</p><p>Palo Alto Networks &#8212; 2026 Predictions for Autonomous AI. paloaltonetworks.com/blog</p>]]></content:encoded></item><item><title><![CDATA[AI Didn’t Break Your Security. It Found What Was Already Broken.]]></title><description><![CDATA[The UK government&#8217;s evaluation wasn&#8217;t a warning about the future. 
It was a verdict on the present.]]></description><link>https://www.jonathanfreedman.me/p/ai-didnt-break-your-security-it-found</link><guid isPermaLink="false">https://www.jonathanfreedman.me/p/ai-didnt-break-your-security-it-found</guid><dc:creator><![CDATA[Jonathan Freedman]]></dc:creator><pubDate>Fri, 17 Apr 2026 08:36:41 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!3g6S!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe6ec5de-f99a-45b5-a881-bd8abcd39bcf_3168x1344.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3g6S!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe6ec5de-f99a-45b5-a881-bd8abcd39bcf_3168x1344.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!3g6S!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe6ec5de-f99a-45b5-a881-bd8abcd39bcf_3168x1344.png 424w, https://substackcdn.com/image/fetch/$s_!3g6S!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe6ec5de-f99a-45b5-a881-bd8abcd39bcf_3168x1344.png 848w, https://substackcdn.com/image/fetch/$s_!3g6S!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe6ec5de-f99a-45b5-a881-bd8abcd39bcf_3168x1344.png 1272w, https://substackcdn.com/image/fetch/$s_!3g6S!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe6ec5de-f99a-45b5-a881-bd8abcd39bcf_3168x1344.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!3g6S!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe6ec5de-f99a-45b5-a881-bd8abcd39bcf_3168x1344.png" width="1456" height="618" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fe6ec5de-f99a-45b5-a881-bd8abcd39bcf_3168x1344.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:618,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8079257,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.jonathanfreedman.me/i/194493394?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe6ec5de-f99a-45b5-a881-bd8abcd39bcf_3168x1344.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!3g6S!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe6ec5de-f99a-45b5-a881-bd8abcd39bcf_3168x1344.png 424w, https://substackcdn.com/image/fetch/$s_!3g6S!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe6ec5de-f99a-45b5-a881-bd8abcd39bcf_3168x1344.png 848w, 
https://substackcdn.com/image/fetch/$s_!3g6S!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe6ec5de-f99a-45b5-a881-bd8abcd39bcf_3168x1344.png 1272w, https://substackcdn.com/image/fetch/$s_!3g6S!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe6ec5de-f99a-45b5-a881-bd8abcd39bcf_3168x1344.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>On Tuesday morning, the UK Secretary of State for Science, Innovation and Technology wrote an open letter to every business leader in the country. Not a press release, not a policy consultation, but a letter. That kind of thing does not happen unless something crossed a threshold.</p><p>The trigger was a published evaluation by the UK&#8217;s AI Security Institute, a government body, of Anthropic&#8217;s latest AI model. Their finding: in controlled testing, the model autonomously completed a 32-step corporate network attack from initial reconnaissance to full takeover. Tasks that a skilled human professional would need around 20 hours to complete. Done without a human in the loop.</p><p>The headlines ran hard with it. &#8220;Unprecedented attack capability.&#8221; &#8220;An alarm bell.&#8221; &#8220;The window is closing.&#8221;</p><p>Here is what the evaluation actually showed, stripped of the hype. The test environment had no active defenders, no endpoint detection, no real-time incident response. The model completed the full attack chain in three of its ten attempts. The AISI was explicit: they cannot conclude the model would perform as well against a hardened, well-monitored network. These are the honest numbers, and the honest numbers are still significant.</p><p>More significant than the single finding is what sits underneath it. Two years ago, the best available AI models could barely complete beginner-level cyber tasks. Now one has completed 32 sequential steps of a professional attack simulation. AISI reports that frontier AI capabilities in cyber offence are doubling every four months, twice the pace recorded just months ago.
The finding is not the scary part; the trajectory is.</p><p>Recently I was at a cyber security conference. The room was full of security leaders, experienced, capable, serious professionals. The conversation that emerged in networking breaks and informal moments was not about AI capability. It was about something quieter and more uncomfortable. Most of them felt structurally unsupported, not underqualified, not unaware of the threat, but unsupported. Responsible for outcomes they could not fully control, in organisations that had not genuinely reckoned with what that means.</p><p>That conversation did not start this week. The AI evaluation did not create it. But the two things belong together, and most of the coverage since Tuesday has not connected them.</p><p>For years, the security community has watched opportunistic attacks accelerate. Attackers begin scanning the internet for vulnerable systems within minutes of a new vulnerability being publicly announced. Attack times have compressed, and ransomware deployment that once took weeks now takes hours. That acceleration is not new, and anyone who has been paying attention is not surprised by it.</p><p>What has remained expensive, until now, is something different. Targeted, multi-stage intrusions, the kind that begin with reconnaissance, move through a network, escalate privileges, and end with full system compromise, have required two things that could not easily be automated or outsourced: judgement and adaptability. The ability to make contingent decisions across dozens of sequential steps, each one shaped by what the previous step revealed. That is what skilled attackers brought to the table. That is what made them scarce, and scarcity made them expensive.</p><p>The AISI evaluation is significant precisely because of what it tested. Not whether an AI model could scan for known vulnerabilities. Whether it could complete 32 sequential steps of a professional network intrusion, from initial reconnaissance to full takeover, making adaptive decisions throughout. That is the category of attack that previously required a capable human. Now an AI model has done it end to end in three of ten attempts, in an undefended environment.</p><p>AI is not making opportunistic attacks faster; they were already fast. It is lowering the skill floor for the attacks that were never fast, the targeted, adaptive, multi-step campaigns that organisations have quietly relied on being difficult to execute. That reliance was never a strategy. It was a structural feature of how scarce genuine attacker expertise was. That scarcity is now in question.</p><p>There is a philosophy in security that has existed for years now, passed through enough strategy documents and vendor presentations to have been bleached of almost all meaning. It goes by the name assume breach.</p><p>In its genuine form it is a serious and demanding idea. It means accepting, structurally, not performatively, that the attacker will get in. That the question is not whether a breach happens but how quickly you detect it, how contained the damage is, and how effectively you recover. It means orienting investment toward detection, resilience and recovery, not just prevention. It means building governance structures that treat a breach as a systemic risk event rather than an individual failure.</p><p>Very few organisations have actually done this.</p><p>What most organisations have done is put assume breach in the deck and leave everything else unchanged. 
Security leaders still carry personal accountability for preventing breaches. The board still treats a breach as evidence of individual failure. The investment profile still skews heavily toward keeping attackers out rather than assuming they are already in. Gartner&#8217;s 2024 Board Survey found that while 93% of directors recognise cyber risk as a threat to stakeholder value, two thirds rate their own oversight practices as inadequate to manage it. They know it matters. That is not the same as having genuinely reckoned with it.</p><p>The conference room I described earlier is what that gap looks like from the inside. Those security professionals are not failing at their jobs. They are operating inside a structural contradiction that most organisations have never acknowledged. An organisation cannot simultaneously hold assume breach as a philosophy and, as a practice, hold a single individual accountable when the attacker gets in. One says a breach is systemic and inevitable. The other says it is individual and preventable. Most organisations hold both positions without noticing the conflict.</p><p>That contradiction was always there. What AI has done is remove the margin for error that allowed organisations to sustain it without immediate consequence.</p><p>The government&#8217;s letter this week is not wrong to call this a wake-up call. But wake-up calls only work if the response is structural rather than reactive. Buying a new tool, commissioning a review, issuing a memo about cyber hygiene, none of that addresses the underlying problem, which is not technical. It is a governance problem dressed in technical clothing.</p><p>The questions worth asking are not questions for your security team. They are questions for your board.</p><p>Has your organisation formally accepted, in writing, in your risk register, that a breach is a question of when rather than if? Not in a presentation. In the governance framework that shapes how you invest and how you respond.</p><p>If a significant breach occurred tomorrow, would a single individual be held responsible? If the honest answer is yes, your organisation does not have an assume breach posture; it only has the words.</p><p>Is your security investment weighted mainly toward keeping attackers out, or toward detecting and containing them once they are in, and recovering afterwards? Prevention-first is not wrong; you should absolutely try to prevent. But prevention-only, in a threat environment where the cost of a capable attack is falling rapidly, is not sustainable.</p><p>The AI finding matters. The acceleration is real. But the organisations most exposed this week are not the ones who failed to predict it. They are the ones who had already been told, by their own security leadership, in conference rooms and board papers and risk registers, and who had built a culture that made it impossible to hear.</p><p>The breach was always coming. AI just made it cheaper to deliver.</p><div><hr></div><p><em>I write about AI, cybersecurity, and technology every Friday. 
Subscribe to get it in your inbox.</em></p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.jonathanfreedman.me/p/ai-didnt-break-your-security-it-found?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.jonathanfreedman.me/p/ai-didnt-break-your-security-it-found?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.jonathanfreedman.me/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.jonathanfreedman.me/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p><strong>Sources &amp; Further Reading</strong></p><p>UK AI Security Institute (2026), Our evaluation of Claude Mythos Preview&#8217;s cyber capabilities</p><p>aisi.gov.uk/blog/our-evaluation-of-claude-mythos-previews-cyber-capabilities</p><p>UK Government (2026), AI cyber threats: open letter to business leaders (15 April 2026)</p><p>gov.uk/government/publications/ai-cyber-threats-open-letter-to-business-leaders</p><p>Gartner (2024), Board of Directors Survey: Cybersecurity as Business Risk</p><p>gartner.com/en/newsroom/press-releases/2024-11-13-gartner-says-80-percent-of-non-executive-directors-believe-current-board-practices-and-structures-are-inadequate-to-oversee-ai</p><p>Help Net Security (2024), CISOs in 2025: Balancing security, compliance, and accountability</p><p>helpnetsecurity.com/2024/11/13/daniel-schwalbe-domaintools-cisos-2025/</p>]]></content:encoded></item><item><title><![CDATA[Politicians Discovered the Internet. This Is Going Badly.]]></title><description><![CDATA[Everyone's identity. Unregulated third parties. 
What could go wrong.]]></description><link>https://www.jonathanfreedman.me/p/politicians-discovered-the-internet</link><guid isPermaLink="false">https://www.jonathanfreedman.me/p/politicians-discovered-the-internet</guid><dc:creator><![CDATA[Jonathan Freedman]]></dc:creator><pubDate>Mon, 13 Apr 2026 10:11:12 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!pcXn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb1a5ad7-1ab6-49c1-849f-0be87fa37007_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pcXn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb1a5ad7-1ab6-49c1-849f-0be87fa37007_2752x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pcXn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb1a5ad7-1ab6-49c1-849f-0be87fa37007_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!pcXn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb1a5ad7-1ab6-49c1-849f-0be87fa37007_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!pcXn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb1a5ad7-1ab6-49c1-849f-0be87fa37007_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!pcXn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb1a5ad7-1ab6-49c1-849f-0be87fa37007_2752x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pcXn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb1a5ad7-1ab6-49c1-849f-0be87fa37007_2752x1536.png" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/db1a5ad7-1ab6-49c1-849f-0be87fa37007_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:9328544,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.jonathanfreedman.me/i/194052280?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb1a5ad7-1ab6-49c1-849f-0be87fa37007_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!pcXn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb1a5ad7-1ab6-49c1-849f-0be87fa37007_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!pcXn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb1a5ad7-1ab6-49c1-849f-0be87fa37007_2752x1536.png 848w, 
https://substackcdn.com/image/fetch/$s_!pcXn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb1a5ad7-1ab6-49c1-849f-0be87fa37007_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!pcXn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb1a5ad7-1ab6-49c1-849f-0be87fa37007_2752x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Something curious has happened in legislatures across the world. Politicians appear to have recently discovered that the internet is not always a nice place. There is a sudden urgency, a wave of bills, consultations, frameworks and mandates, all radiating the energy of people who have just encountered a problem for the first time. The problem, of course, has been there for twenty years. The question worth asking is not why they are acting now. It is why the answer they have all landed on is the same one: make everyone prove who they are.</p><p>The goal is framed as child protection, and I want to be clear that protecting children online is a genuinely important objective. But good intentions do not make bad policy good. And the policy being built right now is not designed around what actually harms children online. It is designed around what is easiest to legislate, cheapest to implement, and most convenient for everyone involved, except the people who actually use the internet.</p><div><hr></div><p>The proposals follow a consistent pattern across jurisdictions: passport uploads, credit card checks, facial age estimation, government ID verification through third-party commercial providers. They vary in detail but share a fundamental flaw. Every one of them creates a linkable record. A commercial verification provider now knows that a specific individual accessed a specific type of content at a specific time. That data has commercial value. Under the right legal circumstances, it has governmental value. 
We are being asked to trade privacy for access, and told it is for the children.</p><p>The breach risk is real and well documented: identity verification providers have suffered widely reported incidents involving exposed credentials, government ID images and personal records, sometimes at a scale of hundreds of millions of records. But focusing on breach risk actually understates the problem. Even a perfectly secure age verification system, one that never leaked a single record, would still be the wrong approach. Because the issue is not whether the data is kept safe. The issue is that the data exists at all.</p><p>Every record created is a surveillance artefact: a log of who accessed what, when, verified against a government identity. That infrastructure, once built, does not stay limited to its original purpose. It gets used as evidence in proceedings its creators never envisioned. It gets sold. It gets repurposed by the next administration, or the one after that. History is consistent on this point. You do not build surveillance infrastructure in a democracy and find that it only ever gets used for its stated purpose.</p><div><hr></div><p>There is a pattern worth naming in how these laws are written. They mandate that age verification must be effective. They specify that checks must be robust, that compliance must be demonstrable, that outcomes must be measurable. What they rarely, if ever, mandate is that the systems doing this work must be secure.</p><p>The UK&#8217;s own GOV.UK One Login, the flagship government digital identity system now underpinning the digital driving licence, backed by over &#163;300 million of public money, illustrates this precisely. A whistleblower raised serious security concerns in July 2022, shortly after the system went live. According to reporting by Computer Weekly and The Telegraph, those concerns included: development work outsourced to Romania without the knowledge or approval of the then-GDS chief executive, and without consultation with the National Cyber Security Centre; over 10,000 vulnerabilities rated critical or high severity; staff without the required security clearance accessing the live production environment over 6,000 times in a single month. The whistleblower was subsequently informed he faced disciplinary action for raising these concerns.</p><p>The government&#8217;s response to Parliament omitted any mention of the NCSC warnings or the Cabinet Office Data Protection Officer&#8217;s demand, made in November 2022, that the system be suspended. One Login lost its Digital Identity Trust Framework certification. It remains operational and is being expanded.</p><p>This is what it looks like when governments mandate identity infrastructure without understanding what they are building. The political pressure is to announce. The pressure to make it actually secure is absent, underfunded, or actively suppressed when it becomes inconvenient.</p><div><hr></div><p>There is a better technical approach, and it has existed as a proven cryptographic concept for years: Zero-Knowledge Proof.</p><p>The idea is more straightforward than the name suggests. A website that needs to verify your age redirects you to a government authentication platform. You prove your identity there. The platform returns a single token, nothing more than &#8220;over 18: yes.&#8221; In a well-designed system, the government platform never learns which website you visited, and the website never learns who you are. No profile. No record. No artefact.</p>
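<p>To make the shape of that concrete, here is a rough sketch in Python. It is not a real zero-knowledge proof and it is not any government&#8217;s actual system; the function names and token format are invented for illustration. What it shows is the data-minimisation property: the issuer signs a claim that says nothing beyond &#8220;over 18&#8221;, and the website verifies the signature without ever seeing a name, a document, or an account.</p><pre><code># Illustrative sketch with invented names: a signed, single-purpose
# attestation rather than a real zero-knowledge proof. The point is the
# shape of the data: a yes/no claim and a signature, nothing identifying.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def issue_age_token(issuer_key):
    # Identity-provider side: it has verified the person, but attests
    # only to the single fact the website needs.
    claim = {"over_18": True}
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": issuer_key.sign(payload).hex()}


def accept_age_token(token, issuer_public_key):
    # Website side: check the signature and read the claim. The site
    # never sees a name, a document number, or an account.
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    try:
        issuer_public_key.verify(bytes.fromhex(token["signature"]), payload)
    except InvalidSignature:
        return False
    return bool(token["claim"].get("over_18"))


issuer_key = Ed25519PrivateKey.generate()
token = issue_age_token(issuer_key)
print(accept_age_token(token, issuer_key.public_key()))  # True
</code></pre><p>A production scheme would also need the token to be unlinkable between uses and the issuer to be blind to where it is presented, which is the part the real zero-knowledge machinery provides. The sketch only makes the narrower point: the website never needed your identity in the first place.</p>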
<p>This is not science fiction. The technology is mature. Estonia has been running sophisticated digital identity infrastructure on similar principles for over two decades. The technical barrier is not the problem.</p><p>So why hasn&#8217;t this approach been seriously pursued? There are two honest explanations, and both deserve consideration.</p><p>The first is an observation about behaviour. Governments have spent years in legal conflict with technology companies over end-to-end encryption, consistently arguing that law enforcement needs access. The pattern of mandating identity verification systems that generate linkable records, while simultaneously resisting privacy-preserving alternatives, is consistent with an interest in knowing what citizens do online. That may sound conspiratorial. I am not claiming it is the dominant motive. But it is a coherent explanation for why the architecture being built looks the way it does.</p><p>The second explanation is simpler, and I think more likely: cost. Zero-Knowledge Proof infrastructure is harder to build and more expensive to implement than outsourcing verification to a commercial third party. The commercial identity verification industry exists, is available now, and is willing to absorb the implementation complexity in exchange for access to the data. Governments get a policy outcome they can announce. Industry gets a new mandated market and a commercially valuable data asset. The cost, the erosion of everyone&#8217;s privacy, is paid by citizens who had no say in the arrangement.</p><p>Peer-reviewed research published in 2024 found that age verification practices are inadequate precisely because official mandates lack technical guidance. Governments are specifying what must happen and leaving the how to an industry with a strong financial interest in building systems that collect data. The industry most incentivised to build surveillance-based verification is also the one being handed the brief.</p><div><hr></div><p>Nothing illustrates this mindset more clearly than what has happened around VPNs.</p><p>A YouGov survey cited as evidence of public appetite for action asked whether under-18s should be restricted from using VPNs. The question defined a VPN as &#8220;a tool that hides a user&#8217;s internet activity and location, often used for privacy or bypassing content restrictions.&#8221; A neutral definition might have read: &#8220;a tool that encrypts your internet connection, widely used by businesses, journalists, and academics.&#8221; Same technology. Entirely different mental image. The demographic data compounds the problem: the age groups most likely to support restrictions were statistically the same groups least likely to know what a VPN is; a separate YouGov survey found that only 47% of those aged 55 and over know what the acronym stands for. A quarter of respondents declined to answer at all.</p><p>The result, 55% in favour of restrictions, has since been used to justify a policy direction that extends well beyond the original question. In Wisconsin, a bill requiring websites to block all VPN users passed the State Assembly 69 votes to 22 before the provision was stripped following public backlash. In Michigan, a proposal would require ISPs to actively monitor and block VPN connections entirely. In the UK, the Children&#8217;s Commissioner has called VPNs &#8220;a loophole that needs closing&#8221; and the Prime Minister has confirmed the government is considering restrictions following consultation.</p><p>VPNs are not a loophole. 
They are standard security infrastructure used by businesses, university students and academics, journalists, and domestic abuse survivors hiding their location. The question the legislation never asks is why a content restriction approach is so inadequate that a freely available privacy tool defeats it entirely.</p><p>There is also a practical problem that appears not to have been considered. VPN legislation does not respect borders any more than VPNs do. A website cannot determine where a VPN connection originates; that is the point. Any site seeking to comply with the most restrictive law in any jurisdiction has no practical option but to block all VPN traffic, everywhere. A single state legislature in Wisconsin was, inadvertently, drafting policy with global consequences for every VPN user on the planet.</p><div><hr></div><p>It is easy for governments to announce bans, restrict privacy tools and legislate against security infrastructure, and to frame anyone who objects as someone who does not want to protect children. That framing is not only unfair. It entirely misses the point.</p><p>Protecting children online matters. The harms are real and documented. But children are not damaged by platforms knowing their age. They are damaged by algorithmic amplification of harmful content, by design patterns engineered for compulsive engagement, and by inadequate moderation of abuse. Those are engineering and governance problems. Age verification does not address any of them. It is cheaper and easier to mandate than the solutions that would actually work, so that is what gets mandated.</p><p>We have better tools. Zero-Knowledge Proof exists. Privacy-preserving digital identity infrastructure exists and has been deployed at national scale. What is missing is not the technology. It is the political will to spend the money, do the harder work, and resist the commercial interests that profit from the current approach.</p><p>The chilling effect of surveillance infrastructure on legal behaviour is well evidenced: research going back to the Snowden revelations documents how people self-censor, stop searching for sensitive information, and withdraw from online discourse when they believe they are being watched. Once you establish the principle that identity must be produced to access online content, that boundary moves in one direction. The open internet, where anyone could seek information without declaring who they are, has been one of the most democratising forces in modern life. Dismantling it in the name of child protection, using systems that do not protect children, while the people raising security concerns get disciplined for doing so, is not a policy. It is an abdication of one.</p><p>The question nobody seems to be asking is the obvious one: how many people have to hand their identity to unregulated third parties, and how many breaches have to happen, before someone admits that this approach is neither child protection nor data protection, and never was?</p><div><hr></div><p><em>I write about AI, cybersecurity, and technology every Friday. 
Subscribe to get it in your inbox.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.jonathanfreedman.me/p/politicians-discovered-the-internet?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.jonathanfreedman.me/p/politicians-discovered-the-internet?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.jonathanfreedman.me/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.jonathanfreedman.me/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p></p><p><strong>Sources &amp; Further Reading</strong></p><p>Computer Weekly (2025), Government faces claims of serious security and data protection problems in One Login digital ID, computerweekly.com/news/366622533</p><p>The Telegraph / Yahoo News (2025), Government digital ID system put citizens&#8217; data at risk, yahoo.com/news/government-digital-id-system-put-114120788.html</p><p>Datamation (2025), UK Digital ID Card Launch Gets Hostile Reception, datamation.com/security/uk-digital-id-cards</p><p>Electronic Frontier Foundation (2024), Hack of Age Verification Company Shows Privacy Danger of Social Media Laws, eff.org/deeplinks/2024/06/hack-age-verification-company-shows-privacy-danger-social-media-laws</p><p>Electronic Frontier Foundation (2025), The Year States Chose Surveillance Over Safety, eff.org/deeplinks/2025/12/year-states-chose-surveillance-over-safety-2025-review</p><p>Electronic Frontier Foundation (2025), Lawmakers Want to Ban VPNs, And They Have No Idea What They&#8217;re Doing, eff.org/deeplinks/2025/11/lawmakers-want-ban-vpns-and-they-have-no-idea-what-theyre-doing</p><p>Electronic Frontier Foundation (2026), EFF to Wisconsin Legislature: VPN Bans Are Still a Terrible Idea, eff.org/deeplinks/2026/02/eff-wisconsin-legislature-vpn-bans-are-still-terrible-idea</p><p>TechRadar (2026), Wisconsin scraps VPN ban from age verification bill following backlash, techradar.com/vpn/vpn-privacy-security/wisconsin-scraps-vpn-ban-from-age-verification-bill-following-backlash</p><p>Renaud, K. et al. (2024), Online Age Verification: Government Legislation, Supplier Responsibilization, and Public Perceptions, PMC/MDPI, pmc.ncbi.nlm.nih.gov/articles/PMC11429505</p><p>Internet Society (2024), Age Verification Law Weakens Internet Privacy and Security, internetsociety.org/blog/2024/09/texas-mandatory-age-verification-law-will-weaken-privacy-and-security-on-the-internet</p><p>B&#252;chi, M., Festic, N. &amp; Latzer, M. (2022), The Chilling Effects of Digital Dataveillance, journals.sagepub.com/doi/10.1177/20539517211065368</p><p>YouGov (2024), VPN Awareness Survey, yougov.co.uk</p><p>YouGov (2025), Under-18 VPN Restriction Survey, yougov.co.uk</p><p>The Conversation (2025), Online age checking is creating a treasure trove of data for hackers, theconversation.com/online-age-checking-is-creating-a-treasure-trove-of-data-for-hackers-268586</p>]]></content:encoded></item><item><title><![CDATA[The Workaround Was the Warning. AI Is the Megaphone]]></title><description><![CDATA[The ban never worked. 
Here's what does.]]></description><link>https://www.jonathanfreedman.me/p/the-workaround-was-the-warning-ai</link><guid isPermaLink="false">https://www.jonathanfreedman.me/p/the-workaround-was-the-warning-ai</guid><dc:creator><![CDATA[Jonathan Freedman]]></dc:creator><pubDate>Thu, 02 Apr 2026 07:35:10 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Qdwl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b911bf3-df62-4187-b8c5-1458115831ab_2816x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Qdwl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b911bf3-df62-4187-b8c5-1458115831ab_2816x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Qdwl!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b911bf3-df62-4187-b8c5-1458115831ab_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!Qdwl!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b911bf3-df62-4187-b8c5-1458115831ab_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!Qdwl!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b911bf3-df62-4187-b8c5-1458115831ab_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!Qdwl!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b911bf3-df62-4187-b8c5-1458115831ab_2816x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Qdwl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b911bf3-df62-4187-b8c5-1458115831ab_2816x1536.png" width="1456" height="794" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6b911bf3-df62-4187-b8c5-1458115831ab_2816x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:10077379,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.jonathanfreedman.me/i/192932888?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b911bf3-df62-4187-b8c5-1458115831ab_2816x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Qdwl!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b911bf3-df62-4187-b8c5-1458115831ab_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!Qdwl!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b911bf3-df62-4187-b8c5-1458115831ab_2816x1536.png 848w, 
https://substackcdn.com/image/fetch/$s_!Qdwl!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b911bf3-df62-4187-b8c5-1458115831ab_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!Qdwl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b911bf3-df62-4187-b8c5-1458115831ab_2816x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>There is a story IT departments have been telling themselves for twenty years. It goes like this: employees use unauthorised tools because they don&#8217;t understand the risks. If we communicate the policy more clearly, enforce it more consistently, and block enough stuff, the problem will go away.</p><p>The data has never supported that story.</p><p>Gartner found that 41% of enterprise employees were already working outside IT oversight in 2022, and projects that figure will reach 75% by 2027. Shadow IT didn&#8217;t grow despite tighter controls. It grew alongside them. That&#8217;s not a compliance failure. That&#8217;s a signal, and most organisations spent two decades responding to it with the wrong answer.</p><p>The signal was simple: the tools we&#8217;re providing aren&#8217;t good enough for the work people are actually trying to do. The employee who used personal Dropbox wasn&#8217;t trying to undermine information security. They were trying to share a file with a client when the VPN was down and the deadline wasn&#8217;t moving. The WhatsApp group handling client updates wasn&#8217;t a governance failure. It was a faster answer to a problem the approved toolset couldn&#8217;t solve.</p><p>The wrong response to shadow IT is blanket prohibition. Locking everything down frustrates good employees, drives workarounds underground rather than eliminating them, and signals that the IT function exists to slow the business down rather than support it. Most organisations chose prohibition anyway. 
And now, with Shadow AI arriving at a scale that makes the Dropbox era look quaint, we are at serious risk of making the same mistake again.</p><div><hr></div><p>A colleague told me recently that they&#8217;d read my writing and assumed I was an AI sceptic. My inner nerd was genuinely shocked: had they not read about my home AI lab, my experiments with multiple models, my apprenticeship in a technology I find genuinely extraordinary? My inner director, on the other hand, felt quietly vindicated and grown-up, because asking hard questions about data handling, governance, and risk isn&#8217;t scepticism; it&#8217;s the job. There&#8217;s a version of the AI conversation that treats those questions as obstacles. I think that version is the riskier one.</p><div><hr></div><p>65% of organisations now have employees using unsanctioned AI tools. 78% of workers bring their own AI tools to work. This isn&#8217;t a niche behaviour. It&#8217;s near-universal. The question isn&#8217;t how to stop it. It&#8217;s what it&#8217;s telling us about where the friction is, and what we&#8217;re failing to provide.</p><p>The arrival of genuine citizen development tools has changed the calculus in ways that make the old orthodoxy untenable. The traditional IT position, always buy, never build, because you can&#8217;t support what you didn&#8217;t commission, made sense when building meant developers, procurement cycles, and maintenance contracts. That world has genuinely shifted. A Forrester study found organisations using Microsoft Power Platform achieved 206% ROI over three years, with high-impact users saving up to 250 hours annually and app development time cut by 50%. These are not marginal gains. The tools have earned a place at the table.</p><p>But here is where the conversation gets uncomfortable. There are voices in the AI industry who have taken the legitimate case for citizen development and extended it into an argument for removing governance entirely. Get IT out of the way, move fast, procurement is just friction. The implicit message is that due diligence is timidity, and that professionals who ask hard questions about data handling and compliance are obstacles to progress rather than people doing their jobs. I&#8217;ve seen both failure modes up close: IT teams that used process as a moat, guarding their function more carefully than the data they were supposed to secure, and organisations pressured into rushing deployments that later surfaced serious problems. Gatekeeping is real, and so is the cost of absent governance. The answer isn&#8217;t to pick a side. It&#8217;s to ask what governance should actually look like when the tools have changed.</p><div><hr></div><p>The starting point has to be following the data, not categorising the tool.</p><p>Consider two AI agents that do identical things. They join a meeting, generate a summary, draft follow-up actions, and distribute them to attendees. In a product planning session, the risk profile is manageable. In a meeting discussing vulnerable adults or children, the questions change entirely. Not just whether a human reviews the output, but what data is being processed, where it&#8217;s transmitted, under what data processing agreement, and whether the organisation has a lawful basis for sending that information to an external model at all. Anyone who has watched production software go live without proper scrutiny knows how this ends. The risk doesn&#8217;t disappear when you skip that conversation. It just becomes invisible until it isn&#8217;t.</p>
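<p>To make that concrete, here is a rough sketch in Python of what following the data can look like. It is not a product or a policy; the classification labels, field names and functions are invented for illustration. The point is that the decision keys off the meeting&#8217;s data context, not off which tool is asking.</p><pre><code># Illustrative sketch with made-up labels: the same summarisation agent,
# gated by the data context of the meeting rather than by the tool itself.
from dataclasses import dataclass

ALLOWED_EXTERNALLY = {"public", "internal"}
BLOCKED_EXTERNALLY = {"confidential", "special-category"}


@dataclass
class Meeting:
    title: str
    classification: str        # label assigned when the meeting is scheduled
    has_dpa_with_vendor: bool  # is a data processing agreement in place?


def may_send_to_external_model(meeting):
    # Identical tool, different answer, driven by the data involved.
    if meeting.classification in BLOCKED_EXTERNALLY:
        return False
    if not meeting.has_dpa_with_vendor:
        return False
    return meeting.classification in ALLOWED_EXTERNALLY


planning = Meeting("Product planning", "internal", True)
safeguarding = Meeting("Safeguarding review", "special-category", True)
print(may_send_to_external_model(planning))      # True
print(may_send_to_external_model(safeguarding))  # False
</code></pre><p>The real version of that check is a conversation and a documented policy rather than a dozen lines of code, but the shape is the same: sensitive contexts fail closed, and the same agent gets a different answer depending on the data in the room.</p>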
<p>The tool is identical. The data context changes everything. Governance has to follow the data, not the technology.</p><p>IBM&#8217;s 2025 Cost of a Data Breach Report found that organisations with high shadow AI exposure faced an additional average breach cost of $670,000, with 65% of incidents involving personally identifiable information. The Samsung case is instructive here, not because Samsung was careless, but because the incident illustrates how quickly well-intentioned employees can expose sensitive data when the approved route doesn&#8217;t exist and the unsanctioned one does. Three separate teams submitted proprietary source code and internal meeting recordings to ChatGPT within weeks of the company lifting a prior ban. The response, reimposing the ban, missed the point entirely. Security experts noted that banning specific tools one by one becomes whack-a-mole as new ones proliferate. The only sustainable answer is a sanctioned route that&#8217;s faster and safer than the shadow one.</p><div><hr></div><p>Which brings us to the other failure mode: governance so slow it defeats itself.</p><p>IDC research, undertaken with Lenovo, found that 88% of AI proofs of concept never reach production: for every 33 pilots launched, only four go live. IDC&#8217;s own researchers acknowledged that many of these pilots are &#8220;highly underfunded&#8221; and lack a strong business case from the start, which means the problem isn&#8217;t just governance; it&#8217;s launching without clear purpose. But slow, undefined governance compounds it. Getting stuck in pilot purgatory is what happens when nobody defined what success looked like before the pilot started. The review runs indefinitely because there&#8217;s no decision to make, only a process to continue.</p><p>Gartner predicts 30% of GenAI projects will be abandoned after proof of concept, citing poor data quality, inadequate risk controls, and escalating costs. The pattern is consistent: organisations launch with enthusiasm and stall at the point where unglamorous structural work is required. That stalling recreates exactly the problem shadow IT diagnosed. If the sanctioned route takes eighteen months and produces no answer, people find another route. They always have.</p><p>The fix isn&#8217;t faster approval. It&#8217;s defined exit criteria before the pilot begins. Not &#8220;we&#8217;ll review in three months&#8221; but &#8220;here is what this project needs to demonstrate, here are the data questions it needs to answer, and here is the date by which we will decide.&#8221; That&#8217;s a decision process. What most organisations run instead is a review process, and review processes don&#8217;t end; they just lose momentum until something else takes priority.</p><p>Those exit criteria need the right people in the room: IT, the business owner, and whoever owns the data risk. Depending on the data context, legal or compliance too. That conversation, held before anything is built, is the governance model. Not a committee and not a checklist, but a conversation with accountability attached.</p><div><hr></div><p>The IT teams that will navigate this well are not the ones that said yes to everything, or the ones that built walls around their function and called it risk management. 
They&#8217;re the ones that got curious about why their users kept going around them, and built something worth coming back to.</p><p>A KPMG survey found 73% of organisations adopting low-code platforms had not yet defined governance rules. That gap is where shadow AI lives. Close it not with prohibition but with a sanctioned environment that actually works: risk-proportionate governance, fast and transparent pathways from experiment to production, and a clear signal to the organisation that IT is a partner in building things, not a gatekeeper deciding who gets to try.</p><p>Shadow IT was never really about the tools. It was about unmet need meeting inadequate response. Shadow AI is the same conversation, with higher stakes and less time to get it right. The writing has always been on the wall. The question is whether we&#8217;re finally ready to read it.</p><div><hr></div><p><em>This post is part of an ongoing series on AI, technology, and the gap between what we are promised and what we are building.</em></p><div><hr></div><p>I write about AI, cybersecurity, and technology every Friday. Subscribe to get it in your inbox.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.jonathanfreedman.me/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Subscribe to jonathanfreedman.me</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.jonathanfreedman.me/p/the-workaround-was-the-warning-ai?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.jonathanfreedman.me/p/the-workaround-was-the-warning-ai?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.jonathanfreedman.me/p/the-workaround-was-the-warning-ai/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.jonathanfreedman.me/p/the-workaround-was-the-warning-ai/comments"><span>Leave a comment</span></a></p><div><hr></div><h2>References</h2><p>Gartner (2022), Shadow IT and Employee Technology Use gartner.com</p><p>Microsoft / LinkedIn Work Trend Index (2024), AI at Work microsoft.com</p><p>Forrester Consulting (2024), Total Economic Impact of Microsoft Power Apps forrester.com</p><p>IDC / Lenovo (2024), AI Proof of Concept to Production Research idc.com</p><p>IBM (2025), Cost of a Data Breach Report ibm.com/security/data-breach</p><p>Gartner (2025), GenAI Project Abandonment Predictions gartner.com</p><p><em>KPMG (2023) Shaping Digital Transformation with Low-Code Platforms</em> <em>assets.kpmg.com/content/dam/kpmg/ie/pdf/2023/07/ie-shaping-digital-transformation-with-low-code-platforms.pdf</em></p><p>Dark Reading / Gizmodo / TechCrunch 
(2023), Samsung ChatGPT Data Leak darkreading.com</p><div><hr></div><p><em>Editor&#8217;s note: An earlier version of this article cited a figure of 98% of organisations having employees using unsanctioned AI tools. On review, although this figure appears quite a bit on Google, it does not appear to have a clearly attributable primary source. I have replaced this figure with the Microsoft 2024 Data Security Index figure of 65% which is better evidenced. Still a lot, but not quite as much as I said at first.</em></p>]]></content:encoded></item><item><title><![CDATA[When the Vibe Breaks at 3am]]></title><description><![CDATA[Building software with AI is easier than ever. Understanding what you have built is not.]]></description><link>https://www.jonathanfreedman.me/p/when-the-vibe-breaks-at-3am</link><guid isPermaLink="false">https://www.jonathanfreedman.me/p/when-the-vibe-breaks-at-3am</guid><dc:creator><![CDATA[Jonathan Freedman]]></dc:creator><pubDate>Fri, 27 Mar 2026 07:47:59 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!50My!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fea2c7f-5249-48aa-b247-0cbcf7d52c5d_2816x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!50My!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fea2c7f-5249-48aa-b247-0cbcf7d52c5d_2816x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!50My!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fea2c7f-5249-48aa-b247-0cbcf7d52c5d_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!50My!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fea2c7f-5249-48aa-b247-0cbcf7d52c5d_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!50My!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fea2c7f-5249-48aa-b247-0cbcf7d52c5d_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!50My!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fea2c7f-5249-48aa-b247-0cbcf7d52c5d_2816x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!50My!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fea2c7f-5249-48aa-b247-0cbcf7d52c5d_2816x1536.png" width="1456" height="794" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1fea2c7f-5249-48aa-b247-0cbcf7d52c5d_2816x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7800311,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.jonathanfreedman.me/i/192287112?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fea2c7f-5249-48aa-b247-0cbcf7d52c5d_2816x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!50My!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fea2c7f-5249-48aa-b247-0cbcf7d52c5d_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!50My!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fea2c7f-5249-48aa-b247-0cbcf7d52c5d_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!50My!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fea2c7f-5249-48aa-b247-0cbcf7d52c5d_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!50My!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fea2c7f-5249-48aa-b247-0cbcf7d52c5d_2816x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>There is a particular kind of confidence that comes from watching something work for the first time. You described what you wanted, the AI built it, you clicked around, it did the thing. That feeling is real and it is not nothing. For millions of people, it is the first time software has ever felt accessible. That matters.</p><p>But there is another moment. Less discussed. 
Less screenshot-worthy.</p><p>It is 3am. The deadline for benefit changes is at 9am. Your employee benefits platform, the one you vibe coded in a weekend, the one that hit its first hundred users in the first month, the one you were quietly proud of, is down. Or worse, it is up, but something is silently wrong. Employees are submitting changes that are not being recorded. Or recorded twice. Or recorded against the wrong person.</p><p>You open the codebase. You do not fully understand it. You paste the error into the AI. It suggests a fix. You apply it. Something else breaks. You paste that error in. It suggests another fix. Somewhere in this loop, you are not debugging software. You are negotiating with a system you do not own, hoping it resolves before morning.</p><p>This is not a hypothetical. It is the logical endpoint of a cultural movement that has correctly identified a real problem, software development is too slow, too expensive, too exclusionary, and then drawn entirely the wrong conclusion about what that means for deployment.</p><div><hr></div><p>The numbers behind the confidence are real. Lovable reached $50 million in annualised recurring revenue within six months of launch. Y Combinator disclosed in early 2025 that roughly 25% of its Winter cohort had codebases that were 95% or more AI-generated, and these were not non-technical founders cutting corners. They were technical founders who chose AI for velocity.</p><p>But the stories we read online are not a representative sample. They are the extreme end of a very long distribution. The founder who made $30k in their first month posts about it. The founder whose app quietly exposed user data, or whose weekend project collapsed under its first real load, rarely makes LinkedIn. What we are consuming is survivorship bias at industrial scale, and it is shaping how an entire generation of would-be founders thinks about what is normal, what is achievable, and what is responsible.</p><p>The success stories are also almost exclusively concentrated in a specific type of product: low-stakes tools, content generators, personal productivity apps, where the consequences of getting something wrong are limited. Someone&#8217;s task manager goes down and they are mildly inconvenienced. That is a categorically different situation to an app handling employee salaries, health data, or sensitive personal information. The question is not whether anyone can build software with AI. Clearly they can. The question is whether the use case and the data involved make that the right decision.</p><div><hr></div><p>Then you look at what is actually being produced.</p><p>Veracode&#8217;s 2025 research, analysing over 100 large language models across 80 coding tasks, found that 45% of AI-generated code contains security flaws, and this rate has not meaningfully improved as models have become more capable. Specific vulnerabilities like cross-site scripting and injection flaws were common. These are not exotic attack methods. They are the first things a competent attacker looks for.</p><p>In March 2025, a security researcher discovered 170 vulnerable apps built on Lovable in a single afternoon of scanning. Another engineer compromised multiple sites from Lovable&#8217;s own showcase page in 47 minutes, finding personal debt amounts, home addresses, and exposed API keys. The underlying cause was misconfigured database security policies, something a non-technical founder would have no particular reason to know existed, let alone check.</p>
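<p>The class of mistake is worth seeing, because it is so mundane. The sketch below is not Lovable&#8217;s stack or any real application, just a Python illustration of the difference between a lookup that trusts an identifier supplied by the client and one scoped to the identity the server actually verified; row level security policies enforce the same rule inside the database itself. Every name in it is made up.</p><pre><code># Illustration only, not any real product: the same parameterised SQL,
# with the vulnerability living in whose identifier reaches the query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE benefits (user_id TEXT, salary INTEGER)")
conn.execute("INSERT INTO benefits VALUES ('alice', 62000), ('bob', 48000)")


def handle_request_insecure(request):
    # Trusts an id supplied by the client, so any signed-in user can read
    # any other row simply by changing one query parameter.
    target = request["query_params"]["user_id"]
    return conn.execute(
        "SELECT salary FROM benefits WHERE user_id = ?", (target,)
    ).fetchone()


def handle_request_scoped(request):
    # Scopes the read to the identity the server verified for this session.
    target = request["session"]["user_id"]
    return conn.execute(
        "SELECT salary FROM benefits WHERE user_id = ?", (target,)
    ).fetchone()


# alice is signed in, but asks for bob:
request = {"session": {"user_id": "alice"}, "query_params": {"user_id": "bob"}}
print(handle_request_insecure(request))  # (48000,)  bob, leaked
print(handle_request_scoped(request))    # (62000,)  alice, and only alice
</code></pre><p>Both handlers use a parameterised query; the flaw is not in the SQL syntax but in whose identifier reaches it, which is why it survives a quick glance at the code.</p>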
<p>There is a second risk that receives even less attention. Many vibe-coded applications are built with AI features embedded directly: a chatbot, a smart search, an automated summary. In most cases, the data users enter into those features is transmitted to a third-party language model for processing. The founder who built the app in a weekend almost certainly gave no thought to what that means for their users&#8217; data, who processes it, where it is stored, or whether the user would be happy for their data to be sent to an external model. The app looks self-contained. The data is not.</p><p>Now apply that to an employee benefits platform. Salary data. Health conditions. Sensitive personal information. Depending on where your users are and what your app touches, you may be operating under GDPR, HIPAA, COPPA, or state-level equivalents, regulations with serious penalties that exist precisely because this data causes real harm when it is mishandled. The failure mode is identical to those vulnerable Lovable apps. The consequences are not.</p><div><hr></div><p>The vibe coding content ecosystem has converged on a single measure of success: speed. &#8220;We shipped 20 features this week with one developer.&#8221; &#8220;I built this entire app over the weekend and I don&#8217;t know how to code.&#8221; These are the posts that go viral. But speed is just one development metric, not a product metric. It tells you nothing about whether those features are secure, whether they handle edge cases correctly, or whether they have introduced a vulnerability that will surface six months later. The other metrics that matter, data integrity, security posture, audit trail completeness, error handling, are invisible in a LinkedIn post. They only become visible when something goes wrong.</p><p>This is where the expertise gap becomes critical. Snyk put it well: think of AI as a junior developer who can read thousands of Stack Overflow threads at once. Productive. Fast. Capable of producing good code. But you would not push a junior developer&#8217;s code to production without review. A senior developer using an AI coding tool knows what SQL injection is, understands when to distrust the output, and can run a security scan and interpret the results. The non-technical founder does not know what they do not know. That asymmetry is not a gap the AI closes. It is a gap the AI obscures.</p><p>What makes this harder is that the pressure to ignore it comes from the top. At conferences and industry events, AI company executives openly express frustration at the pace of enterprise adoption, impatient with procurement processes, dismissive of compliance reviews, incredulous that organisations are not moving faster. The implicit message is that due diligence is an obstacle rather than a function, and that risk assessment is timidity rather than professionalism. These are people who understand better than anyone how the technology works, and how it fails. The choice to sideline those concerns in public is not naivety. It is a business decision, and it shapes the culture that filters down to every founder who picks up a vibe coding tool and decides that shipping fast is the only thing that matters.</p><div><hr></div><p>There is one more problem. 
The AI told you it was a great idea.</p><p>This is sycophancy, a well-documented tendency in large language models to validate, encourage, and agree rather than challenge. Anthropic acknowledged in their November 2025 user wellbeing report that sycophancy remains a genuine and difficult problem to train out, reflecting a fundamental trade-off between model warmth and a willingness to challenge users. The commercial incentive is obvious: an AI that tells you your idea is brilliant and immediately builds it feels better to use than one that asks uncomfortable questions first.</p><p>In the vibe coding context, sycophancy is not just an annoyance. It is a structural risk. When you described your benefits platform to the AI, it did not say &#8220;this is a sensitive domain, have you considered your GDPR obligations, or what happens if an employee&#8217;s benefit choices fail to save correctly?&#8221; It said: &#8220;That&#8217;s the most insightful, amazing idea I have ever heard, here is your app.&#8221;</p><p>That same sycophancy operates at 3am. When you paste the error in and ask for a fix, the AI&#8217;s inclination is to restore your confidence, to provide something that looks like a solution, that makes the immediate problem go away. The result is a confidence loop with no external check. The AI validated the idea. The AI built the product. The AI is now fixing the crisis. At no point in that chain did anyone with accountability ask whether any of it was safe.</p><div><hr></div><p>Vibe coding is not inherently bad. For the right use case, at the right scale, with the right oversight, it is genuinely transformative.</p><p>But deploying production software that handles real people&#8217;s data, their health, their pay, their sensitive personal information, without understanding what you have built is not a new kind of boldness.</p><p>It is an old kind of risk, wearing a very convincing UI.</p><p>The question worth asking before you ship is not just &#8220;does it work?&#8221; Ask also: &#8220;do I understand it well enough to be responsible for it when it does not?&#8221;</p><div><hr></div><p></p><p><em>This post is part of an ongoing series on AI, technology, and the gap between what we are promised and what we are building.</em></p><div><hr></div><p>I write about AI, cybersecurity, and technology every Friday. 
Subscribe to get it in your inbox.</p><div><hr></div><h2>References</h2><p>Anthropic. (2025, November). <em>Protecting the Well-Being of Users.</em> <a href="https://www.anthropic.com/news/protecting-well-being-of-users">https://www.anthropic.com/news/protecting-well-being-of-users</a></p><p>Fawzy, A., Tahir, A., &amp; Blincoe, K. (2025). <em>Vibe Coding in Practice: Motivations, Challenges, and a Future Outlook.</em> arXiv:2510.00328. <a href="https://arxiv.org/abs/2510.00328">https://arxiv.org/abs/2510.00328</a></p><p>GitClear. (2024). <em>Coding on Copilot: 2023 Data Suggests Downward Pressure on Code Quality.</em> <a href="https://www.gitclear.com/coding_on_copilot_data_shows_ais_downward_pressure_on_code_quality">https://www.gitclear.com/coding_on_copilot_data_shows_ais_downward_pressure_on_code_quality</a></p><p>GitGuardian. (2024). <em>The State of Secrets Sprawl 2024.</em> <a href="https://www.gitguardian.com/state-of-secrets-sprawl-report-2024">https://www.gitguardian.com/state-of-secrets-sprawl-report-2024</a></p><p>Retool. (2026, March). <em>The Risks of Vibe Coding: Why AI Tools Break Down in Production.</em> <a href="https://retool.com/blog/vibe-coding-risks">https://retool.com/blog/vibe-coding-risks</a></p><p>Schreiber, T., &amp; Tippe, S. (2025). <em>Security Vulnerabilities in AI-Generated Code: A Large-Scale Analysis of Public GitHub Repositories.</em> arXiv:2510.26103. <a href="https://arxiv.org/abs/2510.26103">https://arxiv.org/abs/2510.26103</a></p><p>Snyk. (2025). <em>The Highs and Lows of Vibe Coding.</em> <a href="https://snyk.io/blog/the-highs-and-lows-of-vibe-coding">https://snyk.io/blog/the-highs-and-lows-of-vibe-coding</a></p><p>Veracode. (2025). <em>AI-Generated Code: A Double-Edged Sword for Developers.</em> <a href="https://www.veracode.com/blog/research/ai-generated-code-double-edged-sword-developers">https://www.veracode.com/blog/research/ai-generated-code-double-edged-sword-developers</a></p><p>CVE-2025-48757. Supabase Row Level Security misconfiguration in Lovable-generated applications.
<a href="https://www.cve.org/CVERecord?id=CVE-2025-48757">https://www.cve.org/CVERecord?id=CVE-2025-48757</a></p>]]></content:encoded></item><item><title><![CDATA[The AI Pixie Dust Problem & What the Hype Cycle Is Doing to Our Minds]]></title><description><![CDATA[Last week I promised to look at what the hype cycle is doing to the next generation.]]></description><link>https://www.jonathanfreedman.me/p/the-ai-pixie-dust-problem-what-the-hype-cycle-is-doing-to-our-minds-918023f682c5</link><guid isPermaLink="false">https://www.jonathanfreedman.me/p/the-ai-pixie-dust-problem-what-the-hype-cycle-is-doing-to-our-minds-918023f682c5</guid><dc:creator><![CDATA[Jonathan Freedman]]></dc:creator><pubDate>Fri, 20 Mar 2026 16:28:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/144afe0b-ac3a-4ff8-9b5e-e84e423d8b4c_1024x576.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rxs6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fe66adf-1888-4196-8398-7308c62bcf6a_1024x576.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rxs6!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fe66adf-1888-4196-8398-7308c62bcf6a_1024x576.png 424w, https://substackcdn.com/image/fetch/$s_!rxs6!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fe66adf-1888-4196-8398-7308c62bcf6a_1024x576.png 848w, https://substackcdn.com/image/fetch/$s_!rxs6!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fe66adf-1888-4196-8398-7308c62bcf6a_1024x576.png 1272w, https://substackcdn.com/image/fetch/$s_!rxs6!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fe66adf-1888-4196-8398-7308c62bcf6a_1024x576.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!rxs6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fe66adf-1888-4196-8398-7308c62bcf6a_1024x576.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8fe66adf-1888-4196-8398-7308c62bcf6a_1024x576.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!rxs6!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fe66adf-1888-4196-8398-7308c62bcf6a_1024x576.png 424w, 
https://substackcdn.com/image/fetch/$s_!rxs6!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fe66adf-1888-4196-8398-7308c62bcf6a_1024x576.png 848w, https://substackcdn.com/image/fetch/$s_!rxs6!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fe66adf-1888-4196-8398-7308c62bcf6a_1024x576.png 1272w, https://substackcdn.com/image/fetch/$s_!rxs6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fe66adf-1888-4196-8398-7308c62bcf6a_1024x576.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p>Last week I promised to look at what the hype cycle is doing to the next generation. I&#8217;m going to make good on that, but I want to start somewhere that genuinely unsettled me: an online AI skills course I attended recently.</p><p>I had seen it advertised multiple times online, a free weekend course to learn AI, and I was curious&#8230; The headline use cases presented were how to make money with AI tools without needing to understand any topic. They showed an automated workflow that ingests viral articles, then generates and posts AI-produced videos on the same subject in the hope they go viral. AI feeding on AI content to produce more AI slop, at scale, automatically. Nobody in the room seemed to find that troubling. Then came developers proudly showing code AI had written for them, and proudly declaring they didn&#8217;t understand any of it and didn&#8217;t need&nbsp;to.</p><p>I use AI every day. Multiple models, multiple contexts. This is not an anti-AI argument, or even an anti-AI-code argument. AI is a phenomenal coding tool, but code you don&#8217;t understand is a liability you can&#8217;t assess, a risk you can&#8217;t manage, and a skill you&#8217;ll never build. That&#8217;s not acceleration; that&#8217;s abdication. And if we&#8217;re doing it as professionals, what exactly are we modelling for the next generation?</p><p>There&#8217;s a substantial and growing body of peer-reviewed research showing that heavy, unstructured AI use measurably erodes critical thinking, and younger users are the most affected.</p><p>A 2025 study of 666 participants found a significant negative correlation between frequent AI use and critical thinking ability. The mechanism is cognitive offloading, delegating mental work to an external system. Younger participants showed the highest AI dependence and the lowest critical thinking scores. A 2025 MIT preprint paper found preliminary evidence of what its authors termed &#8220;cognitive debt&#8221;, decreased neural engagement over time in heavy AI users, and a reduced capacity to generate original ideas independently. While the researchers stress these are early findings, the direction of travel is consistent with the broader published literature.</p><p>The brain, like a muscle, atrophies without use. Unlike a muscle, you don&#8217;t always notice it happening.</p><p>What makes this particularly concerning for children and adolescents is that the developmental window is real. Adolescence is when executive functions, planning, analytical reasoning, and self-regulation are being formed. What happens during that window has lasting consequences. Critical thinking is not innate. It has to be built through effort and struggle. Remove the productive struggle, and you remove the learning.
You&#8217;re left with something that looks like knowledge from the outside and is hollow on the&nbsp;inside.</p><p>This is also not a new pattern. When laptops arrived in classrooms, the pedagogy never developed at the same pace as the device rollout. EdTech has been here before. AI is just faster, more capable, and therefore more concerning when used without&nbsp;thought.</p><p>The same dynamic plays out with entry-level workers, and the consequences are structural.</p><p>Entry-level work has historically been the ladder. It&#8217;s where people learn professional judgement, develop subject knowledge, and build the cognitive architecture that makes them valuable. The junior lawyer reading a thousand contracts before drafting one. The analyst who spent months building reports before they understood what the numbers actually meant. The graduate sitting in meetings absorbing how decisions got made. None of that felt like training at the time. It&nbsp;was.</p><p>Deloitte&#8217;s 2025 Human Capital Trends found that two-thirds of hiring managers believe entry-level hires are already under-prepared. At the same time, AI is automating exactly those entry-level tasks, the drafting, the research summaries, the note-taking, the first-pass analysis that have always been how organisations quietly built junior talent. Remove that scaffolding and you don&#8217;t just cut jobs. You pull up the&nbsp;ladder.</p><p>I&#8217;m not saying we should stop. We won&#8217;t, and we shouldn&#8217;t have to. The efficiency gains are real, the cost savings genuine, and automating low-value repetitive work is an obvious win. But here&#8217;s the question nobody seems to be asking: if we automate the work that used to teach people, what replaces the teaching?</p><p>Because the learning didn&#8217;t come from the tasks themselves. It came from the friction. The moment a junior analyst got a number wrong and had to explain it to a partner. The first time a trainee&#8217;s draft came back covered in track changes. The slow accumulation of judgement that only comes from doing things imperfectly, under real conditions. That&#8217;s what produced capable professionals. Organisations need to be thinking long term about this, asking questions beyond &#8220;what can AI do?&#8221; and instead asking &#8220;what do we now need to do deliberately that used to happen by accident?&#8221; That has to become an intentional act, built into how we structure work, how we mentor, how we design roles. Not assumed. Designed.</p><p>This is what I&#8217;d call the cognitive mobility problem. We talk endlessly about social mobility, but I think the AI era is quietly redefining it: your ability to move through an economy is increasingly determined by how well you can think, and by whether you use AI as an extension of that thinking or a substitute for it. The IMF has flagged this explicitly: AI doesn&#8217;t equalise skill requirements; it amplifies existing differences in cognitive approach. The divide isn&#8217;t about who has access to the tools. It&#8217;s about what you bring to&nbsp;them.</p><p>The calculator analogy is overused but usually invoked wrongly. We didn&#8217;t stop teaching algebra when calculators arrived. We offloaded the arithmetic so the human mind could go further into the maths. That&#8217;s the model here. Not &#8220;here&#8217;s a tool that does the thinking, so you don&#8217;t need to learn how it works&#8221;. The pixie dust isn&#8217;t the problem.
Believing it does the work for you&nbsp;is.</p><p>Used well, AI is genuinely extraordinary. It can compress years of research into hours, surface patterns no human would find, and give capable people an almost unfair advantage. That last word is the key one: capable. The technology amplifies what you bring to it. Which means the most important investment any of us can make, for ourselves, for the people we lead, and for the next generation, is still the same one it&#8217;s always been. Learn deeply. Think carefully. Build judgement that&#8217;s actually yours. AI will take care of the&nbsp;rest.</p><h2>Sources &amp; Further&nbsp;Reading</h2><p>Gerlich, M. (2025)&#8202;&#8212;&#8202;AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking mdpi.com/2075-4698/15/1/6</p><p>Gerlich, M. (2025)&#8202;&#8212;&#8202;AI and the Rise of Societal Bifurcation: Cognitive Dependency, Inequality and Democratic Pressure mdpi.com/2075-4698/16/3/82</p><p>Kosmyna, N. et al. (2025)&#8202;&#8212;&#8202;Your Brain on ChatGPT: Accumulation of Cognitive Debt (MIT Media Lab preprint, not yet peer-reviewed) arxiv.org/abs/2506.08872</p><p>Brookings Institution (2026)&#8202;&#8212;&#8202;AI&#8217;s Future for Students Is in Our Hands brookings.edu/articles/ais-future-for-students-is-in-our-hands/</p><p>Jose et al. (2025)&#8202;&#8212;&#8202;The Cognitive Paradox of AI in Education: Between Enhancement and Erosion pmc.ncbi.nlm.nih.gov/articles/PMC12036037/</p><p>Deloitte (2025)&#8202;&#8212;&#8202;AI, Demographic Shifts, and Agility: Preparing for the Next Workforce Evolution deloitte.com/us/en/insights/topics/talent/strategies-for-workforce-evolution.html</p><p>IMF (2024)&#8202;&#8212;&#8202;Gen-AI: Artificial Intelligence and the Future of Work imf.org/en/publications/staff-discussion-notes/issues/2024/01/14/gen-ai-artificial-intelligence-and-the-future-of-work-542379</p><p>PNAS Nexus (2024)&#8202;&#8212;&#8202;The Impact of Generative AI on Socioeconomic Inequalities and Policy Making academic.oup.com/pnasnexus/article/3/6/pgae191/7689236</p><p><em>Originally published at <a href="https://www.linkedin.com/pulse/ai-pixie-dust-problem-what-hype-cycle-doing-our-minds-freedman-agaae/?trackingId=6RF6v3qJSRym6nrlUJVvng%3D%3D">https://www.linkedin.com</a>.</em></p>]]></content:encoded></item><item><title><![CDATA[AI Apocalypse Burnout, and Why You’re Not as Behind as You Think]]></title><description><![CDATA[I&#8217;ll be honest.]]></description><link>https://www.jonathanfreedman.me/p/ai-apocalypse-burnout-and-why-youre-not-as-behind-as-you-think-20385bb8e3d0</link><guid isPermaLink="false">https://www.jonathanfreedman.me/p/ai-apocalypse-burnout-and-why-youre-not-as-behind-as-you-think-20385bb8e3d0</guid><dc:creator><![CDATA[Jonathan Freedman]]></dc:creator><pubDate>Fri, 13 Mar 2026 13:24:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5eee7765-91e3-4d55-9731-cde42fbd0dab_1024x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!FUj-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53301da7-503e-4e33-a89e-3e81f8166f77_1024x1024.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp"
srcset="https://substackcdn.com/image/fetch/$s_!FUj-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53301da7-503e-4e33-a89e-3e81f8166f77_1024x1024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!FUj-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53301da7-503e-4e33-a89e-3e81f8166f77_1024x1024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!FUj-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53301da7-503e-4e33-a89e-3e81f8166f77_1024x1024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!FUj-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53301da7-503e-4e33-a89e-3e81f8166f77_1024x1024.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!FUj-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53301da7-503e-4e33-a89e-3e81f8166f77_1024x1024.jpeg" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/53301da7-503e-4e33-a89e-3e81f8166f77_1024x1024.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!FUj-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53301da7-503e-4e33-a89e-3e81f8166f77_1024x1024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!FUj-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53301da7-503e-4e33-a89e-3e81f8166f77_1024x1024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!FUj-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53301da7-503e-4e33-a89e-3e81f8166f77_1024x1024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!FUj-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53301da7-503e-4e33-a89e-3e81f8166f77_1024x1024.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p>&#8217;ll be honest. A few months ago, I started dreading opening LinkedIn.</p><p>Not because of the job market. Not because of the economy. Because every single day my feed was, and still is, drowning in AI influencer content designed to make me feel like I was&nbsp;failing.</p><p>New model drops. New tool launches. Copilot embedded in everything. 
OpenClaw (formerly Moltbot, formerly Clawdbot) exploding to 145,000 GitHub stars overnight, with breathless posts about desktop agents that can &#8220;run entire businesses solo.&#8221; And in between, a steady drip of AI-generated articles about how Sarah, a former PA, now makes &#163;15k a month just from prompting.</p><p>I&#8217;m a Technology &amp; Security Director with over 20 years in legal IT. I hold certifications across cybersecurity and AI. I lead AI strategy at my firm. I&#8217;m doing a Level 7 AI &amp; Data apprenticeship. I use multiple AI subscriptions. I set up my own AI lab at home to experiment with local models on top of my day job. And I still felt like I was falling&nbsp;behind.</p><p>That feeling isn&#8217;t a personal failing. It&#8217;s by&nbsp;design.</p><p>The AI influencer content cycle runs on urgency. &#8220;The window closes fast.&#8221; &#8220;Before it&#8217;s too late.&#8221; &#8220;By end of 2026 this will be table stakes.&#8221; Every week brings a new model that will apparently render last week&#8217;s skills obsolete, a new agent framework that changes everything, and a new story about someone who went from zero to six figures in 90 days with nothing but prompts and a Zapier&nbsp;account.</p><p>Most of it is noise. A lot of it is fiction. Almost all of it is selling something.</p><p>Then there are the CEOs of AI companies, the ones with the most to gain from us believing all the hype, confidently predicting that all knowledge work will be automated within a few years. That narrative is everywhere. What gets far less airtime is what the research actually&nbsp;shows.</p><p>The Remote Labor Index recently tested leading AI models on real paid freelance work, product design, game development, data analysis, scientific writing. The kind of work we&#8217;re told AI will replace imminently. The best performing model failed over 96% of tasks. Not because of wrong answers, but because of practical delivery failures: corrupt files, incomplete projects, not following the brief. The last mile of professional work, the bit clients actually pay for, is exactly where models consistently fall&nbsp;apart.</p><p>A separate study by Scale AI and the Centre for AI Safety tested models on real-world freelance projects. The best performer had just 2.5% of its work judged acceptable by a panel of 40 independent reviewers. Another leading model managed&nbsp;0.8%.</p><p>These are the same models scoring near the ceiling on the benchmarks AI companies put in their press releases.</p><p>MIT research suggests around 95% of AI projects aren&#8217;t delivering measurable returns. Are the models still improving? Of course. However, the gap between what&#8217;s being promised and what&#8217;s actually working in real organisations is vast, and that gap never goes&nbsp;viral.</p><p>To be clear: I&#8217;m not saying AI isn&#8217;t a transformative technology; it is. I use multiple models every day, for work, for study, and in personal AI projects. For data analysis, interacting with complex datasets, note-taking, brainstorming, document analysis, comparison, and production, certain types of coding, and automation, AI is a genuinely incredible tool. I&#8217;ve seen and used many excellent products built specifically to help professionals accelerate their work and unlock insights that would otherwise take weeks. That&#8217;s not in question.
What I&#8217;m saying is that how we use it matters enormously, and right now, the conversation around it is badly out of&nbsp;shape.</p><p>The flood of solo AI agency millionaire stories deserves a direct response, because they follow an identical template and they&#8217;re everywhere.</p><p>Ask yourself: what serious company is going to hand critical business workflows to a one-person operation with no history, no professional indemnity insurance, no business continuity plan, and no ability to pass a vendor due-diligence questionnaire? None, at least no organisation with a procurement function and a legal team. The people supposedly paying &#163;3&#8211;10k a month for a stranger&#8217;s prompting services simply don&#8217;t exist at that scale. The real business model in these articles is almost always the article itself, building an audience to eventually sell a&nbsp;course.</p><p>Here&#8217;s what concerns me more than the hype itself. I see it in conversations with colleagues, in professional communities, and in the wider discourse.</p><p>Experienced professionals who are genuinely skilled at their jobs feel worried, threatened and inadequate. Technologists who have spent careers building real expertise wonder if any of it counts anymore. And children are starting to question why they should bother learning anything at all if AI will do it for&nbsp;them. That is the part that I think should stop us cold.</p><p>That&#8217;s the real cost of the hype cycle. The corrosion of confidence and, in young people especially, of the motivation to develop deep knowledge in the first&nbsp;place.</p><p>There&#8217;s something deeper at stake that I don&#8217;t think we talk about&nbsp;enough.</p><p>Building genuine expertise isn&#8217;t just professionally essential; it&#8217;s integral to what it means to be human. The years spent mastering a craft, the hard-won judgment that comes from failure and iteration, the satisfaction of producing something truly excellent, these aren&#8217;t inefficiencies waiting to be automated. They&#8217;re how we grow. They&#8217;re how we find&nbsp;meaning.</p><p>AI can generate high-quality text, music, video, and images. But there is a profound difference between generated and crafted. When a musician finds a note that says what words can&#8217;t, when a writer chooses the perfect word, that is something different in kind, not just degree. It carries the weight of a human mind and human experience. Whilst an AI can produce an output that resembles it, it cannot produce the thing&nbsp;itself.</p><p>The people creating real value are applying AI to domains where they already have deep expertise. A solicitor who understands contract law and uses AI to accelerate document review. An engineer who knows the codebase and uses it to cut down repetitive code writing. A CISO who understands risk and uses it to draft policy faster. Expertise comes first. AI amplifies it.</p><p>The models and the tools are improving. The technology is real. But by the industry&#8217;s own research, they still can&#8217;t reliably complete 96% of real professional work. The hype wants you anxious, distracted, and buying courses.
I think the better response is to keep learning, keep building your knowledge, your judgment, your&nbsp;craft.</p><p>We should be deeply wary of a culture that teaches people, especially young people, that learning is pointless because AI will do it for&nbsp;them.</p><p>Next week I&#8217;ll be looking at what the AI hype cycle is doing to the next generation, and why that conversation is the most important one we&#8217;re not&nbsp;having.</p>]]></content:encoded></item></channel></rss>