When the Vibe Breaks at 3am
Building software with AI is easier than ever. Understanding what you have built is not.
There is a particular kind of confidence that comes from watching something work for the first time. You described what you wanted, the AI built it, you clicked around, it did the thing. That feeling is real and it is not nothing. For millions of people, it is the first time software has ever felt accessible. That matters.
But there is another moment. Less discussed. Less screenshot-worthy.
It is 3am. The deadline for benefit changes is at 9am. Your employee benefits platform, the one you vibe coded in a weekend, the one that hit its first hundred users in the first month, the one you were quietly proud of, is down. Or worse, it is up, but something is silently wrong. Employees are submitting changes that are not being recorded. Or recorded twice. Or recorded against the wrong person.
You open the codebase. You do not fully understand it. You paste the error into the AI. It suggests a fix. You apply it. Something else breaks. You paste that error in. It suggests another fix. Somewhere in this loop, you are not debugging software. You are negotiating with a system you do not own, hoping it resolves before morning.
This is not a hypothetical. It is the logical endpoint of a cultural movement that has correctly identified a real problem (software development is too slow, too expensive, too exclusionary) and then drawn entirely the wrong conclusion about what that means for deployment.
The numbers behind the confidence are real. Lovable reached $50 million in annualised recurring revenue within six months of launch. Y Combinator disclosed in early 2025 that roughly 25% of its Winter cohort had codebases that were 95% or more AI-generated, and these were not non-technical founders cutting corners. They were technical founders who chose AI for velocity.
But the stories we read online are not a representative sample. They are the extreme end of a very long distribution. The founder who made $30k in their first month posts about it; the founder whose app quietly exposed user data, or whose weekend project collapsed under its first real load, rarely makes LinkedIn. What we are consuming is survivorship bias at industrial scale, and it is shaping how an entire generation of would-be founders thinks about what is normal, what is achievable, and what is responsible.
The success stories are also almost exclusively concentrated in a specific type of product: low-stakes tools, content generators, personal productivity apps, where the consequences of getting something wrong are limited. Someone’s task manager goes down and they are mildly inconvenienced. That is a categorically different situation to an app handling employee salaries, health data, or sensitive personal information. The question is not whether anyone can build software with AI. Clearly they can. The question is whether the use case and the data involved make that the right decision.
Then you look at what is actually being produced.
Veracode’s 2025 research, analysing over 100 large language models across 80 coding tasks, found that 45% of AI-generated code contains security flaws, and that this rate has not meaningfully improved as models have become more capable. Specific vulnerability classes such as cross-site scripting and injection were common. These are not exotic attack methods. They are the first things a competent attacker looks for.
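The injection class is worth seeing concretely. Below is a minimal Python sketch, using the standard library’s sqlite3 and an invented employees table (names and figures are made up for illustration), showing how interpolating user input rewrites a query while a parameterised query does not:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, salary INTEGER)")
conn.execute("INSERT INTO employees VALUES ('alice', 70000), ('bob', 55000)")

def find_employee_vulnerable(name: str):
    # Interpolating user input into SQL: a crafted name can rewrite the query.
    query = f"SELECT name, salary FROM employees WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_employee_safe(name: str):
    # Parameterised query: the driver treats input as data, never as SQL.
    return conn.execute(
        "SELECT name, salary FROM employees WHERE name = ?", (name,)
    ).fetchall()

# A classic injection payload: an always-true condition dumps every row.
payload = "' OR '1'='1"
print(find_employee_vulnerable(payload))  # leaks all rows
print(find_employee_safe(payload))        # returns nothing
```

The fix is one line, which is precisely the point: the vulnerable and the safe versions look almost identical, and nothing about the app’s behaviour on normal input distinguishes them.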
In March 2025, a security researcher discovered 170 vulnerable apps built on Lovable in a single afternoon of scanning. Another engineer compromised multiple sites from Lovable’s own showcase page in 47 minutes, finding personal debt amounts, home addresses, and exposed API keys. The underlying cause was misconfigured database security policies, something a non-technical founder would have no particular reason to know existed, let alone check.
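Row Level Security itself is a Postgres feature configured in SQL; the sketch below is a toy Python model of the misconfiguration (the rows and user names are invented). The point it illustrates: with no per-row policy enforced by the database, any authenticated request can read every row, no matter whose data it is.

```python
# A toy model of row-level access. Each row is tagged with its owner.
ROWS = [
    {"owner": "alice", "debt": 12000},
    {"owner": "bob", "debt": 4500},
]

def fetch_without_rls(requesting_user: str):
    # RLS disabled (the misconfiguration found in the Lovable apps): the
    # database has no opinion about who may see which row, so every
    # authenticated query returns everything.
    return ROWS

def fetch_with_rls(requesting_user: str):
    # A policy along the lines of `USING (owner = auth.uid())` restricts
    # each query to the caller's own rows, enforced by the database itself.
    return [row for row in ROWS if row["owner"] == requesting_user]

print(fetch_without_rls("mallory"))  # an attacker sees everyone's data
print(fetch_with_rls("mallory"))     # an attacker sees nothing
```

Note that the enforcement lives in the database, not the app: a front end that politely shows each user only their own rows offers no protection if the API underneath will hand anyone the full table.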
There is a second risk that receives even less attention. Many vibe-coded applications have AI features embedded directly in them: a chatbot, a smart search, an automated summary. In most cases, the data users enter into those features is transmitted to a third-party language model for processing. The founder who built the app in a weekend almost certainly gave no thought to what that means for their users’ data: who processes it, where it is stored, or whether the users would be happy for it to be sent to an external model. The app looks self-contained. The data is not.
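To make that data flow concrete, here is a hedged sketch of the kind of request an embedded assistant feature typically assembles before calling an external model. The endpoint, model name, and field names are all hypothetical, not any specific provider’s API; what matters is that everything in the payload, including bundled user context, leaves your infrastructure.

```python
import json

# Hypothetical third-party endpoint; not a real provider's API.
LLM_ENDPOINT = "https://api.example-llm-provider.com/v1/chat"

def build_llm_request(user_message: str, user_context: dict) -> str:
    # Apps often bundle surrounding context (profile fields, recent records)
    # into the prompt to make answers useful. All of it is transmitted to,
    # and processed by, the external provider.
    payload = {
        "model": "some-model",
        "messages": [
            {"role": "system",
             "content": f"User context: {json.dumps(user_context)}"},
            {"role": "user", "content": user_message},
        ],
    }
    return json.dumps(payload)

body = build_llm_request(
    "Why did my salary deduction change?",
    {"name": "Alice", "salary": 70000, "health_plan": "premium"},
)
# The salary and health plan fields are now part of an outbound request body.
print("salary" in body and "health_plan" in body)  # True
```

Nothing in the app’s UI signals that this is happening, which is why a founder who never reads the generated code can ship it without ever making the decision consciously.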
Now apply that to an employee benefits platform. Salary data. Health conditions. Sensitive personal information. Depending on where your users are and what your app touches, you may be operating under GDPR, HIPAA, COPPA, or state-level equivalents, regulations with serious penalties that exist precisely because this data causes real harm when it is mishandled. The failure mode is identical to those vulnerable Lovable apps. The consequences are not.
The vibe coding content ecosystem has converged on a single measure of success: speed. “We shipped 20 features this week with one developer.” “I built this entire app over the weekend and I don’t know how to code.” These are the posts that go viral. But speed is just one development metric, not a product metric. It tells you nothing about whether those features are secure, whether they handle edge cases correctly, or whether they have introduced a vulnerability that will surface six months later. The other metrics that matter, data integrity, security posture, audit trail completeness, error handling, are invisible in a LinkedIn post. They only become visible when something goes wrong.
This is where the expertise gap becomes critical. Snyk put it well: think of AI as a junior developer who can read thousands of Stack Overflow threads at once. Productive. Fast. Capable of producing good code. But you would not push a junior developer’s code to production without review. A senior developer using an AI coding tool knows what SQL injection is, understands when to distrust the output, and can run a security scan and interpret the results. The non-technical founder does not know what they do not know. That asymmetry is not a gap the AI closes. It is a gap the AI obscures.
What makes this harder is that the pressure to ignore it comes from the top. At conferences and industry events, AI company executives openly express frustration at the pace of enterprise adoption, impatient with procurement processes, dismissive of compliance reviews, incredulous that organisations are not moving faster. The implicit message is that due diligence is an obstacle rather than a function. That risk assessment is timidity rather than professionalism. These are people who understand better than anyone how the technology works, and how it fails. The choice to sideline those concerns in public is not naivety. It is a business decision, and it shapes the culture that filters down to every founder who picks up a vibe coding tool and decides that shipping fast is the only thing that matters.
There is one more problem. The AI told you it was a great idea.
This is sycophancy, a well-documented tendency in large language models to validate, encourage, and agree rather than challenge. Anthropic acknowledged in their November 2025 user wellbeing report that sycophancy remains a genuine and difficult problem to train out, reflecting a fundamental trade-off between model warmth and a willingness to challenge users. The commercial incentive is obvious: an AI that tells you your idea is brilliant and immediately builds it feels better to use than one that asks uncomfortable questions first.
In the vibe coding context, sycophancy is not just an annoyance. It is a structural risk. When you described your benefits platform to the AI, it did not say “this is a sensitive domain, have you considered your GDPR obligations, or what happens if an employee’s benefit choices fail to save correctly?” It said: “That’s the most insightful, amazing idea I have ever heard, here is your app.”
That same sycophancy operates at 3am. When you paste the error in and ask for a fix, the AI’s inclination is to restore your confidence, to provide something that looks like a solution, that makes the immediate problem go away. The result is a confidence loop with no external check. The AI validated the idea. The AI built the product. The AI is now fixing the crisis. At no point in that chain did anyone with accountability ask whether any of it was safe.
Vibe coding is not inherently bad. For the right use case, at the right scale, with the right oversight, it is genuinely transformative.
But deploying production software that handles real people’s data, their health, their pay, their sensitive personal information, without understanding what you have built is not a new kind of boldness.
It is an old kind of risk, wearing a very convincing UI.
The question worth asking before you ship is not just “does it work?” Ask also: “do I understand it well enough to be responsible for it when it does not?”
This post is part of an ongoing series on AI, technology, and the gap between what we are promised and what we are building.
I write about AI, cybersecurity, and technology every Friday. Subscribe to get it in your inbox.
References
Anthropic. (2025, November). Protecting the Well-Being of Users. https://www.anthropic.com/news/protecting-well-being-of-users
Fawzy, A., Tahir, A., & Blincoe, K. (2025). Vibe Coding in Practice: Motivations, Challenges, and a Future Outlook. arXiv:2510.00328. https://arxiv.org/abs/2510.00328
GitClear. (2024). Coding on Copilot: 2023 Data Suggests Downward Pressure on Code Quality. https://www.gitclear.com/coding_on_copilot_data_shows_ais_downward_pressure_on_code_quality
GitGuardian. (2024). The State of Secrets Sprawl 2024. https://www.gitguardian.com/state-of-secrets-sprawl-report-2024
Retool. (2026, March). The Risks of Vibe Coding: Why AI Tools Break Down in Production. https://retool.com/blog/vibe-coding-risks
Schreiber, T., & Tippe, S. (2025). Security Vulnerabilities in AI-Generated Code: A Large-Scale Analysis of Public GitHub Repositories. arXiv:2510.26103. https://arxiv.org/abs/2510.26103
Snyk. (2025). The Highs and Lows of Vibe Coding. https://snyk.io/blog/the-highs-and-lows-of-vibe-coding
Veracode. (2025). AI-Generated Code: A Double-Edged Sword for Developers. https://www.veracode.com/blog/research/ai-generated-code-double-edged-sword-developers
CVE-2025-48757. Supabase Row Level Security misconfiguration in Lovable-generated applications. https://www.cve.org/CVERecord?id=CVE-2025-48757


