Vibe Coding Reality Check: When Building Fast Apps Meets Security Nightmares (And What I Learned the Hard Way)
My journey from $4.25 weekend projects to understanding why security can't be an afterthought in the age of AI-powered development
Hey digital adventurers! You know what's been keeping me up at night lately? (Besides my usual late-night coding sessions, of course...) It's this uncomfortable realization that I might have been the poster child for everything my co-author from AI Security Center is about to warn you about!
Let me paint you a picture here... and honestly, it's not pretty when you really think about it.
Remember when I wrote about building that QR code generator for just $4.25 using Replit? I was SO proud of that moment! Here I was, at 1 AM, cranking out a functional web app in under two hours, deploying it live, and feeling like I'd just discovered fire. The whole experience was intoxicating... I could literally go from "hey, wouldn't it be cool if..." to having a working app that people could actually use.
But here's what I WASN'T thinking about at 1 AM: was that code secure? Did I validate inputs properly? Was I exposing any vulnerabilities? Honestly... I had no idea. I was operating on pure "does it work?" energy, not "is it safe?" awareness.
And it gets worse! I've been actively encouraging this approach! When I wrote about product owners becoming their own technical co-founders, I was basically saying "hey, you don't need to wait for developers, just build it yourself with AI help!" I was so focused on breaking down barriers and accelerating development that I completely glossed over... well, everything my co-author is about to tell you.
The thing is, I've built DOZENS of these little apps and tools over the past year. My Excel helper, various automation tools, that interactive portfolio site... all built with what I now realize was a pretty cavalier attitude toward security.
My typical development process? It went something like this:
Have an idea (usually late at night)
Open Claude or ChatGPT and say "build me an app that does X"
Copy the code, maybe tweak it slightly
Test it quickly - does it do what I want?
Deploy it live because hey, it works!
Write a blog post about how fast and easy it was
Notice what's missing from that process? Pretty much everything related to security! No code review, no vulnerability scanning, no threat modeling... I was basically the definition of what security experts probably have nightmares about.
And the crazy part? I was getting RESULTS! My rapid development approach was genuinely working. I was building useful tools, solving real problems, and doing it all at a speed that would have been impossible just a few years ago. When I wrote about finding the AI sweet spot, I was all about embracing these powerful new capabilities.
But you know what I wasn't considering? The fact that my "move fast and break things" mentality might actually be... well, breaking things. Important things. Security things.
The wake-up call started coming in the form of reader questions. People would email me asking about security best practices for the apps I was showing them how to build, and I'd find myself giving answers like "well, it's probably fine if you're just using it internally" or "the platform handles most of that stuff automatically." Classic non-answers from someone who clearly didn't know what they didn't know!
Then I started really thinking about some of the code I'd generated and deployed. How much of it was actually original? How much might have been "inspired by" existing code that AI had seen during training? I honestly have no idea! When I wrote about technical skills for e-commerce professionals, I talked about understanding technical concepts... but I wasn't practicing what I preached when it came to security.
The automation angle makes this even scarier. I've built all these Make.com workflows and AI knowledge systems that connect to various APIs and data sources. Each connection point is potentially a security risk I haven't properly evaluated!
And here's the really uncomfortable part... I've been the poster child for "shadow AI" development without even realizing it. I'm not working in a traditional enterprise environment, but I've been using whatever online AI tools seemed convenient, generating code without proper oversight, and deploying it without following any established security protocols.
Even worse? I've been ENCOURAGING others to do the same! When people read my posts about building internal digital solutions fast, they're probably following the same fast-and-loose approach I've been modeling.
The truth is, I fell in love with the speed and possibility of AI-assisted development, but I completely ignored the responsibility that comes with it. It's like I discovered I could drive really fast but never learned about traffic laws or safety equipment.
So when my co-author from AI Security Center reached out about collaborating on this topic, I jumped at the chance... partly because I was excited to learn, but mostly because I suspected I needed a serious reality check about my own practices.
What you're about to read from him will show you exactly why my "vibe coding" approach, while exciting and productive, might be creating risks I don't even know how to identify yet. And honestly? I'm both grateful and terrified to find out what I've been missing.
So what does a security expert say about vibe coding? I asked my co-author from AI Security Center to help me understand it from his perspective!
Vibe coding opens up the development world to people with little to no coding experience. This creates huge opportunities, but also introduces security risks.
AI coding assistants—and ChatGPT-like applications—can generate code based on natural language prompts. This means you don’t really need to know how to code; you just need to know what you want and express it in words. AI will generate the code for you, and you are free to deploy it. However, the vast majority of users lack even basic knowledge of the Software Development Lifecycle (SDLC), the framework that encompasses the activities that make an application safe: static and dynamic code scanning, threat modeling, vulnerability testing, malware scanning, and a properly documented coding process.
Simply put, regular users may generate code that isn’t necessarily safe—and wouldn’t even know it. This code can then be deployed as an internet-facing application, potentially containing open vulnerabilities.
Code assistants aren’t typically geared toward security either. Like many AI tools, they’re optimized for providing quick and satisfactory answers, not secure code. Some of these tools may respond appropriately when explicitly prompted to generate secure code; others may not. Either way, without the knowledge to identify secure vs. insecure code, the user is left in the dark.
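To make that "secure vs. insecure" gap concrete, here is a small, hypothetical illustration (not taken from any real assistant's output): two versions of the same user lookup that behave identically in a quick functional test, yet only one of them is safe.

```python
import sqlite3

# In-memory database with a tiny users table, purely for the demo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")

def find_user_unsafe(username):
    # Looks fine and "works": the username is pasted straight into the SQL string.
    # Input like "x' OR '1'='1" changes the meaning of the query (SQL injection).
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchone()

def find_user_safe(username):
    # Same result for normal input, but the value is passed as a bound parameter,
    # so the database always treats it as data, never as SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchone()

print(find_user_unsafe("alice"))          # (1, 'alice@example.com')
print(find_user_safe("alice"))            # (1, 'alice@example.com') -- identical
print(find_user_unsafe("x' OR '1'='1"))   # still returns a row: the filter was bypassed
print(find_user_safe("x' OR '1'='1"))     # None: treated as a literal (odd) name
```

Both functions return the same result for ordinary usernames, so a "does it work?" check can never tell them apart; the unsafe one only reveals itself when someone deliberately feeds it hostile input.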
Another issue with coding assistants is that they may generate proprietary code. Users are often unaware of this and may go on to deploy commercial applications, which could infringe on copyright and expose them or their company to legal liability.
In an enterprise environment, the problem is amplified by a phenomenon we call "shadow AI"—a spin-off of "shadow IT." In security, shadow IT refers to IT resources not vetted by the IT department—like a Wi-Fi router hidden under a desk or access to an unmonitored internet line. In the case of AI, this could mean using unapproved online tools as coding assistants, enabling otherwise untrained users to generate code. Users often take this path to bypass cumbersome IT and security processes in an effort to simplify their work.
But this can introduce vulnerabilities into the enterprise ecosystem. It’s a double-edged sword: on one hand, non-developers are writing code; on the other, AI coding tools are freely available online.
Users may even be tricked by these tools into deploying code that goes beyond their initial request. The generated application could contain additional logic designed to exfiltrate data or introduce a vulnerability—essentially a trojan horse waiting to be exploited.
There are multiple risks to vibe coding, but the core issue is enabling non-technical users to generate potentially insecure code. In the near future, we may see vibe-coded applications become primary targets for vulnerability-scanning bots across the internet. In enterprise settings, unintentional damage could result from the use of proprietary code snippets or the deployment of vulnerable applications exposed to external threats.
Well... damn. Reading through my co-author's analysis is like looking in a mirror that shows you all the things you've been pretending not to see.
The "shadow AI" concept?
That's literally been my entire approach to development! I've been that person using whatever online AI tools seemed convenient, generating code without really understanding the implications, and just... hoping it would all work out fine. The scary part is how NORMAL this felt to me. Of course I'd use the best AI tools available! Of course I'd deploy code that worked! What could go wrong?
Everything, apparently.
When he talks about users being "left in the dark" about secure vs insecure code... yep, that's me. I can spot obvious problems - like if something crashes or doesn't function - but subtle security vulnerabilities? I wouldn't recognize them if they came with flashing warning signs. And that's terrifying when you think about it, because I've been building and deploying code for months without this knowledge.
The proprietary code issue hits particularly hard. How many times have I asked Claude or ChatGPT to "build me something like X popular app" without thinking about whether the generated code might be borrowing from copyrighted sources? I honestly have no idea, and that's a problem I need to solve immediately (though TBH, I usually try to describe features, not apps that already implement them).
But here's what really gets me... the "trojan horse" concept. The idea that AI might generate code that goes beyond what I requested, potentially including malicious logic I wouldn't even notice? This keeps me up at night now! Because how would I know? I'm usually so focused on whether the app WORKS that I'm not carefully auditing every line of code for unexpected functionality.
I think about all those times I wrote about building things fast and encouraging others to embrace rapid development, and I realize I was essentially saying "hey, let's all drive really fast cars without learning about brakes or safety features!"
The vulnerability-scanning bots targeting vibe-coded applications? That's going to be a real thing, isn't it? All these apps built by people like me, who prioritized speed over security, are going to become easy targets. It's like we've been building houses with really cool front doors but forgetting to install locks.
So what am I doing about this reality check? Well, first... taking a deep breath and admitting I need to completely rethink my approach. The speed and agility of AI-assisted development is still amazing, but I need to build security awareness into the process from the ground up.
Here's my new development framework (still evolving, but this is where I'm starting):
Before I write any code:
Actually think through what data the app will handle and what could go wrong
Research basic security requirements for the type of application I'm building
Consider who might use this and how it could be misused
During development:
Ask AI explicitly about security implications, not just functionality
Request explanations for any code I don't fully understand
Build in basic input validation and error handling from the start (see the sketch after this list)
Before deployment:
Actually review the generated code line by line (I know, revolutionary concept!)
Test edge cases and potential failure modes
Consider whether this should be public-facing or restricted access
After deployment:
Monitor for unusual activity or errors
Keep dependencies updated (something I've been terrible about)
Actually document what the app does and how it works
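To make that "basic input validation and error handling" item concrete, here's a minimal sketch of what I mean. It's a hypothetical helper for something like my QR code generator; the names and limits are my own illustration, not code from that project:

```python
from urllib.parse import urlparse

MAX_URL_LENGTH = 2048
ALLOWED_SCHEMES = {"http", "https"}

def validate_target_url(raw: str) -> str:
    """Check user-supplied text before the rest of the app touches it.

    Returns the cleaned URL, or raises ValueError with a message that is
    safe to show back to the user.
    """
    if not raw or not raw.strip():
        raise ValueError("Please provide a URL.")

    url = raw.strip()
    if len(url) > MAX_URL_LENGTH:
        raise ValueError("That URL is too long.")

    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES or not parsed.netloc:
        raise ValueError("Only plain http/https URLs are supported.")

    return url

# Handle bad input explicitly instead of letting the app crash or echo raw text back.
for candidate in ["https://example.com/menu", "javascript:alert(1)", ""]:
    try:
        print("accepted:", validate_target_url(candidate))
    except ValueError as err:
        print("rejected:", err)
```

It only covers the "check the input before you use it" slice of the problem, but it's exactly the kind of thing I can now ask the AI to generate and explain alongside the feature itself. And for the "keep dependencies updated" item, tools like pip-audit (for Python projects) can flag packages with known vulnerabilities, which beats my previous approach of never checking at all.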
I'm also committed to learning more about basic web security principles. Not becoming a security expert overnight, but at least understanding enough to make informed decisions. When I wrote about the tech edge in e-commerce, I talked about technical skills becoming essential... well, security awareness is clearly part of that technical foundation I need to build.
The goal isn't to stop experimenting or slow down innovation - it's to be smarter about both. I still believe in the power of rapid prototyping and AI-assisted development, but I need to balance that enthusiasm with actual responsibility.
And for everyone who's been following my rapid development journey... I'm sorry if I've been encouraging reckless practices without proper context. The tools and techniques I've shared are still valuable, but they need to be used with much more awareness of security implications than I've been modeling.
Moving forward, you'll see me writing more about security considerations in my development posts, partnering with experts like my co-author here, and being much more explicit about the limitations and risks of the vibe coding approach.
Because here's the thing - democratizing development through AI is still an incredible opportunity. But with great power comes great responsibility, and I clearly haven't been taking that responsibility seriously enough.
What about you? Are you building apps with AI assistance? Have you been thinking about security, or have you been focused on functionality like I was? I'd love to hear about your experiences... especially if you've figured out ways to maintain development speed while actually being responsible about security!
PS. How do you rate today's email? Leave a comment or "❤️" if you liked the article - I always value your comments and insights, and it also gives me a better position in the Substack network.
Also, big thank you to
for his input! Sometimes when I'm doing things on my own, I lose a very valuable perspective - so it's just great to share and learn! Please consider following & subscribing to:
It's very useful, especially for non-coders who want to build public MVPs with AI.