
Vibe Coding Security: The Complete Guide for 2026

Benji · 12 min read

On February 2, 2025, Andrej Karpathy — the guy who led AI at Tesla and co-founded OpenAI — posted a tweet that got 4.5 million views:

"There's a new kind of coding I call 'vibe coding', where you fully give in to the vibes, embrace exponentials, and forget that the code even exists."

He described talking to Cursor through voice, accepting every change without reading diffs, and copy-pasting errors back to the AI until things worked.

That tweet became a movement. "Vibe coding" was named Collins Dictionary's Word of the Year 2025. Cursor hit $2 billion in annual revenue by February 2026. Lovable crossed 8 million users creating 100,000+ new projects every day.

We're building faster than ever. But there's a problem nobody talks about at the prompt.

45% of AI-generated code contains OWASP Top 10 vulnerabilities. And most vibe coders have no idea.

TL;DR

Roughly 45% of AI-generated code ships with OWASP Top 10 vulnerabilities, and developers using AI assistants are more confident in insecure code, not less. The most common problems in vibe-coded apps are missing security headers, client-side authentication, exposed API keys, wide-open databases, and no rate limiting. All of them are fixable: scan your app, add a security prompt to your AI tool, and fix the critical issues first.

The data: how bad is it really?

Veracode tested 100+ LLMs. The results aren't great.

In July 2025, Veracode published a study testing over 100 LLMs across 80 coding tasks. The headline number: 45% of AI-generated code introduces OWASP Top 10 vulnerabilities.

The failure rates varied sharply by vulnerability type, with models faring worst on injection-style flaws like cross-site scripting and log injection.

The uncomfortable finding: larger, more capable models don't perform significantly better. This isn't a scaling problem that GPT-5 will fix. It's a systemic issue with how LLMs generate code.

Stanford proved AI makes developers overconfident

A Stanford study by Neil Perry, Megha Srivastava, and Dan Boneh gave 47 developers coding tasks — some with AI assistants, some without.

The AI-assisted group wrote less secure code on 4 out of 5 tasks. On one task (message signing), only 3% of AI users wrote secure code versus 21% without AI.

But here's the real kicker: AI users were more likely to believe their insecure code was secure. The tool didn't just fail to help — it created false confidence.

This is the core danger of vibe coding. You're not just skipping security. You're convinced you don't need it.

After 5 revisions, code gets worse

Kaspersky's research found something counterintuitive: after 5 iterative code revisions with AI, code contained 37% more critical vulnerabilities than the initial version.

Every time you say "fix this error" or "make it work", the AI patches the symptom and often introduces a new vulnerability. The code compiles — 90% of the time now — but compiling and being secure are very different things.

Real incidents: when vibe coding goes wrong

The Lovable incident: 18,697 users exposed

In February 2026, security researcher Taimur Khan found 16 vulnerabilities (6 critical) in a single Lovable-hosted app. An exam platform had exposed 18,697 user records, including 14,928 email addresses and 870 users with full PII.

The best part? The AI-generated authentication logic was literally backwards:

"The guard blocks the people it should allow and allows the people it should block. A classic logic inversion that a human security reviewer would catch in seconds." — Taimur Khan

A broader scan of 1,645 Lovable apps found 170 with critical flaws. That's roughly 1 in 10.

Wiz found passwords in JavaScript. Literally.

Wiz's security research team analyzed apps built on vibe-coding platforms and found four systemic patterns:

  1. Passwords as JavaScript variables — they found literals like "welcometoredacted" and "marketingdocs2025" as client-side auth checks. Anyone with browser DevTools can see them.
  2. API keys in client-side code — OpenAI sk-proj- keys, Supabase anon keys, all exposed in the browser.
  3. Database rules that allow everything — overly permissive RLS policies or disabled access controls entirely.
  4. Internal tools deployed publicly — admin dashboards with no authentication whatsoever.

Their conclusion: 1 in 5 organizations on vibe-coding platforms face systemic security risks.
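The first pattern is worth seeing concretely. Here's a minimal sketch of the kind of check Wiz describes — the password literal is one they reported finding; the function name is invented for illustration:

```javascript
// ANTI-PATTERN: the client-side "auth" Wiz found in vibe-coded apps.
// The password literal ships to every visitor's browser — anyone can read it
// in DevTools or simply skip the check in the console.
const ADMIN_PASSWORD = "marketingdocs2025"; // visible to anyone who views source

function checkAccess(input) {
  // This comparison runs in the browser, so it protects nothing:
  // the attacker already has both sides of the comparison.
  return input === ADMIN_PASSWORD;
}
```

Real authentication has to happen server-side: the browser sends credentials, and the server verifies them against a hashed secret it never ships to the client.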

The tools themselves are vulnerable

Security researcher Ari Marzouk spent six months investigating AI coding tools and disclosed 30+ vulnerabilities resulting in 24 CVEs in December 2025. Affected tools: Cursor, GitHub Copilot, Windsurf, Zed.dev, Roo Code, Junie, and Cline.

The scariest ones included remote code execution triggered just by opening a project, and persistent backdoors in the agent itself.

The root cause: hidden instructions in README files and config files can hijack AI agents. The AI can't distinguish trusted from untrusted input. You clone a repo, your AI reads the README, and it's game over.

The 5 most common vulnerabilities in vibe-coded apps

Based on the research above and our own scan data (176+ scans, average score: 3.7/10), here are the issues we see the most:

1. Missing security headers

Almost every vibe-coded app ships without Content-Security-Policy, X-Frame-Options, or Permissions-Policy. AI tools don't add these because you didn't ask for them — and they're not part of "make my app work."

Why it matters: Without CSP, your app is wide open to XSS attacks. Without X-Frame-Options, it can be embedded in malicious iframes for clickjacking.
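Here's a framework-agnostic sketch of the baseline headers most vibe-coded apps lack. The values are illustrative defaults, not a drop-in config — a real CSP needs tuning per app:

```javascript
// Baseline security headers as a plain object; apply them in whatever
// framework you use (Express middleware, Next.js config, etc.).
function securityHeaders() {
  return {
    // CSP: only load scripts/styles from your own origin — blocks most XSS payloads.
    "Content-Security-Policy": "default-src 'self'",
    // Refuse to be embedded in iframes — prevents clickjacking.
    "X-Frame-Options": "DENY",
    // Disable powerful browser features your app doesn't use.
    "Permissions-Policy": "camera=(), microphone=(), geolocation=()",
    // Stop browsers from MIME-sniffing responses into executable types.
    "X-Content-Type-Options": "nosniff",
  };
}

// Usage with a plain Node server (Express works the same way):
// for (const [name, value] of Object.entries(securityHeaders())) {
//   res.setHeader(name, value);
// }
```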

2. Client-side authentication

The AI writes auth checks in React/Vue/Svelte components instead of server-side middleware. A user with DevTools can bypass every "protected" route.

Why it matters: Your admin dashboard, user data, and premium features are accessible to anyone who knows how to open the browser console.
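The fix is to gate routes on the server, where DevTools can't reach. A sketch of an Express-style middleware — `requireAuth` and `verifySession` are illustrative names, and the in-memory session map stands in for whatever session store or JWT verification you actually use:

```javascript
// Session lookup; in a real app this would validate a signed JWT
// or hit a session store, not an in-memory Map.
function verifySession(token, sessions) {
  return sessions.get(token) ?? null;
}

// Express-style middleware: runs on the server for every protected route,
// so nothing the user does in the browser console can skip it.
function requireAuth(sessions) {
  return (req, res, next) => {
    const user = verifySession(req.headers["authorization"], sessions);
    if (!user) {
      res.statusCode = 401;
      return res.end("Unauthorized");
    }
    req.user = user; // downstream handlers can trust this
    next();
  };
}
```

The same shape works as Next.js middleware or an API-route wrapper; the point is that the check executes server-side before any protected data is returned.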

3. Exposed environment variables and API keys

AI tools often put API keys directly in client-side code. Even when they use .env files, they sometimes prefix variables with NEXT_PUBLIC_ or VITE_ (which exposes them to the browser) when they shouldn't.

Why it matters: Your OpenAI key, your database credentials, your Stripe secret key — all visible in the browser's network tab.
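The standard fix is a server-side proxy route: the browser calls your own API, and only the server ever touches the key. A sketch, with `buildUpstreamRequest` as an invented helper to keep the shape testable:

```javascript
// Build the upstream OpenAI request on the server. The key is read from a
// server-only env var — no NEXT_PUBLIC_/VITE_ prefix, so it never reaches
// the browser bundle.
function buildUpstreamRequest(messages, apiKey) {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`, // stays server-side
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ model: "gpt-4o-mini", messages }),
    },
  };
}

// In your /api/generate handler (server-side only):
//   const { url, options } =
//     buildUpstreamRequest(req.body.messages, process.env.OPENAI_API_KEY);
//   const upstream = await fetch(url, options);
// The browser only ever talks to /api/generate; the key never appears
// in the network tab.
```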

4. Overly permissive database access

When AI sets up Supabase, Firebase, or similar backends, it often disables Row Level Security or creates policies that allow all operations. "Make it work" means "let everyone in."

Why it matters: Any user can read, modify, or delete any other user's data. One curl command is all it takes.
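What a correct RLS policy enforces can be sketched in plain JavaScript: every query is scoped to the requesting user, with no code path that returns someone else's rows. The in-memory "table" here stands in for your Supabase/Firebase data; in Supabase itself the equivalent is a policy along the lines of `using (auth.uid() = user_id)`:

```javascript
// Ownership-scoped read: the JS analogue of a row-level security policy.
// There is no parameter combination that leaks another user's rows.
function fetchNotes(table, userId) {
  return table.filter((row) => row.userId === userId);
}

// Stand-in for a database table.
const notes = [
  { id: 1, userId: "alice", text: "alice's note" },
  { id: 2, userId: "bob", text: "bob's note" },
];
```

The key point: the filter lives on the server (or in the database policy), not in client code where an attacker can edit it out.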

5. No rate limiting

AI-generated API routes rarely include rate limiting. Your endpoints are wide open for abuse — automated scanning, credential stuffing, or just running up your AI API bill.

Why it matters: Someone can hit your /api/generate endpoint 10,000 times and drain your OpenAI credits overnight.
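Even a trivial limiter closes this hole. Here's a minimal fixed-window, in-memory sketch keyed by IP — the names (`createRateLimiter`, `limit`, `windowMs`) are illustrative, and a multi-instance deployment would want a shared store like Redis instead:

```javascript
// Fixed-window rate limiter: each IP gets `limit` requests per `windowMs`.
function createRateLimiter({ limit, windowMs }) {
  const hits = new Map(); // ip -> { count, windowStart }
  return function allow(ip, now = Date.now()) {
    const entry = hits.get(ip);
    if (!entry || now - entry.windowStart >= windowMs) {
      // New IP or expired window: start a fresh budget.
      hits.set(ip, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= limit; // reject once the window's budget is spent
  };
}

// Usage in a route handler:
//   const allow = createRateLimiter({ limit: 20, windowMs: 60_000 });
//   if (!allow(req.ip)) { res.statusCode = 429; return res.end("Too many requests"); }
```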

How to fix it: practical steps

Step 1: Know where you stand

You can't fix what you can't see. Scan your app and get a concrete list of what's wrong.

That's literally why we built AmIHackable. Paste your URL, get your score and findings in 60 seconds. The full report includes fix prompts you can paste directly into your AI tool.

Step 2: Add security prompts to your workflow

The Databricks AI Red Team tested security-focused prompting and found that it dramatically reduces the rate of insecure code.

Even generic prompts help. Kaspersky found that simply adding "make sure the code follows security best practices" reduced vulnerability rates by half.

Here's a prompt you can add to your AI tool's system instructions:

When generating code, always:
- Validate and sanitize all user input server-side
- Never expose API keys or secrets in client-side code
- Implement authentication and authorization server-side, not in UI components
- Add Content-Security-Policy, X-Frame-Options, and other security headers
- Use parameterized queries, never string concatenation for database queries
- Add rate limiting to all API endpoints
- Follow OWASP Top 10 guidelines

Step 3: Fix the critical stuff first

Don't try to fix everything at once. Priority order:

  1. Move auth server-side — if your authentication logic is in React components, move it to API routes or middleware
  2. Remove exposed secrets — grep your codebase for API keys, move them to server-side environment variables
  3. Add security headers — this is usually 10 lines of config in your framework
  4. Lock down your database — enable RLS, restrict permissions to what each user actually needs
  5. Add rate limiting — even a basic IP-based limiter prevents the worst abuse

Step 4: Re-scan and verify

After fixing, scan again. Your score should jump. We've seen apps go from grade C to grade A in under 5 minutes — we did it on our own site.

The good news

Vibe coding isn't going away, and it shouldn't. Building with AI is genuinely powerful. The problem isn't the tools — it's the gap between "it works" and "it's secure."

That gap is fixable. A security prompt in your AI config, a 60-second scan before you deploy, and fixes for your top three issues together take less time than debugging a CSS layout.

Ship fast. But know what you shipped.


Sources: Veracode GenAI Code Security Report (2025) · Stanford — Do Users Write More Insecure Code with AI Assistants? (2023) · The Register — Lovable Incident (2026) · Wiz — Common Security Risks in Vibe Coded Apps (2025) · Kaspersky — Vibe Coding Risks (2025) · Databricks — Passing the Security Vibe Check (2025) · IDEsaster — 30 CVEs in AI IDEs (2025) · Andrej Karpathy — Original Tweet (2025)

Frequently Asked Questions

What is vibe coding?
Vibe coding is a term coined by Andrej Karpathy in February 2025. It describes a coding approach where you give prompts to an AI tool (Cursor, Bolt, Lovable) and accept the generated code without fully reviewing it. You 'give in to the vibes' and let the AI handle the implementation.
Is vibe coding dangerous?
It can be. Research from Veracode shows that 45% of AI-generated code contains OWASP Top 10 vulnerabilities. A Stanford study found that developers using AI assistants wrote less secure code — and were more likely to believe their code was secure when it wasn't.
What are the most common security issues in vibe-coded apps?
Based on research from Wiz and our own scan data (average score: 3.7/10), the most common issues are: missing security headers, client-side authentication logic, exposed API keys in JavaScript, overly permissive database access, and hardcoded credentials.
How do I secure my vibe-coded app?
Start by scanning your app with a tool like AmIHackable to identify vulnerabilities. Then fix the critical issues first: add security headers, move authentication server-side, remove exposed API keys, and tighten database permissions. Use security-focused prompts when generating code with your AI tool.
Are AI coding tools themselves secure?
Not always. In December 2025, researcher Ari Marzouk disclosed 30+ vulnerabilities across Cursor, GitHub Copilot, Windsurf, and other AI coding tools, resulting in 24 CVEs. These included remote code execution and persistent backdoor vulnerabilities.

Your AI writes the code. We find what it missed.

Paste your URL. Security audit in 60 seconds.

Scan my app