
OWASP Top 10 for Vibe Coders: The Only Guide You Need

Benji · 15 min read

The OWASP Top 10 is the most referenced security document on the internet. Every security audit mentions it. Every compliance checklist requires it. Every senior developer nods knowingly when you bring it up.

And almost nobody who vibe codes has actually read it.

That's not a dig. The official OWASP documentation is dense, written for security professionals, and assumes you already know the difference between broken access control and broken authentication. If you're building with Cursor, Bolt, or Lovable, you don't need a 200-page PDF. You need the version that tells you what breaks, why your AI tool generates it broken, and how to fix it in 30 seconds.

This is that version.

Why vibe coders need to care about OWASP

Here's the uncomfortable truth: 45% of AI-generated code contains OWASP Top 10 vulnerabilities. Not obscure edge cases. The Top 10: the most well-known, most documented, most preventable security flaws in web development.

Your AI tool has been trained on millions of code examples, including millions of insecure code examples. It generates what's statistically likely, not what's secure. And because OWASP vulnerabilities are everywhere in training data, they're everywhere in AI output.

The OWASP Top 10 was last updated in 2021. It covers the vast majority of real-world attacks. Learn these 10 categories and you'll understand 90% of what goes wrong in web apps, including yours.

A01: Broken Access Control

The #1 web vulnerability. And AI tools get it wrong constantly.

What it is

Access control means enforcing rules about who can do what. Can this user view that admin page? Can they edit another user's profile? Can they delete records they don't own?

Broken access control means those rules either don't exist or can be bypassed.

What AI generates

Ask Cursor or Bolt to build a dashboard with admin and user roles. Here's what you'll typically get:

// What Cursor generates: client-side role check
function AdminDashboard() {
  const { user } = useAuth();

  if (user.role !== 'admin') {
    return <Navigate to="/unauthorized" />;
  }

  return (
    <div>
      <h1>Admin Dashboard</h1>
      <UserList />
      <DeleteAllButton />
    </div>
  );
}

This looks like it works. The admin page redirects non-admins. Ship it, right?

Wrong. That check runs in the browser. The API endpoints behind UserList and DeleteAllButton have no protection at all. Anyone can open DevTools, call fetch('/api/admin/users'), and get every user record. Or call the delete endpoint directly.

The fix

Access control must live on the server. Always.

// Server-side middleware (Next.js API route example)
export async function GET(request) {
  const session = await getServerSession(authOptions);

  if (!session || session.user.role !== 'admin') {
    return new Response('Forbidden', { status: 403 });
  }

  const users = await db.users.findMany();
  return Response.json(users);
}
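Role checks aren't the whole story either. Broken access control also covers ownership: a regular user shouldn't be able to edit another user's records even on endpoints they're allowed to reach. A minimal sketch of a server-side ownership check (the `record` and `user` shapes here are illustrative assumptions, not taken from the examples above):

```javascript
// Ownership check: a user may modify a record only if they own it
// or hold the admin role. Pure function, enforced on the server.
function canModify(record, user) {
  if (!user) return false;
  if (user.role === 'admin') return true;
  return record.ownerId === user.id;
}

// In a route handler you'd load the record first, then gate the write:
// const post = await db.posts.findUnique({ where: { id } });
// if (!canModify(post, session.user)) {
//   return new Response('Forbidden', { status: 403 });
// }
```

The point is that the check runs next to the database query, not in the browser.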

AI failure rate: Very high. Wiz found that vibe-coded apps almost universally rely on client-side auth checks. This was the root cause of the Lovable incident that exposed 18,697 user records.

A02: Cryptographic Failures

What it is

Sensitive data that isn't properly protected in transit or at rest. This includes passwords stored in plain text, data sent over HTTP instead of HTTPS, weak encryption algorithms, or hardcoded encryption keys.

What AI generates

// AI-generated user registration
app.post('/register', async (req, res) => {
  const { email, password } = req.body;
  await db.users.create({
    data: { email, password } // Plain text password
  });
  res.json({ message: 'User created' });
});

Or this classic:

const SECRET_KEY = "my-super-secret-key-123";
const token = jwt.sign(payload, SECRET_KEY);

Hardcoded secrets. In your source code. Pushed to GitHub. Indexed by bots within minutes.

The fix

import bcrypt from 'bcrypt';

app.post('/register', async (req, res) => {
  const { email, password } = req.body;
  const hashedPassword = await bcrypt.hash(password, 12);
  await db.users.create({
    data: { email, password: hashedPassword }
  });
  res.json({ message: 'User created' });
});

For secrets: use environment variables, never hardcode them, and never prefix them with NEXT_PUBLIC_ or VITE_ unless they're truly meant to be public.
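A small fail-fast guard makes the environment-variable approach safer in practice: crash at startup if the secret is missing instead of silently falling back to a weak default. A minimal sketch (`JWT_SECRET` is just this article's example name; adapt it to your own variables):

```javascript
// Read a required secret from the environment; throw immediately if
// it's missing so the app never runs with an undefined secret.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// const token = jwt.sign(payload, requireEnv('JWT_SECRET'));
```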

AI failure rate: Moderate. AI usually hashes passwords if you mention "authentication" in the prompt. But it almost always hardcodes JWT secrets and encryption keys.

A03: Injection

What it is

Injection happens when user input gets treated as code. SQL injection, XSS (cross-site scripting), and command injection are all the same fundamental problem: untrusted data flowing into an interpreter without sanitization.

What AI generates

// AI-generated search endpoint
app.get('/search', async (req, res) => {
  const { query } = req.query;
  const results = await db.$queryRawUnsafe(
    `SELECT * FROM products WHERE name LIKE '%${query}%'`
  );
  res.json(results);
});

A user sends '; DROP TABLE products; -- as the query and your database is gone.
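To see why, build the string the vulnerable code builds. No database needed; the concatenation alone shows the problem:

```javascript
// Simulate the vulnerable string concatenation from the endpoint above.
const query = "'; DROP TABLE products; --";
const sql = `SELECT * FROM products WHERE name LIKE '%${query}%'`;

// The single quote in the payload closes the LIKE string early, so the
// "search" now carries a DROP TABLE statement plus a trailing comment.
console.log(sql);
// SELECT * FROM products WHERE name LIKE '%'; DROP TABLE products; --%'
```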

On the frontend, XSS is even more common:

// AI-generated comment display
function Comment({ text }) {
  return <div dangerouslySetInnerHTML={{ __html: text }} />;
}

Veracode found that 86% of AI-generated frontend code fails to sanitize output properly. That dangerouslySetInnerHTML is a direct pipeline for attackers to inject scripts into your page.

The fix

Use parameterized queries. Always.

// Parameterized query - safe
const results = await db.$queryRaw(
  Prisma.sql`SELECT * FROM products WHERE name LIKE ${`%${query}%`}`
);

For frontend rendering, let your framework handle escaping (React does this by default unless you use dangerouslySetInnerHTML). If you must render HTML, use a sanitization library like DOMPurify.
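If all you need is to display user text (not render rich HTML), escaping the five characters HTML treats as markup is enough. A minimal sketch of what React's default `{text}` interpolation already does for you:

```javascript
// Escape the characters HTML treats as markup so user input renders
// as inert text instead of executing as script.
function escapeHtml(str) {
  return str
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

console.log(escapeHtml('<script>alert(1)</script>'));
// &lt;script&gt;alert(1)&lt;/script&gt;
```

For actual HTML rendering, still reach for DOMPurify; hand-rolled sanitizers beyond simple escaping are a classic source of bypasses.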

AI failure rate: High. This is one of the oldest vulnerability categories and AI still generates raw string concatenation in database queries regularly.

A04: Insecure Design

What it is

This is the category that catches vibe coders off guard because it's not about a specific bug; it's about missing security thinking in the design phase. No threat modeling, no abuse case consideration, no security requirements.

What AI generates

Ask an AI to build a password reset flow. You'll get something like:

// AI-generated password reset
app.post('/reset-password', async (req, res) => {
  const { email } = req.body;
  const resetCode = Math.floor(1000 + Math.random() * 9000); // 4-digit code
  await sendEmail(email, `Your reset code: ${resetCode}`);
  await db.resetCodes.create({ data: { email, code: resetCode.toString() } });
  res.json({ message: 'Code sent' });
});

A 4-digit code. No expiration. No rate limiting on verification attempts. An attacker can brute-force all 9,000 possible codes in under a minute.

The fix

import crypto from 'crypto';

app.post('/reset-password', async (req, res) => {
  const { email } = req.body;
  const resetToken = crypto.randomBytes(32).toString('hex');
  const expiry = new Date(Date.now() + 15 * 60 * 1000); // 15 minutes

  await db.resetTokens.create({
    data: { email, token: resetToken, expiresAt: expiry }
  });

  await sendEmail(email, `Reset link: https://yourapp.com/reset?token=${resetToken}`);
  res.json({ message: 'If that email exists, a reset link was sent' });
});

Notice the response message doesn't confirm whether the email exists. That's secure design thinking, the kind AI never adds on its own.

AI failure rate: Very high. AI doesn't think about abuse cases. It builds the happy path and nothing else.

A05: Security Misconfiguration

What it is

Default configurations that are insecure. Missing security headers, open cloud storage buckets, verbose error messages that leak stack traces, unnecessary features enabled.

What AI generates

This one is less about what AI writes and more about what it doesn't write. Ask Cursor to build a Next.js app and it won't add any of these:

// What's missing from every AI-generated app
const securityHeaders = [
  { key: 'Content-Security-Policy', value: "default-src 'self'" },
  { key: 'X-Frame-Options', value: 'DENY' },
  { key: 'X-Content-Type-Options', value: 'nosniff' },
  { key: 'Referrer-Policy', value: 'strict-origin-when-cross-origin' },
  { key: 'Permissions-Policy', value: 'camera=(), microphone=(), geolocation=()' },
  { key: 'Strict-Transport-Security', value: 'max-age=63072000; includeSubDomains' },
];

AI also loves leaving debug mode on, exposing detailed error messages, and deploying with default CORS policies that allow everything (Access-Control-Allow-Origin: *).

The fix

Add security headers to your framework config. In Next.js:

// next.config.js
const nextConfig = {
  async headers() {
    return [
      {
        source: '/(.*)',
        headers: [
          { key: 'X-Frame-Options', value: 'DENY' },
          { key: 'X-Content-Type-Options', value: 'nosniff' },
          { key: 'Referrer-Policy', value: 'strict-origin-when-cross-origin' },
          { key: 'Permissions-Policy', value: 'camera=(), microphone=(), geolocation=()' },
        ],
      },
    ];
  },
};

AI failure rate: Nearly 100%. Security headers don't make features work, so AI never adds them. This is the single most common finding in AmIHackable scans.

A06: Vulnerable and Outdated Components

What it is

Using libraries, frameworks, or dependencies with known vulnerabilities. That lodash@4.17.15 in your package.json has a prototype pollution vulnerability. That old version of express has a path traversal bug.

What AI generates

AI tools are trained on data with a cutoff. They suggest package versions that were current during training, not today. They also pull in heavy dependencies for simple tasks:

{
  "dependencies": {
    "moment": "^2.29.1",
    "lodash": "^4.17.15",
    "request": "^2.88.0"
  }
}

moment is unmaintained. lodash@4.17.15 has known CVEs. request has been deprecated since 2020. AI suggests them all because they dominate the training data.

The fix

# Check for known vulnerabilities
npm audit

# Update dependencies
npm update

# Use modern alternatives
# moment -> dayjs or date-fns
# lodash -> native JS methods or lodash-es
# request -> fetch (built-in) or undici

Run npm audit before every deployment. It takes 5 seconds and catches the obvious stuff.
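Most of what those legacy packages do is now built into the platform. A few drop-in sketches using only native JS and Intl, no dependencies:

```javascript
// moment -> Intl.DateTimeFormat for formatting
const fmt = new Intl.DateTimeFormat('en-US', {
  year: 'numeric', month: 'short', day: '2-digit', timeZone: 'UTC',
});
console.log(fmt.format(new Date('2021-09-24T00:00:00Z'))); // e.g. "Sep 24, 2021"

// lodash -> native array/object methods
console.log([1, [2, [3]]].flat(Infinity));    // [ 1, 2, 3 ]
console.log(Object.fromEntries([['a', 1]]));  // { a: 1 }

// request -> the built-in fetch (Node 18+), no extra dependency:
// const res = await fetch('https://example.com');
```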

AI failure rate: Moderate. The AI doesn't intentionally pick vulnerable versions, but its training data skews toward older, more popular (and often vulnerable) packages.

A07: Identification and Authentication Failures

What it is

Weak passwords allowed, broken session management, no protection against credential stuffing, missing multi-factor authentication, session tokens in URLs.

What AI generates

// AI-generated login with no protections
app.post('/login', async (req, res) => {
  const { email, password } = req.body;
  const user = await db.users.findUnique({ where: { email } });

  if (!user || !(await bcrypt.compare(password, user.password))) {
    return res.status(401).json({ error: 'Invalid credentials' });
  }

  const token = jwt.sign({ userId: user.id }, process.env.JWT_SECRET);
  res.json({ token });
});

No rate limiting on login attempts. No account lockout. No check for compromised passwords. The JWT has no expiration. Session management consists of "here's a token, good luck."

The fix

import rateLimit from 'express-rate-limit';

const loginLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 5, // 5 attempts per window
  message: 'Too many login attempts. Try again in 15 minutes.'
});

app.post('/login', loginLimiter, async (req, res) => {
  const { email, password } = req.body;
  const user = await db.users.findUnique({ where: { email } });

  if (!user || !(await bcrypt.compare(password, user.password))) {
    return res.status(401).json({ error: 'Invalid credentials' });
  }

  const token = jwt.sign(
    { userId: user.id },
    process.env.JWT_SECRET,
    { expiresIn: '1h' } // Token expires
  );

  res.cookie('token', token, {
    httpOnly: true,   // Not accessible via JavaScript
    secure: true,     // HTTPS only
    sameSite: 'strict'
  });

  res.json({ message: 'Logged in' });
});
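express-rate-limit handles the windowing for you, but the core idea is just counting attempts per key inside a time window. A minimal in-memory sketch of that idea (fine for a single process; use a shared store like Redis across multiple instances):

```javascript
// Fixed-window rate limiter: allow `max` attempts per `windowMs` per key.
function createLimiter({ windowMs, max }) {
  const hits = new Map(); // key -> { count, windowStart }
  return function allow(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= max;
  };
}

const allowLogin = createLimiter({ windowMs: 15 * 60 * 1000, max: 5 });
// allowLogin(req.ip) -> true while under the limit, false once exceeded
```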

AI failure rate: High. AI generates functional auth but almost never adds rate limiting, token expiration, or secure cookie settings.

A08: Software and Data Integrity Failures

What it is

Code and infrastructure that doesn't verify the integrity of software updates, critical data, or CI/CD pipelines. This includes using CDN scripts without integrity checks, auto updating dependencies without verification, and insecure deserialization.

What AI generates

<!-- AI-generated: external script without integrity check -->
<script src="https://cdn.jsdelivr.net/npm/some-library@3.2.1/dist/lib.min.js"></script>

If that CDN gets compromised, every user of your app executes the attacker's code. AI also generates CI/CD configs that pull dependencies without pinning versions or checking checksums.

The fix

<!-- With Subresource Integrity (SRI) -->
<script
  src="https://cdn.jsdelivr.net/npm/some-library@3.2.1/dist/lib.min.js"
  integrity="sha384-abc123..."
  crossorigin="anonymous"
></script>

Pin your dependency versions. Use lockfiles (package-lock.json). Verify signatures on critical updates. Don't blindly trust external resources.

AI failure rate: Very high. AI virtually never adds SRI hashes to external scripts. It also generates npm install commands without lockfile awareness.

A09: Security Logging and Monitoring Failures

What it is

No logs for security events. No alerting when someone tries 10,000 passwords. No audit trail for who accessed what. When you get breached, you have no idea what happened or when.

What AI generates

Nothing. That's the problem. Ask AI to build a login system and it won't add logging. Ask for an API and you won't get audit trails. Monitoring is never part of "make it work."

At best, you'll get console.log('User logged in') which disappears the moment the server restarts.

The fix

import winston from 'winston';

const securityLogger = winston.createLogger({
  level: 'info',
  format: winston.format.json(),
  transports: [new winston.transports.File({ filename: 'security.log' })],
});

// Log failed login attempts
app.post('/login', async (req, res) => {
  const { email, password } = req.body;
  const user = await db.users.findUnique({ where: { email } });

  if (!user || !(await bcrypt.compare(password, user.password))) {
    securityLogger.warn('Failed login attempt', {
      email,
      ip: req.ip,
      timestamp: new Date().toISOString(),
    });
    return res.status(401).json({ error: 'Invalid credentials' });
  }

  securityLogger.info('Successful login', {
    userId: user.id,
    ip: req.ip,
    timestamp: new Date().toISOString(),
  });

  // ... issue token
});

AI failure rate: Nearly 100%. Security logging is completely absent from AI-generated code unless you specifically prompt for it.

A10: Server Side Request Forgery (SSRF)

What it is

Your server makes HTTP requests based on user input and an attacker uses that to access internal services, cloud metadata endpoints, or other resources that should be unreachable from the outside.

What AI generates

// AI-generated URL preview feature
app.post('/preview', async (req, res) => {
  const { url } = req.body;
  const response = await fetch(url);
  const html = await response.text();
  const title = extractTitle(html);
  res.json({ title, url });
});

An attacker sends http://169.254.169.254/latest/meta-data/ (AWS metadata endpoint) and gets your cloud credentials. Or http://localhost:3000/admin/delete-all to hit internal endpoints.

The fix

import { URL } from 'url';

const BLOCKED_HOSTS = ['localhost', '127.0.0.1', '0.0.0.0', '169.254.169.254'];
const ALLOWED_PROTOCOLS = ['http:', 'https:'];

app.post('/preview', async (req, res) => {
  const { url } = req.body;

  try {
    const parsed = new URL(url);

    if (!ALLOWED_PROTOCOLS.includes(parsed.protocol)) {
      return res.status(400).json({ error: 'Invalid protocol' });
    }

    if (BLOCKED_HOSTS.includes(parsed.hostname) || parsed.hostname.endsWith('.internal')) {
      return res.status(400).json({ error: 'URL not allowed' });
    }

    const response = await fetch(url, { redirect: 'manual' }); // Don't follow redirects
    const html = await response.text();
    const title = extractTitle(html);
    res.json({ title, url });
  } catch {
    res.status(400).json({ error: 'Invalid URL' });
  }
});

AI failure rate: Very high. AI-generated "URL preview" or "link unfurl" features almost never validate the target URL. If your app fetches user-provided URLs for any reason, you likely have an SSRF vulnerability.
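One caveat on the fix above: a hostname blocklist can be bypassed by an attacker-controlled domain that simply resolves to an internal address. A stricter approach resolves the hostname first (e.g. via dns.promises.lookup) and rejects private ranges. A sketch of the IPv4 check, with the resolution step left as a comment:

```javascript
// Reject IPv4 addresses in private, loopback, and link-local ranges.
// Anything that isn't a valid public IPv4 address is treated as unsafe.
function isPrivateIPv4(ip) {
  const parts = ip.split('.').map(Number);
  if (parts.length !== 4 || parts.some(n => !Number.isInteger(n) || n < 0 || n > 255)) {
    return true; // not a valid IPv4 address: treat as unsafe
  }
  const [a, b] = parts;
  return (
    a === 10 ||                           // 10.0.0.0/8
    a === 127 ||                          // loopback
    (a === 172 && b >= 16 && b <= 31) ||  // 172.16.0.0/12
    (a === 192 && b === 168) ||           // 192.168.0.0/16
    (a === 169 && b === 254) ||           // link-local + cloud metadata
    a === 0
  );
}

// After: const { address } = await dns.promises.lookup(parsed.hostname);
// reject the request if isPrivateIPv4(address) returns true.
```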

The cheat sheet: which ones does AI get wrong most?

Ranked by how often AI tools generate vulnerable code for each category:

| Rank | OWASP Category | AI Failure Rate | Why |
| --- | --- | --- | --- |
| 1 | A05: Security Misconfiguration | ~100% | AI never adds security headers or hardens configs |
| 2 | A09: Logging & Monitoring | ~100% | AI doesn't generate logging unless asked |
| 3 | A08: Integrity Failures | ~95% | AI never adds SRI hashes or pins dependencies |
| 4 | A01: Broken Access Control | ~85% | AI defaults to client-side auth checks |
| 5 | A04: Insecure Design | ~85% | AI builds happy paths, not abuse-resistant flows |
| 6 | A10: SSRF | ~80% | AI doesn't validate user-provided URLs |
| 7 | A03: Injection | ~75% | XSS at 86%, SQL injection lower with ORMs |
| 8 | A07: Auth Failures | ~70% | Functional auth, no hardening |
| 9 | A02: Cryptographic Failures | ~50% | Passwords often hashed, but keys hardcoded |
| 10 | A06: Outdated Components | ~40% | Depends on training data freshness |

What to do right now

You don't need to memorize this list. You need to do three things:

1. Scan your app. Paste your URL into AmIHackable and get a concrete report in 60 seconds. It checks for the most common OWASP issues (missing headers, injection points, auth flaws, misconfigurations) and tells you exactly what to fix.

2. Add this to your AI tool's system prompt:

When generating code, follow OWASP Top 10 guidelines:
- All access control checks must be server-side, never client-only
- Hash passwords with bcrypt (cost 12+), never store plain text
- Use parameterized queries, never string concatenation
- Consider abuse cases, not just the happy path
- Add security headers (CSP, X-Frame-Options, HSTS)
- Pin dependency versions, add SRI for external scripts
- Add rate limiting to auth endpoints
- Never expose secrets in client-side code
- Log security events (failed logins, access denied, etc.)
- Validate and sanitize all user-provided URLs

3. Fix what matters first. If your scan shows missing security headers and client-side auth, those are your top two priorities. Headers take 5 minutes to add. Moving auth server-side takes longer but prevents the worst breaches.

The OWASP Top 10 hasn't fundamentally changed since 2021 because the same mistakes keep happening. AI tools are making them faster than ever. But knowing what to look for is 80% of the battle.

Now you know. Ship accordingly.


References: OWASP Top 10 (2021) · Veracode, GenAI Code Security Report (2025) · Stanford, "Do Users Write More Insecure Code with AI Assistants?" (2023) · Wiz, "Common Security Risks in Vibe Coded Apps" (2025) · The Register, Lovable Incident (2026)

Frequently Asked Questions

What is the OWASP Top 10?
The OWASP Top 10 is a standard awareness document for web application security. It represents the 10 most critical security risks to web applications, updated periodically by the Open Worldwide Application Security Project (OWASP).
Does AI-generated code have OWASP vulnerabilities?
Yes. Research from Veracode shows that 45% of AI-generated code contains OWASP Top 10 vulnerabilities. XSS failures appear in 86% of AI-generated frontend code.
How do I check if my app has OWASP vulnerabilities?
Scan your live URL with AmIHackable for a 60-second security audit. It checks for the most common OWASP issues including missing security headers, injection vulnerabilities, and authentication flaws.
Which OWASP vulnerabilities are most common in vibe-coded apps?
Based on scan data, the most common are: Broken Access Control (client-side auth), Security Misconfiguration (missing headers), and Identification and Authentication Failures (weak session handling).

Your AI writes the code. We find what it missed.

Paste your URL. Security audit in 60 seconds.
