How to Fix Security Vulnerabilities with AI Tools (Cursor, Copilot, Claude)
Your AI tool wrote the code. It works. It looks great. It also left the front door wide open.
45% of AI-generated code contains OWASP Top 10 vulnerabilities. That's not a scare stat from some anti-AI blog; it's Veracode testing 100+ LLMs across 80 coding tasks. If you vibe-coded your app with Cursor, Copilot, Lovable, or Bolt, the odds are nearly a coin flip that your code has real security holes.
But here's the thing nobody tells you: the same AI tools that created the vulnerabilities can fix them. You just need to ask the right way.
This article gives you the exact workflow and the exact prompts. Scan your app, get the findings, paste the fix prompt into your AI tool, and verify. Five minutes per fix. No security degree required.
The irony: AI created the bugs, AI can fix them
This isn't as contradictory as it sounds. AI tools generate insecure code because of how we prompt them. We say "build me a login page" and the AI builds one: client-side, no rate limiting, no CSRF protection. It did exactly what we asked. We just didn't ask for security.
The Databricks AI Red Team tested this directly. They gave LLMs coding tasks with and without security-focused prompts. The results:
- Claude 3.5 Sonnet: 60-80% fewer vulnerabilities with security prompts
- GPT-4o: up to 50% improvement
- Even a generic "follow security best practices" prompt cut vulnerability rates by half
The AI knows how to write secure code. It just defaults to the fastest path unless you tell it otherwise. And when you give it a specific vulnerability to fix, with context about your stack, the exact issue, and the expected behavior, it's remarkably good at producing the right fix.
The problem has never been capability. It's prompting.
Why "make it secure" doesn't work (and what does)
You've probably tried this. You pasted your code into Cursor or Copilot and typed "make this secure." The AI added a comment that says // TODO: add authentication and moved on.
Vague prompts produce vague results. Here's why:
- "Make it secure": secure against what? XSS? CSRF? SQL injection? All of them? The AI doesn't know which threat model you care about.
- "Add security": the AI might add helmet to your Express app and call it a day. That's one header middleware out of dozens of fixes you need.
- "Fix vulnerabilities": which ones? The AI will guess, and it'll guess wrong.
What works is a specific, context-rich prompt that tells the AI:
- What the vulnerability is (e.g., "missing Content Security Policy header")
- Where it is (e.g., "in my Next.js middleware or next.config.js")
- What the expected behavior should be (e.g., "block inline scripts except from these domains")
- What your stack is (e.g., "Next.js 14 App Router with Supabase")
That's exactly what AmIHackable generates for you. Scan your URL, get findings with severity ratings, and get a fix prompt for each one, ready to paste into Cursor, Copilot, or Claude Code.
The workflow: scan, prompt, fix, verify
Here's the full loop. It takes about 5 minutes per critical fix.
Step 1: Scan your URL. Go to AmIHackable, paste your URL, wait 60 seconds. You get a security grade and a list of findings ranked by severity.
Step 2: Open the full report. Each finding includes a description of the vulnerability, its impact, and a ready-to-paste fix prompt tailored to your stack.
Step 3: Copy the fix prompt. Paste it into Cursor (Cmd+K or the chat), Claude Code (terminal), or GitHub Copilot Chat. The prompt includes your framework context so the AI generates code that actually fits your project.
Step 4: Apply the fix. Review the generated code. For most fixes, it's a new file or a modification to an existing config. Apply it.
Step 5: Re-scan. Go back to AmIHackable and scan again. Your score should jump. We've seen apps go from grade C to grade A in a single session.
Now let's walk through the five most common fixes with real prompts and real code.
Fix #1: Security headers (full prompt + before/after)
This is the single highest-impact, lowest-effort fix. Missing security headers account for the majority of findings in our scans, and adding them takes two minutes.
The prompt
Paste this into Cursor, Claude Code, or Copilot Chat:
My Next.js 14 app (App Router) is missing critical security headers.
The scan found: no Content-Security-Policy, no X-Frame-Options,
no Permissions-Policy, no Referrer-Policy, no X-Content-Type-Options.
Add a middleware.ts file at the project root that sets these headers
on all responses:
- Content-Security-Policy: default-src 'self'; script-src 'self';
style-src 'self' 'unsafe-inline'; img-src 'self' data: https:;
font-src 'self'; connect-src 'self' https://*.supabase.co;
frame-ancestors 'none'
- X-Frame-Options: DENY
- X-Content-Type-Options: nosniff
- Referrer-Policy: strict-origin-when-cross-origin
- Permissions-Policy: camera=(), microphone=(), geolocation=()
- Strict-Transport-Security: max-age=31536000; includeSubDomains
Adjust the CSP connect-src to include any API domains I'm using.
Make sure the middleware matches all routes except static files and
_next assets.
Before: no middleware, no headers
// No middleware.ts exists
// All responses go out with default headers only
// Browser has zero instructions on what to trust
After: middleware.ts with full security headers
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";
export function middleware(request: NextRequest) {
const response = NextResponse.next();
// Content Security Policy
response.headers.set(
"Content-Security-Policy",
"default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; " +
"img-src 'self' data: https:; font-src 'self'; " +
"connect-src 'self' https://*.supabase.co; frame-ancestors 'none'"
);
// Prevent clickjacking
response.headers.set("X-Frame-Options", "DENY");
// Prevent MIME type sniffing
response.headers.set("X-Content-Type-Options", "nosniff");
// Control referrer information
response.headers.set("Referrer-Policy", "strict-origin-when-cross-origin");
// Restrict browser features
response.headers.set(
"Permissions-Policy",
"camera=(), microphone=(), geolocation=()"
);
// Force HTTPS
response.headers.set(
"Strict-Transport-Security",
"max-age=31536000; includeSubDomains"
);
return response;
}
export const config = {
matcher: ["/((?!_next/static|_next/image|favicon.ico).*)"],
};
Two minutes. Copy, paste, deploy. Your app just went from "wide open" to "headers locked down."
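One optional refinement worth asking your AI tool for: keep the header list in a single plain constant so the middleware and your tests share one source of truth. This is an illustrative sketch, not part of the generated middleware above; the values mirror it exactly.

```typescript
// Illustrative: the headers from the middleware above as a plain constant,
// so they can be sanity-checked without booting Next.js.
const SECURITY_HEADERS: Record<string, string> = {
  "Content-Security-Policy":
    "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; " +
    "img-src 'self' data: https:; font-src 'self'; " +
    "connect-src 'self' https://*.supabase.co; frame-ancestors 'none'",
  "X-Frame-Options": "DENY",
  "X-Content-Type-Options": "nosniff",
  "Referrer-Policy": "strict-origin-when-cross-origin",
  "Permissions-Policy": "camera=(), microphone=(), geolocation=()",
  "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
};

// In the middleware you'd then just loop:
// for (const [k, v] of Object.entries(SECURITY_HEADERS)) response.headers.set(k, v);

// Quick check that nothing the scan flagged is missing:
const missing = [
  "Content-Security-Policy",
  "X-Frame-Options",
  "X-Content-Type-Options",
  "Referrer-Policy",
  "Permissions-Policy",
  "Strict-Transport-Security",
].filter((h) => !SECURITY_HEADERS[h]);
console.log(missing.length === 0 ? "all headers set" : `missing: ${missing.join(", ")}`);
```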
Fix #2: Client-side auth to server-side (full prompt + before/after)
This is the most dangerous vulnerability in vibe-coded apps. The AI writes auth checks inside React components, which means anyone with browser DevTools can bypass them entirely. The Lovable incident that exposed 18,697 user records? Client-side auth.
The prompt
My app has client-side authentication checks that need to move
server-side. I'm using Next.js 14 App Router with Supabase Auth.
Currently, protected routes check auth status in React components
using useEffect. This is bypassable via browser DevTools.
Refactor to:
1. Create a middleware.ts that checks the Supabase session on every
request to /dashboard/* and /api/* routes
2. Redirect unauthenticated users to /login
3. For API routes, return 401 JSON responses instead of redirects
4. Remove client-side auth guards from page components (they should
still fetch user data for display, but NOT for access control)
5. Use @supabase/ssr for server-side session handling
The auth check MUST happen server-side before the page renders,
not after hydration in the browser.
Before: auth check in a React component
// app/dashboard/page.tsx, INSECURE
"use client";
import { useEffect, useState } from "react";
import { supabase } from "@/lib/supabase";
export default function Dashboard() {
const [user, setUser] = useState(null);
const [loading, setLoading] = useState(true);
useEffect(() => {
// This runs in the browser. Anyone can bypass it.
supabase.auth.getUser().then(({ data }) => {
if (!data.user) {
window.location.href = "/login";
} else {
setUser(data.user);
}
setLoading(false);
});
}, []);
if (loading) return <div>Loading...</div>;
if (!user) return null;
return <div>Welcome, {user.email}</div>;
}
After: server side middleware + clean page component
// middleware.ts, auth happens before the page loads
import { createServerClient } from "@supabase/ssr";
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";
export async function middleware(request: NextRequest) {
let response = NextResponse.next({ request });
const supabase = createServerClient(
process.env.NEXT_PUBLIC_SUPABASE_URL!,
process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
{
cookies: {
getAll: () => request.cookies.getAll(),
setAll: (cookiesToSet) => {
cookiesToSet.forEach(({ name, value, options }) => {
response.cookies.set(name, value, options);
});
},
},
}
);
const { data: { user } } = await supabase.auth.getUser();
// Protected routes: redirect if not authenticated
if (request.nextUrl.pathname.startsWith("/dashboard") && !user) {
return NextResponse.redirect(new URL("/login", request.url));
}
// Protected API routes: return 401
if (request.nextUrl.pathname.startsWith("/api/") && !user) {
return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
}
return response;
}
export const config = {
matcher: ["/dashboard/:path*", "/api/:path*"],
};
// app/dashboard/page.tsx, clean, no auth logic
import { createServerClient } from "@supabase/ssr";
import { cookies } from "next/headers";
export default async function Dashboard() {
// User is guaranteed to be authenticated by middleware
const cookieStore = await cookies();
const supabase = createServerClient(
process.env.NEXT_PUBLIC_SUPABASE_URL!,
process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
{ cookies: { getAll: () => cookieStore.getAll() } }
);
const { data: { user } } = await supabase.auth.getUser();
return <div>Welcome, {user?.email}</div>;
}
The auth check now happens on the server before any HTML is sent to the browser. No DevTools bypass, no race conditions, no loading spinner while auth resolves.
Fix #3: Exposed API keys and environment variables
AI tools love putting secrets where they don't belong. The most common mistake: using NEXT_PUBLIC_ or VITE_ prefixes on variables that should never reach the browser. Wiz found literal passwords sitting in JavaScript variables in production vibe-coded apps.
The prompt
Audit my codebase for exposed secrets and API keys.
Check for:
1. Any API keys or secrets in client-side code (files in app/,
pages/, components/, or any file marked "use client")
2. Environment variables with NEXT_PUBLIC_ prefix that contain
sensitive values (database URLs, secret keys, service account keys)
3. Hardcoded credentials, passwords, or tokens anywhere in the code
4. .env files committed to git (check .gitignore)
For each finding:
- Move the secret to a server-only environment variable (no
NEXT_PUBLIC_ prefix)
- Create or update an API route that makes the external call
server-side
- Update the client code to call our API route instead of the
external service directly
- Make sure .env and .env.local are in .gitignore
Show me every file that needs to change.
This prompt works especially well in Claude Code, which can scan your entire project directory and find every instance.
Quick checklist after the AI applies changes
- Run grep -r "sk-" --include="*.ts" --include="*.tsx" --include="*.js" to find any remaining OpenAI keys
- Run grep -r "NEXT_PUBLIC_" .env* and verify none of those values are secrets
- Check that .env and .env.local are in .gitignore
- Verify no secrets appear in your browser's network tab or source view
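The same checklist can run as a tiny script in CI. This is a minimal sketch with illustrative patterns only; dedicated scanners like gitleaks or trufflehog cover far more key formats and should be preferred for real pipelines.

```typescript
// Minimal sketch of the checklist as code. The regexes are illustrative,
// not exhaustive -- use a dedicated secret scanner in production.
const SECRET_PATTERNS: RegExp[] = [
  /sk-[A-Za-z0-9_-]{20,}/,                  // OpenAI-style secret key
  /-----BEGIN( RSA| EC)? PRIVATE KEY-----/, // PEM private key
  /NEXT_PUBLIC_[A-Z0-9_]*(KEY|SECRET|TOKEN)/, // sensitive name behind a public prefix
];

function findSecretLike(source: string): string[] {
  return source
    .split("\n")
    .filter((line) => SECRET_PATTERNS.some((p) => p.test(line)));
}

// Example: one real-looking key, one harmless line -- only the key is flagged.
const hits = findSecretLike(
  'const apiKey = "sk-abc123def456ghi789jkl012";\nconst theme = "dark";'
);
console.log(hits.length); // 1
```

Wire it to a pre-commit hook or CI step and fail the build when it returns any hits.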
Fix #4: Rate limiting API routes
Your AI built an /api/generate endpoint that calls OpenAI. There's no rate limiting. Someone with a for loop can drain your API credits in minutes. AI tools almost never add rate limiting unless you specifically ask.
The prompt
Add rate limiting to all API routes in my Next.js app.
Requirements:
- Use an in-memory store for development, Redis for production
(check for REDIS_URL env var)
- Limit: 20 requests per minute per IP for general API routes
- Limit: 5 requests per minute per IP for /api/generate and any
route that calls external AI APIs
- Return 429 Too Many Requests with a JSON body:
{ "error": "Rate limit exceeded", "retryAfter": <seconds> }
- Include X-RateLimit-Limit, X-RateLimit-Remaining, and
X-RateLimit-Reset headers in all API responses
- Create a reusable rateLimit() wrapper function I can apply to
any API route
Keep it simple. No external dependencies for the basic version.
The implementation pattern
The AI will typically generate a utility function like this that you wrap around any API route:
// lib/rate-limit.ts
const rateLimit = new Map<string, { count: number; resetTime: number }>();
export function checkRateLimit(
ip: string,
limit: number = 20,
windowMs: number = 60_000
): { allowed: boolean; remaining: number; resetIn: number } {
const now = Date.now();
const record = rateLimit.get(ip);
if (!record || now > record.resetTime) {
rateLimit.set(ip, { count: 1, resetTime: now + windowMs });
return { allowed: true, remaining: limit - 1, resetIn: windowMs / 1000 };
}
if (record.count >= limit) {
const resetIn = Math.ceil((record.resetTime - now) / 1000);
return { allowed: false, remaining: 0, resetIn };
}
record.count++;
return { allowed: true, remaining: limit - record.count, resetIn: Math.ceil((record.resetTime - now) / 1000) };
}
Simple. No dependencies. Prevents the worst abuse. You can swap in Redis later for production scale.
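To see the fixed-window behavior concretely, here's the same logic exercised standalone (the function is repeated so this sketch runs on its own): seven requests from one IP against a limit of five. In a real route handler you'd call checkRateLimit(ip) at the top and return a 429 whenever allowed is false.

```typescript
// Standalone exercise of the fixed-window logic above; the function body is
// repeated here so the sketch is self-contained.
const rateLimit = new Map<string, { count: number; resetTime: number }>();

function checkRateLimit(ip: string, limit = 5, windowMs = 60_000) {
  const now = Date.now();
  const record = rateLimit.get(ip);
  if (!record || now > record.resetTime) {
    // First request in a fresh window: start counting.
    rateLimit.set(ip, { count: 1, resetTime: now + windowMs });
    return { allowed: true, remaining: limit - 1 };
  }
  if (record.count >= limit) return { allowed: false, remaining: 0 };
  record.count++;
  return { allowed: true, remaining: limit - record.count };
}

// Seven requests from one IP with limit 5: first five pass, last two are blocked.
const results = Array.from({ length: 7 }, () => checkRateLimit("203.0.113.7").allowed);
console.log(results); // [true, true, true, true, true, false, false]
```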
Fix #5: Database permissions
When AI sets up Supabase, it often disables Row Level Security entirely to "make things work." That means any authenticated user, or in some cases, any anonymous visitor, can read, modify, or delete every row in every table.
The prompt
My Supabase database has overly permissive access. Fix the Row
Level Security policies.
Current state: RLS is either disabled or has "allow all" policies
on most tables.
For each table, create RLS policies that:
1. Users can only SELECT their own rows (where user_id = auth.uid())
2. Users can only INSERT rows where user_id = auth.uid()
3. Users can only UPDATE their own rows
4. Users can only DELETE their own rows
5. Service role access remains unrestricted (for server-side
operations)
Tables to secure: [list your tables here]
Generate the SQL migration file. Include commands to:
- Enable RLS on each table
- Drop any existing permissive policies
- Create the new restrictive policies
- Verify with a test query
This is one prompt where you need to customize the table list. But the AI will generate a complete SQL migration that locks everything down.
Key principle
Every database query from the client should go through RLS. Every admin operation should go through a server-side API route using the service role key, which is never exposed to the browser.
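A mental model for what those policies enforce: Postgres evaluates a boolean predicate against every row a query touches. Expressed in TypeScript, purely as illustration (the real check runs inside Postgres; never reimplement it in app code as a substitute for RLS):

```typescript
// Illustrative only: what a `user_id = auth.uid()` SELECT policy means,
// written as a per-row predicate. Postgres does this for you once RLS is on.
type Row = { id: number; user_id: string; body: string };

function visibleRows(rows: Row[], authUid: string | null): Row[] {
  if (!authUid) return []; // anonymous sessions see nothing
  return rows.filter((r) => r.user_id === authUid); // only your own rows
}

const table: Row[] = [
  { id: 1, user_id: "alice", body: "alice's note" },
  { id: 2, user_id: "bob", body: "bob's note" },
];
console.log(visibleRows(table, "alice").map((r) => r.id)); // [1]
console.log(visibleRows(table, null).length);              // 0
```

The INSERT, UPDATE, and DELETE policies from the prompt apply the same predicate to writes, so a user can't touch rows they don't own even with a valid session.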
The meta-fix: security-first system prompts for your AI tool
The five fixes above handle the damage already done. But the best fix is preventing vulnerabilities in the first place.
Add this to your AI tool's configuration:
For Cursor: add to .cursorrules in your project root:
## Security Requirements (Non-Negotiable)
Every piece of code you generate MUST follow these rules:
1. NEVER put API keys, secrets, or credentials in client-side code
2. ALL authentication and authorization checks happen server-side
(middleware or API routes), never in React/Vue/Svelte components
3. ALL API routes include rate limiting
4. ALL user input is validated and sanitized server-side
5. ALL database queries use parameterized queries or ORM methods,
never string concatenation
6. Include Content-Security-Policy, X-Frame-Options, X-Content-Type-Options,
Referrer-Policy, and Permissions-Policy headers
7. Enable Row Level Security on all database tables
8. Use httpOnly, secure, sameSite cookies for session management
9. Never expose stack traces or internal errors to the client
10. When in doubt, deny access by default
If a user request would require violating these rules, explain the
security risk and provide the secure alternative.
For Claude Code: add to your CLAUDE.md project file:
## Security Rules
- Server-side auth only. Never check auth in client components.
- No secrets in client code. No NEXT_PUBLIC_ for sensitive values.
- Rate limit all API routes.
- Enable RLS on all Supabase tables.
- Add security headers via middleware.
- Validate all input server-side.
- Parameterized queries only.
For GitHub Copilot: add to .github/copilot-instructions.md:
When generating code, always follow OWASP Top 10 guidelines.
Authenticate server-side. Never expose secrets to the browser.
Add rate limiting to API routes. Use parameterized queries.
Include security headers on all responses.
The Databricks research showed that these system-level prompts are even more effective than per-request security instructions. You set them once and every code generation inherits the security context.
Conclusion: scan, fix, re-scan
Here's the complete workflow:
- Scan your app at AmIHackable. 60 seconds. Free.
- Read the report. Each finding has a severity rating and a fix prompt.
- Copy the fix prompt into your AI tool (Cursor, Claude Code, Copilot).
- Apply the generated fix. Review it; AI is good but not infallible.
- Re-scan to verify the fix worked. Your score should improve.
- Add the security system prompt to your AI tool config so new code comes out secure from the start.
Most apps go from grade C to grade A in a single session. The five fixes in this article cover the vast majority of what we find in scans. Security headers alone can jump your score by 2-3 points.
Your AI tool created the vulnerabilities because you didn't tell it not to. Now you know how to tell it. The same tool that left the door open can lock it shut; you just need the right prompt.
Ship fast. Ship secure. It doesn't have to be one or the other.
Sources: Veracode GenAI Code Security Report (2025) · Databricks, Passing the Security Vibe Check (2025) · Stanford, Do Users Write More Insecure Code with AI Assistants? (2023) · The Register, Lovable Incident (2026) · Wiz, Common Security Risks in Vibe Coded Apps (2025)
Frequently Asked Questions
- Can AI tools fix security vulnerabilities they created?
- Yes, with the right prompts. Research from Databricks shows that security-focused prompting with Claude 3.5 Sonnet reduces vulnerabilities by 60-80%. The key is giving specific, context-rich prompts rather than vague 'make it secure' instructions.
- What's the best AI tool for fixing security issues?
- Claude Code and Cursor with Claude are strongest for security fixes due to Claude's training on security best practices. GitHub Copilot works well for common patterns. The most important factor is the quality of your prompt, not the tool.
- How do I get security fix prompts for my app?
- Scan your URL with AmIHackable. The full report includes ready-to-paste fix prompts for each vulnerability found, formatted for Cursor, Copilot, and Claude Code.
- How long does it take to fix security vulnerabilities with AI?
- Most critical fixes take 5-15 minutes with AI assistance. Adding security headers is a 2-minute copy-paste. Moving auth server-side takes 10-15 minutes. The AmIHackable report prioritizes fixes so you start with the highest impact.
Your AI writes the code. We find what it missed.
Paste your URL. Security audit in 60 seconds.
Scan my app