The Vibe Code Security Guide
20 things that sink AI-built apps. Each vulnerability explained: what it is, why AI produces it, how to check if you're affected, and how to fix it.
AI code generators are incredible at shipping features fast. But they have a blind spot: security. This guide covers the 20 most common security mistakes in AI-generated code, and how to fix every one.
No rate limiting on API endpoints
Why AI produces this
AI doesn't think about abuse. It builds the happy path: valid user, valid request. It never considers what happens when someone sends 10,000 requests per second.
Consequence
Attackers can brute-force passwords, enumerate users, scrape your entire database, or run up massive cloud bills.
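Under the hood a rate limiter is just a counter per client per time window. A minimal in-memory sketch of the idea (the window and limit values are illustrative; in practice a library like express-rate-limit handles this for you):

```javascript
// Fixed-window rate limiter sketch: one counter per IP per window.
const WINDOW_MS = 15 * 60 * 1000; // 15 minutes
const MAX_REQUESTS = 100;
const hits = new Map(); // ip -> { count, windowStart }

function isAllowed(ip, now = Date.now()) {
  const entry = hits.get(ip);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    hits.set(ip, { count: 1, windowStart: now }); // new window for this IP
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_REQUESTS;
}
```

A real deployment also needs a shared store (such as Redis) so the counters survive restarts and work across multiple servers.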
How to fix
// Express rate limiting with express-rate-limit
const rateLimit = require('express-rate-limit');

app.use('/api/', rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // limit each IP
}));

Tokens stored in localStorage
Why AI produces this
AI stores auth tokens in localStorage because it's the simplest approach. The code works, but localStorage is readable by any JavaScript running on the page, including XSS payloads.
Consequence
A single XSS vulnerability lets attackers steal every user's auth token.
How to fix
// Use httpOnly cookies instead
res.cookie('session', token, {
  httpOnly: true,     // not readable from JavaScript
  secure: true,       // only sent over HTTPS
  sameSite: 'strict',
  maxAge: 30 * 60 * 1000, // 30 minutes
});

API keys in frontend JavaScript bundles
Why AI produces this
AI puts API keys directly in fetch calls. It doesn't distinguish between public and secret keys. Your Stripe secret key, database URL, or OpenAI key ends up in the browser.
Consequence
Anyone can open DevTools, find your keys, and use them. A leaked Stripe secret key lets anyone create charges and refunds against your account. A leaked OpenAI key means your bill explodes.
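A quick way to check whether you're affected: scan your built JavaScript for key-shaped strings. A rough sketch (the patterns are illustrative, not exhaustive; extend them for the providers you actually use):

```javascript
// Scan bundle source for secret-looking strings. Patterns are illustrative.
const SECRET_PATTERNS = [
  /sk_live_[0-9a-zA-Z]+/,   // Stripe secret key
  /sk-[0-9a-zA-Z]{20,}/,    // OpenAI-style key
  /postgres:\/\/[^\s"']+/,  // database connection string
];

function findSecrets(bundleSource) {
  return SECRET_PATTERNS
    .map((re) => bundleSource.match(re))
    .filter(Boolean)
    .map((m) => m[0]);
}
```

Run it over everything in your build output (for example .next/static or dist). Public keys like Stripe's pk_live_ are fine in the browser; secret keys are not.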
How to fix
// Create a server-side API route
// /api/generate.ts (server-side)
export async function POST(req) {
  const res = await fetch('https://api.openai.com/...', {
    headers: { Authorization: `Bearer ${process.env.OPENAI_KEY}` },
  });
  return Response.json(await res.json());
}

Missing CORS configuration
Why AI produces this
AI often doesn't configure CORS at all, or sets it to wildcard (*) to avoid errors during development. This configuration makes it to production.
Consequence
If the wildcard is combined with credential support, or the server reflects whatever origin it receives, any website can read API responses and make authenticated requests on behalf of your users.
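One subtlety worth checking for: the origin must be matched exactly, not by substring. AI-generated code sometimes uses includes(), which a lookalike domain bypasses (the domains below are illustrative):

```javascript
// Substring matching is bypassable; exact matching is not.
const naiveAllowed = (origin) => origin.includes('yourapp.com');
const strictAllowed = (origin) =>
  ['https://yourapp.com', 'https://www.yourapp.com'].includes(origin);

// An attacker registers yourapp.com.evil.com:
// naiveAllowed('https://yourapp.com.evil.com')  -> true (bypassed)
// strictAllowed('https://yourapp.com.evil.com') -> false
```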
How to fix
// Restrict to your actual domains
app.use(cors({
  origin: ['https://yourapp.com'],
  credentials: true,
}));

No CSRF protection
Why AI produces this
AI doesn't add CSRF tokens because they add complexity and aren't needed for the basic demo to work.
Consequence
Attackers can trick logged-in users into making unwanted requests: transferring funds, changing settings, deleting data.
How to fix
// Use SameSite cookies + CSRF tokens
res.cookie('session', token, { sameSite: 'strict' });
// Plus: generate and validate CSRF tokens on state-changing requests

SQL injection through string concatenation
Why AI produces this
AI sometimes builds SQL queries with string interpolation instead of parameterized queries, especially in quick prototypes.
Consequence
Attackers can read, modify, or delete your entire database. Full data breach.
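To see why concatenation is dangerous, trace what happens the moment the input stops being a number (the query string here is illustration only; never run it against a real database):

```javascript
// userId is attacker-controlled input from the request
const userId = '1 OR 1=1';
const unsafeQuery = `SELECT * FROM users WHERE id = ${userId}`;
// The input has become part of the SQL grammar:
//   SELECT * FROM users WHERE id = 1 OR 1=1
// That WHERE clause is true for every row in the table.
```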
How to fix
// NEVER: db.query(`SELECT * FROM users WHERE id = ${userId}`)
// ALWAYS use parameterized queries:
db.query('SELECT * FROM users WHERE id = $1', [userId]);

Missing security headers
Why AI produces this
AI rarely adds security headers because they don't affect functionality. The app works fine without CSP, HSTS, or X-Frame-Options.
Consequence
XSS attacks succeed without CSP. Man-in-the-middle attacks work without HSTS. Clickjacking works without X-Frame-Options.
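To check whether you're affected, look at the headers your production site actually returns (for example with curl -I). A small sketch of the comparison against a baseline list:

```javascript
// Report which baseline security headers a response is missing.
const REQUIRED_HEADERS = [
  'strict-transport-security',
  'content-security-policy',
  'x-frame-options',
];

function missingSecurityHeaders(headers) {
  // Header names are case-insensitive, so normalize before comparing.
  const present = new Set(Object.keys(headers).map((h) => h.toLowerCase()));
  return REQUIRED_HEADERS.filter((h) => !present.has(h));
}
```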
How to fix
// next.config.js security headers
headers: [
  { key: 'Strict-Transport-Security', value: 'max-age=31536000; includeSubDomains' },
  { key: 'Content-Security-Policy', value: "default-src 'self'" },
  { key: 'X-Frame-Options', value: 'DENY' },
]

Passwords hashed with MD5 or SHA-256
Why AI produces this
AI sometimes uses crypto.createHash('sha256') for passwords because it's in the Node.js standard library. Fast hashes are terrible for passwords.
Consequence
Attackers can crack billions of SHA-256 hashed passwords per second with a modern GPU.
How to fix
// Use bcrypt
const bcrypt = require('bcryptjs');

const hash = await bcrypt.hash(password, 12);
const match = await bcrypt.compare(input, hash);

No input validation
Why AI produces this
AI trusts that the frontend will send correctly formatted data. It doesn't validate types, lengths, or formats on the server.
Consequence
Type confusion, injection attacks, and application crashes from unexpected input.
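Type confusion is concrete, not theoretical: a JSON body can carry any shape. If a handler assumes age is a number, an attacker can send an object instead, the classic NoSQL-operator-injection shape:

```javascript
// Attacker-controlled request body, parsed as JSON
const body = JSON.parse('{"age": {"$gt": 0}}');

// Code that assumed a number now holds an object. Passed straight into a
// MongoDB query, {"$gt": 0} matches every document.
const ageIsNumber = typeof body.age === 'number';
```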
How to fix
// Use Zod for TypeScript validation
import { z } from 'zod';

const UserSchema = z.object({
  email: z.string().email(),
  name: z.string().min(1).max(100),
  age: z.number().int().positive(),
});

const validated = UserSchema.parse(req.body);

Session doesn't expire
Why AI produces this
AI creates tokens that never expire, or sets expiry to years. It prioritizes user convenience over security.
Consequence
Stolen tokens work forever. Shared computers leave sessions open. Former employees retain access.
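How to check: decode one of your own session tokens and look for an exp claim. JWT payloads are just base64url-encoded JSON, so no library is needed to inspect one (assumes Node 16+ for base64url support):

```javascript
// Returns the exp claim (seconds since epoch), or undefined if the
// token never expires.
function jwtExpiry(token) {
  const payload = JSON.parse(
    Buffer.from(token.split('.')[1], 'base64url').toString('utf8'));
  return payload.exp;
}
```

If jwtExpiry returns undefined, or a date years away, you're affected.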
How to fix
// Short-lived access tokens + refresh token rotation
const accessToken = jwt.sign(payload, secret, { expiresIn: '15m' });
// Use refresh tokens in httpOnly cookies for renewal

Verbose error messages in production
Why AI produces this
AI includes detailed error messages and stack traces to help with debugging. These make it to production.
Consequence
Stack traces reveal file paths, library versions, and database structure. Attackers use this to plan targeted attacks.
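The fix boils down to branching on environment: full detail to your logs, a generic body to the client. A minimal sketch of that split:

```javascript
// What the client sees should depend on where the code is running.
function errorBody(err, env) {
  if (env === 'production') {
    return { error: 'Something went wrong' }; // no internals leak
  }
  return { error: err.message, stack: err.stack }; // dev only
}
```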
How to fix
// Production error handler
app.use((err, req, res, next) => {
  console.error(err); // Log internally
  res.status(500).json({ error: 'Something went wrong' }); // Generic to user
});

No account lockout
Why AI produces this
AI implements login without considering brute force. There's no limit on failed attempts.
Consequence
Attackers can try millions of password combinations against your login endpoint.
How to fix
// Track failed attempts and lock accounts
if (failedAttempts >= 5) {
  lockAccount(userId, 15 * 60 * 1000); // 15 min
  return res.status(429).json({ error: 'Too many attempts' });
}

File uploads without validation
Why AI produces this
AI implements file upload with a bare multer setup: no type checking, no size limits, no sanitization.
Consequence
Attackers upload executable files, malware, or files that exploit image processing libraries.
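Note that file.mimetype is supplied by the client and can be spoofed. A stronger check reads the file's first bytes (magic numbers) and compares them to the declared type; a sketch:

```javascript
// First bytes of common image formats (magic numbers).
const MAGIC_BYTES = {
  'image/jpeg': Buffer.from([0xff, 0xd8, 0xff]),
  'image/png': Buffer.from([0x89, 0x50, 0x4e, 0x47]),
};

// Does the file content actually match the declared type?
function contentMatchesType(fileBuffer, declaredType) {
  const magic = MAGIC_BYTES[declaredType];
  if (!magic) return false;
  return fileBuffer.subarray(0, magic.length).equals(magic);
}
```

A PHP script renamed to photo.jpg passes a mimetype filter but fails this check.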
How to fix
// Validate file type, size, and content
const upload = multer({
  limits: { fileSize: 5 * 1024 * 1024 }, // 5MB
  fileFilter: (req, file, cb) => {
    const allowed = ['image/jpeg', 'image/png', 'image/webp'];
    cb(null, allowed.includes(file.mimetype));
  },
});

Secrets committed to git history
Why AI produces this
During development, AI puts connection strings and keys directly in code. Even if removed later, they remain in git history.
Consequence
Bots scan public repos and find secrets within minutes. Your keys are compromised before you notice.
How to fix
# Add to .gitignore BEFORE first commit
.env*
*.pem
*.key
# If already committed: rotate ALL secrets immediately

No HTTPS enforcement
Why AI produces this
AI develops on localhost (HTTP) and doesn't add HTTPS redirect or HSTS headers for production.
Consequence
Credentials sent over HTTP are visible to anyone on the network. Coffee shop WiFi becomes an attack vector.
How to fix
// Force HTTPS redirect
if (req.headers['x-forwarded-proto'] !== 'https') {
  return res.redirect(301, `https://${req.hostname}${req.url}`);
}

Database publicly accessible
Why AI produces this
AI uses default database configs where the DB accepts connections from anywhere (0.0.0.0). Convenient for dev, catastrophic for prod.
Consequence
Attackers scan the internet for open databases. Ransomware bots encrypt your data and demand Bitcoin.
How to fix
# PostgreSQL: restrict to app servers only
# pg_hba.conf
host    all    all    10.0.0.0/8    scram-sha-256   # Only private network
# Deny all other connections

Missing logout functionality
Why AI produces this
AI sometimes implements login but forgets logout, or only clears the client-side token without invalidating the server session.
Consequence
Users can't properly log out. Stolen tokens remain valid. Shared devices leave sessions open.
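The deny list in the fix below only needs to remember a revoked token until it would have expired anyway. A minimal in-memory sketch (a shared store like Redis is the usual choice once you run more than one server):

```javascript
const denyList = new Map(); // token -> expiry timestamp (ms)

// Remember a revoked token until its original expiry.
function revoke(token, expiresAtMs) {
  denyList.set(token, expiresAtMs);
}

// Check on every authenticated request.
function isRevoked(token, now = Date.now()) {
  const expiry = denyList.get(token);
  if (expiry === undefined) return false;
  if (expiry <= now) { denyList.delete(token); return false; } // expired anyway
  return true;
}
```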
How to fix
// Server-side session invalidation
app.post('/api/logout', (req, res) => {
  // Add token to deny list
  denyList.add(req.token, req.tokenExpiry);
  res.clearCookie('session');
  res.json({ success: true });
});

Source maps enabled in production
Why AI produces this
Build tools generate source maps by default. AI doesn't disable them for production builds.
Consequence
Anyone can view your original source code, making it trivial to find vulnerabilities and business logic flaws.
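How to check: open a production .js bundle and look at its last line. A sourceMappingURL comment means the map is being shipped. The check is one regex:

```javascript
// True if a bundle references an external source map.
function exposesSourceMap(bundleSource) {
  return /\/\/# sourceMappingURL=\S+\.map/.test(bundleSource);
}
```

You can also just request the .map URL directly; a 200 response confirms the leak.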
How to fix
// next.config.js
module.exports = {
  productionBrowserSourceMaps: false,
};

Admin routes not properly protected
Why AI produces this
AI creates admin pages and relies only on frontend route guards. No server-side authorization check.
Consequence
Attackers bypass the frontend and call admin API endpoints directly. Full admin access without authorization.
How to fix
// Server-side middleware for admin routes
function requireAdmin(req, res, next) {
  if (req.user?.role !== 'admin') {
    return res.status(403).json({ error: 'Forbidden' });
  }
  next();
}

app.use('/api/admin', requireAdmin);

No monitoring or alerting
Why AI produces this
AI builds features, not observability. There's no logging of security events, no alerting on suspicious activity, no audit trail.
Consequence
You don't know you've been breached until your users tell you, or worse, until it's on the news.
How to fix
// Log security events
logger.warn('Failed login attempt', {
  email,
  ip: req.ip,
  userAgent: req.headers['user-agent'],
  timestamp: new Date().toISOString(),
});
// Alert on: 10+ failed logins/hour, new admin creation, mass data export

Cybrove's App Security Scanner checks for all 20 of these automatically.
Try Cybrove