Security Research · April 8, 2026 · 8 min read

I Scanned 100 AI-Built SaaS Apps — 34% Had Exposed API Keys

We ran our security scanner against 100 SaaS products built with AI coding tools — Cursor, Claude, ChatGPT, v0. The results were worse than we expected. Stripe secret keys, OpenAI tokens, Supabase service roles — sitting in public JavaScript bundles for anyone to grab.

- 100 apps scanned
- 34% had exposed keys
- 57 total keys found
- 23 were critical severity

Why AI-Generated Code Leaks Secrets

When you ask Cursor to "add Stripe payments" or tell Claude to "connect to Supabase," the AI generates working code fast. But it often puts secret keys directly in client-side components.

The AI doesn't distinguish between server-side and client-side code. It just makes it work. And "working" means the key is in a React component, compiled into a JavaScript bundle, and downloaded by every visitor's browser.

This is not hypothetical

We found live Stripe secret keys (sk_live_) in 11 apps. These keys can create charges, refund payments, and read customer data. We responsibly disclosed all findings to the affected developers.

What We Found — The Breakdown

Here's the breakdown of exposed secrets across all 100 apps:

| Secret Type | Apps Affected | Severity | Risk |
| --- | --- | --- | --- |
| Stripe Secret Key (sk_live_) | 11 | Critical | Create charges, read customer data, issue refunds |
| OpenAI API Key | 8 | High | Run API calls on your account, rack up costs |
| Supabase Service Role Key | 7 | Critical | Full database access, bypass RLS policies |
| AWS Access Key | 4 | Critical | Access any AWS service on your account |
| Anthropic/Claude Key | 3 | High | Run API calls, consume credits |
| SendGrid API Key | 3 | High | Send emails as your domain, phishing risk |
| Twilio Auth Token | 2 | High | Send SMS, make calls on your account |
| GitHub Personal Token | 2 | High | Access private repos, push code |
| Other (Razorpay, Slack, etc.) | 5 | Medium-High | Varies by service |

The "Vibe Coding" Problem

Vibe coding — using AI to generate entire applications by describing what you want — is exploding. And it's creating a new class of security vulnerabilities that traditional scanners miss.

The pattern is always the same:

  1. Developer prompts AI: "Add Stripe checkout to my Next.js app"
  2. AI generates a component with `const stripe = new Stripe('sk_live_xxx')`
  3. Developer sees it works, ships it
  4. The secret key is now in the compiled JavaScript bundle
  5. Anyone can open DevTools → Sources → find the key

The fix is simple: move the key to a server-side API route and use environment variables. But AI tools don't do this by default, and developers in "vibe mode" don't check.
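After that fix, the browser-side code contains no secret at all. A minimal sketch of what the client half looks like, assuming a hypothetical /api/checkout route that returns a { url } payload (both names are ours, for illustration):

```typescript
// Hedged sketch: the client after the fix. No secret appears here.
// The browser only calls your own API route; the Stripe secret key
// stays in process.env on the server behind that route.
export async function startCheckout(priceId: string): Promise<string> {
  const res = await fetch("/api/checkout", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ priceId }),
  });
  if (!res.ok) throw new Error(`Checkout failed: ${res.status}`);
  const { url } = await res.json();
  return url; // redirect the user to the hosted checkout page
}
```

Anyone inspecting the compiled bundle now sees only a fetch to your own endpoint, which is exactly what you want.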

How to Check If Your App Is Affected

It takes 30 seconds:

  1. Go to AI Exposure Tool's free security scanner
  2. Enter your URL
  3. The scanner downloads your JavaScript bundles (passive, read-only — nothing touches your server)
  4. It scans against 17 secret patterns (Stripe, OpenAI, AWS, Supabase, etc.)
  5. You get a security grade (A-F) with exactly which keys are exposed and where
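The core of step 4, matching bundle text against known key formats, can be sketched like this. The three regexes below are illustrative stand-ins for the full set of 17:

```typescript
// Hedged sketch of secret detection over a downloaded JS bundle.
// Three illustrative patterns; a real scanner checks many more and
// also filters out obvious test/placeholder keys.
const SECRET_PATTERNS: Record<string, RegExp> = {
  "Stripe secret key": /sk_live_[0-9a-zA-Z]{24,}/g,
  "OpenAI API key": /sk-[A-Za-z0-9_-]{20,}/g,
  "AWS access key ID": /AKIA[0-9A-Z]{16}/g,
};

export function scanBundle(source: string): { type: string; match: string }[] {
  const findings: { type: string; match: string }[] = [];
  for (const [type, pattern] of Object.entries(SECRET_PATTERNS)) {
    for (const m of source.matchAll(pattern)) {
      findings.push({ type, match: m[0] });
    }
  }
  return findings;
}
```

Because the scan only reads the JavaScript your site already serves to every visitor, it needs no credentials and never touches your server.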

Free Security Scanner

19-point check. Detects 17 secret types in JS bundles. Also checks security headers, .env exposure, and source maps.

Scan your site free

How to Fix Exposed Keys

If you find exposed keys, do this immediately:

1. Rotate the key NOW

Go to your provider's dashboard (Stripe, OpenAI, AWS, etc.) and generate a new key. The old one is compromised.

2. Move to environment variables

Put the key in a .env.local file (add .env* to .gitignore). Access it via process.env.STRIPE_SECRET_KEY on the server side only.
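A minimal helper in that spirit, reading the key from the environment and failing fast if it is missing (the function name is ours, not from any library):

```typescript
// Hedged sketch: server-only access to a secret. If the variable is
// missing (e.g. .env.local was never loaded), fail loudly at startup
// instead of falling back to a hardcoded value.
export function requireServerSecret(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`${name} is not set; add it to .env.local`);
  }
  return value;
}

// Usage, on the server only:
// const stripeKey = requireServerSecret("STRIPE_SECRET_KEY");
```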

3. Use server-side API routes

In Next.js, create /app/api/checkout/route.ts. The Stripe call happens on the server — the key never reaches the browser.
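A sketch of such a route, calling Stripe's REST API directly so the example stays dependency-free (the official stripe npm package is the more common choice; the route path, request body shape, and success/cancel URLs are placeholders):

```typescript
// app/api/checkout/route.ts (sketch). The secret key is read from the
// environment on the server; the browser only ever receives the
// resulting session URL, never the key.
export async function POST(req: Request): Promise<Response> {
  const key = process.env.STRIPE_SECRET_KEY;
  if (!key) return new Response("Server misconfigured", { status: 500 });

  const { priceId } = await req.json();
  // Stripe's API expects form-encoded bodies and Bearer auth.
  const stripeRes = await fetch("https://api.stripe.com/v1/checkout/sessions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${key}`,
      "Content-Type": "application/x-www-form-urlencoded",
    },
    body: new URLSearchParams({
      mode: "payment",
      "line_items[0][price]": priceId,
      "line_items[0][quantity]": "1",
      success_url: "https://example.com/success",
      cancel_url: "https://example.com/cancel",
    }),
  });
  const session = await stripeRes.json();
  return Response.json({ url: session.url });
}
```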

4. Check your git history

If the key was ever committed, it remains in your git history even after you remove it from the working tree. Purge it with a history-rewriting tool such as git filter-repo or BFG Repo-Cleaner (git filter-branch also works, but git itself now recommends filter-repo instead).

5. Re-scan to verify

Run the security scanner again to confirm the key is no longer in your JS bundles.

Prevention: How to Stop AI From Leaking Your Keys

Add this to your AI coding prompts:

IMPORTANT: Never put API keys, secret keys, or tokens
directly in client-side code, React components, or any
file that gets compiled into a JavaScript bundle.

Always use:
- Server-side API routes (/app/api/ in Next.js)
- Environment variables (process.env.KEY_NAME)
- .env.local file (added to .gitignore)

For Stripe: use publishable key (pk_) on client,
secret key (sk_) on server only.

Also Check Your AI Visibility

While you're auditing your security, check if AI assistants can actually find and recommend your product. Security and visibility are both trust signals — AI platforms deprioritize sites with security issues.

Free AI Visibility Scan

Check 25+ signals. See if ChatGPT, Perplexity, and Gemini can find and recommend your product.

Run AI visibility scan

FAQ

How do API keys get exposed in JavaScript bundles?

When developers use AI coding tools like Cursor or Claude to add features like payments or authentication, the AI often generates code with hardcoded secret keys in client-side components. These keys end up in the compiled JavaScript bundle that browsers download, making them visible to anyone who inspects the page source.

How can I check if my API keys are exposed?

Use a free security scanner like AI Exposure Tool's security scan. It passively downloads your JavaScript bundles and scans them against 17 known API key patterns (Stripe, OpenAI, AWS, Supabase, etc.) without touching your server. Takes about 30 seconds.

What's the difference between publishable and secret API keys?

Publishable keys (like Stripe's pk_live_) are designed to be in client-side code. Secret keys (like Stripe's sk_live_) must NEVER be in JavaScript bundles. Secret keys can access your account, create charges, read customer data, and cause financial damage.

What should I do if my API key is exposed?

1) Immediately rotate the key in your provider's dashboard. 2) Move the key to server-side environment variables. 3) Use a .env file that's in .gitignore. 4) Use server-side API routes (Next.js /api, Express routes) to make API calls. 5) Re-scan your site to confirm the fix.

Methodology

We selected 100 SaaS products launched in the last 6 months from Product Hunt, Indie Hackers, and Twitter/X build-in-public threads. Selection criteria: built by solo developers or small teams, publicly stated use of AI coding tools (Cursor, Claude, ChatGPT, v0, Bolt).

Each site was scanned using our secret detection engine which downloads JavaScript bundles and matches against 17 regex patterns for known API key formats. All scans were passive and read-only. No credentials were used or tested. Affected developers were notified before publication.

Don't be part of the 34%

Scan your site in 30 seconds. 19 checks, 17 secret patterns, completely free.