Research · 2026-03-11 · LaunchGuard Security Research

We scanned 96 Supabase apps.
2.37 million records were exposed.

We scanned 96 startups with a Supabase backend listed on a popular startup directory. Each app had revenue. 42% had sensitive user data readable by anyone with the anon key: profiles, emails, payments. Here's what we found.

96
Apps scanned
40
With sensitive data
2.37M
Total rows exposed
42%
Exposure rate
Methodology

How we tested

01

Discovery

LaunchGuard loaded each app in a headless browser and extracted the Supabase project URL and anon key from the JavaScript bundle.

02

Enumeration

Used the anon key to query every table, RPC function, and storage bucket discoverable through the Supabase client library.

03

Assessment

For each accessible resource, counted exposed rows and identified sensitive tables by name: users, profiles, emails, payments. No user data was read.

All scans used only the publicly available anon key, the credential shipped to everyone who visits the website. No authentication tokens, no brute force, no exploits: just the Supabase client library with the credentials the app itself provides. Only row counts and table names were read; no user data was transferred.

Root Causes

The #1 problem: no RLS at all

69 of 96 apps had tables with no Row Level Security enabled, or tables misconfigured as public. The Supabase anon key gave full read access to every row. The remaining vulnerable apps had RLS enabled, but their policies were scoped to the authenticated role instead of auth.uid().
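The baseline fix is one statement per table (the table name here is illustrative). With RLS enabled and no policies defined yet, Postgres denies all access by default, including through the anon key:

```sql
-- Enable RLS; until policies are added, anon and authenticated reads are denied
ALTER TABLE public.profiles ENABLE ROW LEVEL SECURITY;
```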

Broken: allows any logged-in user to read all rows

CREATE POLICY "users_read"
ON public.profiles FOR SELECT
TO authenticated
USING (true);
-- Any logged-in user reads ALL profiles
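A corrected version of the same policy scopes reads to the row owner. This sketch assumes the profiles table has an id column holding the Supabase auth user id, the common convention:

```sql
-- Fixed: each user reads only their own profile row
CREATE POLICY "users_read_own"
ON public.profiles FOR SELECT
TO authenticated
USING (auth.uid() = id);
```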

What we found

72%
No RLS enabled

Tables left with default public access. Anyone with the anon key reads everything.

58%
Exposed RPCs

56 apps had database functions callable by anonymous users. 266 exposed RPCs total across all apps.

70%
Sensitive user data in exposed tables

28 of the 40 sensitive apps had tables named users, profiles, emails, or payments in their top exposed tables.
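Both of the top findings can be audited from the SQL editor using the standard Postgres catalog views. A sketch, assuming your app tables live in the public schema:

```sql
-- Tables with RLS disabled
SELECT tablename
FROM pg_tables
WHERE schemaname = 'public' AND NOT rowsecurity;

-- Functions the anon role can execute
SELECT p.proname
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE n.nspname = 'public'
  AND has_function_privilege('anon', p.oid, 'EXECUTE');
```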

Case Studies

Real apps. Real exposure.

Names are anonymized. The vulnerability patterns are not.


Tables exposed
translations, profiles, conversations, services, user_plan_status
What happened

This app was hacked weeks before we scanned it. The attacker charged 175 customers 500€ each through Stripe, leaving the founder with over 2,000€ in fees. The founder's LinkedIn post about the breach got 200k views.

After the hack, the founder enabled RLS. But the policies used the authenticated role instead of scoping to auth.uid(), so any logged-in user could still read all 1,831 profiles, including emails, balances, and payment settings. LaunchGuard discovered this vulnerability, which penetration testers had overlooked.

Enabling RLS is step one. Testing it from outside is step two.
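One way to catch this pattern from the inside is to list policies whose USING clause is a bare true (the string below is how the pg_policies catalog view typically renders such a qual):

```sql
-- Policies that grant blanket access to an entire role
SELECT schemaname, tablename, policyname, roles
FROM pg_policies
WHERE qual = 'true';
```

This is a heuristic, not a substitute for the external test: it only flags the exact USING (true) case, and it requires database access that an outside auditor does not have.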

Tables exposed
queries, sources, users, image_text_details
What happened

13 tables with no RLS. 112,457 user records readable by anyone. On top of that, 151,044 query logs contained the questions users had asked the chatbot.

Query logs are user data. Every question a user types into your AI feature is stored, and if the table has no RLS, it's public.

Tables exposed
team_trends, team_stats, polymarket_wallet_results, polymarket_positions
What happened

33 tables with no RLS enabled. The largest tables were analytics data, but the exposed polymarket wallet tables contained user position and transaction data that were readable by anyone with the Supabase URL.

Real revenue, real users, real exposure. This app makes nearly 2k€/month with no row-level security on any table.

Scale

The numbers

12.8K€/mo

Highest MRR with exposed data

21 tables & 8,964 rows including user emails and session data.

752K

Most rows exposed (single app)

20€/month MRR. 745,990 transcript segments + 373 user records, all public.

112K

Most user records exposed

A users table with 112,457 rows, readable by anyone with the anon key.

116

Most tables exposed (single app)

One app had 116 tables with no RLS. 48,570 rows across all of them.

The Verification Gap

Why AI-generated code makes this worse

AI coding tools generate RLS policies that look correct. The SQL is valid. The policy names make sense. But nobody tests them against the live database from the outside.

The developer asks Cursor or Lovable to “add security.” The tool writes policies. The developer sees green checkmarks. But the policies use TO authenticated USING (true) instead of scoping to the actual user. The code looks secure. The database is wide open.

This is the verification gap: the difference between what your code claims to do and what your deployed app actually does.

“The problem is the checking and retesting.”

— Founder of a breached Supabase app, after losing over 2,000€ to a single exploit

Internal tests check your code. External tests check your app. When an LLM writes your security policies, external testing is the only way to know if they actually work.

Is your Supabase app exposed?

The free scan takes under 40 seconds. It checks every table, RPC, and storage bucket reachable with your anon key, the same way we found most of the issues in this study.

Scan your app

No account required. Results in under a minute.