We scanned 96 Supabase apps.
2.37 million records were exposed.
We scanned 96 startups with a Supabase backend listed on a popular startup directory. Each app had revenue. 42% had sensitive user data readable by anyone with the anon key: profiles, emails, payments. Here's what we found.
How we tested
Discovery
LaunchGuard loaded each app in a headless browser and extracted the Supabase project URL and anon key from the JavaScript bundle.
Enumeration
Used the anon key to query every table, RPC function, and storage bucket discoverable through the Supabase client library.
Assessment
For each accessible resource, counted exposed rows and identified sensitive tables by name: users, profiles, emails, payments. No user data was read.
All scans used only the publicly available anon key, which is shipped to everyone who visits the website. No authentication tokens, no brute force, no exploits. Just the Supabase client library with the credentials the app itself provides. Only row counts and table names were read; no user data was transferred.
The #1 problem: no RLS at all
69 of 96 apps had tables with no Row Level Security enabled, or misconfigured as public. The Supabase anon key gave full read access to every row. The remaining vulnerable apps had RLS enabled but policies scoped to authenticated instead of auth.uid().
CREATE POLICY "users_read"
ON public.profiles FOR SELECT
TO authenticated
USING (true);
-- Any logged-in user
-- reads ALL profiles
What we found
Tables left with default public access. Anyone with the anon key reads everything.
56 apps had database functions callable by anonymous users. 266 exposed RPCs total across all apps.
28 of the 40 sensitive apps had tables named users, profiles, emails, or payments in their top exposed tables.
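The no-RLS pattern is also visible from inside the database itself. A sketch of an audit query against the Postgres catalog, runnable from the Supabase SQL editor:

```sql
-- List ordinary tables in the public schema where Row Level
-- Security is disabled. Every table this returns is fully
-- readable through the anon key unless privileges were
-- revoked separately.
SELECT c.relname AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'public'
  AND c.relkind = 'r'
  AND NOT c.relrowsecurity;
```

If this query returns anything, those tables are candidates for exactly the exposure described above.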
Real apps. Real exposure.
Names are anonymized. The vulnerability patterns are not.
Got hacked weeks before we scanned. Attacker charged 175 customers 500€ each through Stripe. Over 2,000€ in fees. The founder's LinkedIn post about the breach got 200k views.
After the hack, the founder enabled RLS. But the policies used the authenticated role instead of scoping to auth.uid(), so any logged-in user could still read all 1,831 profiles, including emails, balances, and payment settings. LaunchGuard discovered a vulnerability that penetration testers had overlooked.
Enabling RLS is step one. Testing it from outside is step two.
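For contrast with the broken policy shown earlier, a read policy scoped to the requesting user might look like the following sketch. It assumes the profiles table keys rows by the owner's auth user id in an `id` column; adjust the column to match your schema.

```sql
-- Sketch of a user-scoped read policy. Each authenticated user
-- can read only the row whose id matches their own auth.uid().
CREATE POLICY "users_read_own"
ON public.profiles FOR SELECT
TO authenticated
USING (id = auth.uid());
```

The difference is one line: `USING (true)` grants every logged-in user every row, while `USING (id = auth.uid())` restricts reads to the row owner.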
13 tables with no RLS. 112,457 user records readable by anyone. In addition, 151,044 query logs containing the questions users had asked the chatbot.
Query logs are user data. Every question a user types into your AI feature is stored, and if the table has no RLS, it's public.
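The baseline fix is a single statement. With RLS enabled and no policies defined, Postgres denies all reads through the API by default (the table name here is illustrative):

```sql
-- Enable RLS on the logs table. With no policies defined,
-- the anon and authenticated roles can no longer read any
-- rows through the Supabase API.
ALTER TABLE public.query_logs ENABLE ROW LEVEL SECURITY;
```

Policies granting the access you actually intend come afterward; enabling RLS first closes the door, then each policy reopens it deliberately.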
33 tables with no RLS enabled. The largest tables held analytics data, but the exposed Polymarket wallet tables contained user position and transaction data readable by anyone with the Supabase URL.
Real revenue, real users, real exposure. This app makes nearly 2k€/month with no row-level security on any table.
The numbers
Highest MRR with exposed data
21 tables & 8,964 rows including user emails and session data.
Most rows exposed (single app)
20€/month MRR. 745,990 transcript segments + 373 user records, all public.
Most user records exposed
A users table with 112,457 rows, readable by anyone with the anon key.
Most tables exposed (single app)
One app had 116 tables with no RLS. 48,570 rows across all of them.
Why AI-generated code makes this worse
AI coding tools generate RLS policies that look correct. The SQL is valid. The policy names make sense. But nobody tests them against the live database from the outside.
The developer asks Cursor or Lovable to “add security.” The tool writes policies. The developer sees green checkmarks. But the policies use TO authenticated USING (true) instead of scoping to the actual user. The code looks secure. The database is wide open.
This is the verification gap: the difference between what your code claims to do and what your deployed app actually does.
“The problem is the checking and retesting.”
Internal tests check your code. External tests check your app. When an LLM writes your security policies, external testing is the only way to know if they actually work.
Is your Supabase app exposed?
The free scan takes under 40 seconds. It checks every table, RPC, and storage bucket reachable with your anon key, the same way we found the majority of the issues in this study.
Scan your app
No account required. Results in under a minute.