Postgres Row Level Security: The Feature That Will Save You, If You Use It Right
A Note on Expertise
I'm not writing as an "expert" or claiming to have all the answers. I'm a builder sharing my journey on what worked, what didn't, and what I learned along the way. The tech landscape changes constantly, and with AI tools now available, the traditional notion of "expertise" is evolving. Take what resonates, verify what matters to you, and forge your own path. This is simply my experience, offered in the hope it helps fellow builders.
A few months into building Havnwright, I wrote a post about the centralised authentication pattern that every serious web app eventually adopts. Three layers of defence: a single auth provider at the application layer, query-level enforcement at the data access layer, and Row Level Security at the database layer.
That post got a lot of attention. The most common question readers sent me was some version of: "Okay, application-layer I understand. Query helpers I understand. But RLS feels like a black box. When do I use it? How do I use it without tanking performance? Is it even worth it for a solo project?"
Those are fair questions. RLS is one of the most powerful security features Postgres offers, and it is also one of the most misunderstood. I have spent enough time in the trenches with it now that I can answer those questions with real numbers and real scars.
This post is long. If you are running a Supabase project, or any Postgres-backed product that handles user data, it is worth reading.
What RLS actually is
Row Level Security is a Postgres feature that has been around since 9.5. It lets you attach per-row visibility and modification rules directly to a table. When a query comes in, Postgres rewrites it to inject your rules as additional filters before the query runs. The end result is simple: even if your application code has a bug, the database will refuse to return rows the current user should not see.
This is a database-level mechanism, not a library or middleware. That matters. If your auth layer has a bug, if your middleware misroutes, if a developer forgets to add .eq('user_id', userId) to one query out of five hundred, RLS still holds the line.
Enabling it is two steps, and the second one catches a lot of people:
ALTER TABLE tasks ENABLE ROW LEVEL SECURITY;
Enabled with no policies, the table is inaccessible to everyone except superusers and the table owner. This is the "default deny" property, and it is load-bearing for safety. Equally important, until you run ENABLE ROW LEVEL SECURITY, any policies you create are inert. I have seen production databases with beautiful policies that were never actually enforced because someone missed that one ALTER TABLE.
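Putting the two steps together, a minimal safe sequence looks like this (table and policy names are illustrative):

```sql
-- Step 1: turn enforcement on. Without this, any policies are inert.
ALTER TABLE tasks ENABLE ROW LEVEL SECURITY;

-- Step 2: grant access back, narrowly. Until a policy exists,
-- default deny blocks everyone except superusers and the table owner.
CREATE POLICY "owners read own" ON tasks
  FOR SELECT TO authenticated
  USING ((select auth.uid()) = user_id);
```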
Why not just filter in the application
This is the question I get most often. If you are already filtering every query by user_id in your application code, why do you need another layer?
The honest answer is that every application-layer filter is a promise you make to yourself. RLS turns that promise into a database constraint. The difference matters the first time one of the following happens:
- A new developer joins and forgets the filter on one query.
- You refactor the data access layer and a filter accidentally gets moved or dropped.
- A future endpoint bypasses the query helper because it was "simpler."
- An internal admin tool accidentally runs user queries with elevated privileges.
- A migration script does something weird and touches user data directly.
Application-layer filtering is fine until it is not. The day it breaks, you do not get a second chance. RLS is the "and also" that catches the slip.
I am not saying RLS replaces application-layer filters. The right answer is both. Filter explicitly in application queries so the planner has a clear path. Enforce RLS at the database so the filter cannot be forgotten.
The Supabase implementation
On vanilla Postgres, RLS policies usually reference current_user or a session variable. On Supabase, things are a bit more elegant.
Supabase uses PostgREST as the layer between your client and the database. When a request comes in, PostgREST verifies the JWT, extracts the claims, and calls SET LOCAL on the Postgres connection to inject both the user's role and their JWT claims as session variables. Then it runs the query.
Two functions make this accessible inside policies:
- auth.uid() returns the current user's UUID (the sub claim of the JWT), or NULL if unauthenticated.
- auth.jwt() returns the whole JWT claims object as JSON.
Both of these read from current_setting('request.jwt.claims', true), which PostgREST sets per request. No disk I/O. No round trip.
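Conceptually, each request does something like the following (a simplified sketch; the real PostgREST implementation differs in detail, and the claim values below are illustrative):

```sql
BEGIN;

-- PostgREST has already verified the JWT. It assumes the role
-- named in the token and injects the claims for this transaction.
SET LOCAL ROLE authenticated;
SET LOCAL request.jwt.claims =
  '{"sub": "6f1a0000-0000-0000-0000-000000000000", "role": "authenticated"}';

-- auth.uid() is essentially sugar over this lookup:
SELECT (current_setting('request.jwt.claims', true)::json ->> 'sub')::uuid;

-- ...your actual query runs here, with policies applied...

COMMIT;
```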
There is one critical gotcha on auth.jwt() that is easy to miss. The JWT has two metadata fields:
- raw_app_meta_data is set server-side and users cannot modify it. Safe for authorisation.
- raw_user_meta_data is user-writable via supabase.auth.updateUser(). Never use this for authorisation. If you check auth.jwt() -> 'user_metadata' ->> 'role' = 'admin', a user can grant themselves admin by calling the update endpoint.
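As a concrete contrast (policy and table names hypothetical), only the app_metadata variant is safe:

```sql
-- UNSAFE: user_metadata is writable by the user via updateUser(),
-- so anyone can set their own "role" claim.
CREATE POLICY "admins read all" ON tasks
  FOR SELECT TO authenticated
  USING ((auth.jwt() -> 'user_metadata' ->> 'role') = 'admin');

-- SAFE: app_metadata can only be set server-side.
CREATE POLICY "admins read all" ON tasks
  FOR SELECT TO authenticated
  USING ((auth.jwt() -> 'app_metadata' ->> 'role') = 'admin');
```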
anon, authenticated, service_role
Supabase ships three roles that matter for RLS:
- anon: requests made with the anonymous key. RLS applies. Policies can specify TO anon to allow public access.
- authenticated: logged-in users. Their JWT carries their UUID. RLS applies. auth.uid() works. Most of your policies will target this role.
- service_role: the master key. Has BYPASSRLS. All policies are ignored.
The service role is the most important one to understand, because it is the source of the most common Supabase security incident I see.
The service_role key must never reach a browser. It is a master key. It belongs in server-side code, edge functions, or trusted workers. Exposing it even once is a full database compromise.
The subtler trap is what happens when you use the service role server-side for user operations. It is tempting. When you are in a Next.js API route and authentication is awkward to pass through, reaching for supabaseAdmin feels like the pragmatic move.
Do not do it. The moment you run a user-scoped query with the service role, RLS is silently bypassed. If that handler is ever reached with the wrong user context, a session mismatch, an impersonation bug, you have just leaked data across tenants. Use a user-scoped server client that passes the user's JWT.
The patterns that actually work
Here are the policies I use most often. These are all in production on Havnwright.
A user owns their rows:
CREATE POLICY "owners read own" ON tasks
  FOR SELECT TO authenticated
  USING ((select auth.uid()) = user_id);

CREATE POLICY "owners insert own" ON tasks
  FOR INSERT TO authenticated
  WITH CHECK ((select auth.uid()) = user_id);

CREATE POLICY "owners update own" ON tasks
  FOR UPDATE TO authenticated
  USING ((select auth.uid()) = user_id)
  WITH CHECK ((select auth.uid()) = user_id);

CREATE POLICY "owners delete own" ON tasks
  FOR DELETE TO authenticated
  USING ((select auth.uid()) = user_id);
The (select auth.uid()) wrap is not cosmetic. We will come back to it in the performance section.
Team membership:
CREATE POLICY "team members read projects" ON projects
  FOR SELECT TO authenticated
  USING (team_id IN (
    SELECT team_id FROM team_members
    WHERE user_id = (select auth.uid())
  ));
Note the join direction. The subquery returns "team IDs this user belongs to," typically a small set, and the planner then filters projects by team_id IN (...). The reversed form, team_members.user_id = auth.uid() AND team_members.team_id = projects.team_id, can force the planner to walk every project row. Same result, very different performance.
Public read, owner write:
CREATE POLICY "public read" ON posts
  FOR SELECT TO anon, authenticated
  USING (true);

CREATE POLICY "authors write" ON posts
  FOR INSERT TO authenticated
  WITH CHECK ((select auth.uid()) = author_id);

CREATE POLICY "authors update" ON posts
  FOR UPDATE TO authenticated
  USING ((select auth.uid()) = author_id)
  WITH CHECK ((select auth.uid()) = author_id);
USING versus WITH CHECK (this is where people slip up)
This distinction is in the docs but it is not obvious until you have been burned.
- USING is the visibility predicate. Rows that fail it are silently filtered out. Used on SELECT, UPDATE, DELETE.
- WITH CHECK is the write predicate. Rows that fail it cause the operation to error out. Used on INSERT and UPDATE.
USING silently filters. WITH CHECK loudly rejects.
The Postgres docs specify that if you write an UPDATE policy with only USING, the WITH CHECK implicitly mirrors it. That default sounds convenient, but it is a trap: edit the USING clause later and the implicit WITH CHECK silently drifts with it, opening an asymmetric hole. A user might be able to update their own row and change user_id to someone else's, effectively transferring data ownership without permission.
My rule: always write WITH CHECK explicitly on UPDATE policies, even when it is the same expression as USING. Future edits will not silently break.
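Here is how the drift plays out, as a hypothetical edit to an owner-only policy (the is_team_member helper is illustrative):

```sql
-- Original: only USING is written, so the implicit WITH CHECK
-- mirrors it. Fine for now.
CREATE POLICY "owners update" ON tasks
  FOR UPDATE TO authenticated
  USING ((select auth.uid()) = user_id);

-- Later edit: loosen USING so teammates can edit shared tasks.
-- The implicit WITH CHECK loosens with it, so a teammate can now
-- UPDATE user_id to any value and silently transfer ownership.
-- An explicit WITH CHECK ((select auth.uid()) = user_id) would
-- have kept the write constraint intact.
CREATE POLICY "team can update" ON tasks
  FOR UPDATE TO authenticated
  USING (is_team_member(team_id));
```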
One other thing worth naming. When you do INSERT ... RETURNING or UPDATE ... RETURNING (which the Supabase client issues when you chain .select() onto a write), you need a SELECT policy that exposes the row as well. A write-only policy set is not enough: the UPDATE succeeds, but the RETURNING rows are filtered out and you get nothing back.
The performance trap
This is the section I wish I had read when I started.
Policy expressions are evaluated per row. If your policy calls a function that Postgres cannot prove is stable and leak-proof, it gets called N times for N rows. That is not a problem for a hundred rows. It is catastrophic for a hundred thousand.
The canonical fix on Supabase is to wrap auth.uid() in a (select ...) subquery:
-- Evaluates auth.uid() once per row
USING (auth.uid() = user_id)

-- Evaluates auth.uid() once per statement, caches the result
USING ((select auth.uid()) = user_id)
The second form promotes the function call to an InitPlan node. Postgres runs it once, caches the scalar, then reuses it for every row check. The result is dramatic.
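You can verify which form you have with EXPLAIN; the cached form shows up as an InitPlan node. A sketch of what to look for (output abridged and version-dependent):

```sql
EXPLAIN SELECT * FROM tasks;

-- With USING (auth.uid() = user_id), the function sits in the
-- per-row filter:
--   Seq Scan on tasks
--     Filter: (auth.uid() = user_id)

-- With USING ((select auth.uid()) = user_id), it is hoisted out
-- and evaluated once:
--   Seq Scan on tasks
--     Filter: ($0 = user_id)
--   InitPlan 1 (returns $0)
```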
Supabase published benchmarks on a 100,000-row test table:
| Change | Before | After | Improvement |
|--------|-------:|------:|-------------|
| Indexing the user_id column | 171 ms | <0.1 ms | 1,710x |
| Wrapping auth.uid() in SELECT | 179 ms | 9 ms | ~20x |
| Wrapping a join-based role check | 11,000 ms | 7 ms | 1,571x |
| Extracting to a SECURITY DEFINER helper | 178,000 ms | 12 ms | 14,833x |
That last row is not a typo. 178 seconds down to 12 milliseconds. The right policy architecture is the difference between a query that is unusable and one that is instant.
The second non-negotiable: index the columns your policies reference. If your policy filters by user_id, put a btree index on user_id. If it filters by team_id through a join, index both team_members.user_id and team_members.team_id. RLS can use indexes like any other predicate, but only if the policy expression is in a form the planner can push down.
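For the policies above, that means something like the following (index names illustrative):

```sql
-- Owner policies filter tasks by user_id
CREATE INDEX tasks_user_id_idx ON tasks USING btree (user_id);

-- Team policies hit team_members from both directions,
-- then filter projects by team_id
CREATE INDEX team_members_user_id_idx ON team_members (user_id);
CREATE INDEX team_members_team_id_idx ON team_members (team_id);
CREATE INDEX projects_team_id_idx ON projects (team_id);
```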
And the third: always include the same filter in your client query too. supabase.from('tasks').select('*').eq('user_id', userId). Not because RLS is not enough, but because it gives the planner a hint and you benefit from both filter paths. Belt and braces.
The pitfalls I learned the hard way
The service_role trap
Already covered, but worth repeating. If you reach for the service role in a user-facing code path because it is "simpler," you are opting out of RLS. The moment a session gets confused, you leak data. Use the user's JWT server-side.
Missing WITH CHECK
The asymmetric policy drift problem. Write WITH CHECK explicitly on UPDATE, always.
INSERT policies need WITH CHECK, not USING
This one burned me. I wrote an INSERT policy with USING instead of WITH CHECK. Postgres rejects USING on a FOR INSERT policy outright ("only WITH CHECK expression allowed for INSERT"), and if that error gets swallowed by your migration tooling, you end up with no INSERT policy at all, and writes that fail, or slip through some other permissive policy, in confusing ways. The rule is simple: INSERT takes WITH CHECK, never USING.
-- WRONG: rejected by Postgres
-- ("only WITH CHECK expression allowed for INSERT")
CREATE POLICY x ON tasks
  FOR INSERT TO authenticated
  USING (user_id = (select auth.uid()));

-- RIGHT
CREATE POLICY x ON tasks
  FOR INSERT TO authenticated
  WITH CHECK (user_id = (select auth.uid()));
Recursive RLS
Classic setup: teams has a policy that checks team_members, team_members has a policy that checks teams. Query hits either table, Postgres tries to evaluate both policies, and you get "infinite recursion detected in policy."
Fix is a SECURITY DEFINER function that bypasses RLS for the lookup:
CREATE OR REPLACE FUNCTION is_team_member(tid uuid)
RETURNS boolean
LANGUAGE sql STABLE SECURITY DEFINER
SET search_path = ''
AS $$
  SELECT EXISTS (
    SELECT 1 FROM public.team_members
    WHERE team_id = tid AND user_id = auth.uid()
  );
$$;

CREATE POLICY "team members read teams" ON teams
  FOR SELECT TO authenticated
  USING (is_team_member(id));
Two non-negotiables when you do this. Set search_path = '' explicitly, because SECURITY DEFINER functions without pinned search_path are a known attack vector. And keep the function body as narrow as you can.
Storage buckets are a separate story
Supabase Storage runs its own RLS on the storage.objects table. Your table-level policies do not cover uploads and downloads. Private buckets do not automatically grant access; you still need a policy. This catches a lot of people because it is a separate mental model from the rest of the database.
A per-user folder pattern:
CREATE POLICY "users upload to own folder" ON storage.objects
  FOR INSERT TO authenticated
  WITH CHECK (
    bucket_id = 'user-uploads'
    AND (storage.foldername(name))[1] = (select auth.uid())::text
  );
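The read side follows the same shape, and private buckets need it too, since they grant nothing by default (bucket name illustrative):

```sql
CREATE POLICY "users read own folder" ON storage.objects
  FOR SELECT TO authenticated
  USING (
    bucket_id = 'user-uploads'
    AND (storage.foldername(name))[1] = (select auth.uid())::text
  );
```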
"Just disable RLS for this query"
Gateway drug. Once the team starts disabling RLS to unblock migrations or reports, eventually a production path runs against a table where someone forgot to re-enable it. Do not do it. Use a tightly scoped SECURITY DEFINER function, or a dedicated BYPASSRLS role for identified admin paths.
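If you genuinely need an RLS-free path, make it a named, auditable thing rather than a toggle. A sketch (role name and grants illustrative; on managed platforms like Supabase you may lack the privileges to create BYPASSRLS roles, in which case a narrow SECURITY DEFINER function is the fallback):

```sql
-- A dedicated role used ONLY by the reporting service,
-- never by user-facing code paths.
CREATE ROLE reporting_service LOGIN BYPASSRLS;
GRANT USAGE ON SCHEMA public TO reporting_service;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO reporting_service;
```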
When RLS is not the answer
RLS is not a silver bullet, and there are cases where it gets in the way more than it helps.
Admin and back-office dashboards that cross tenants. Customer support, internal BI, revenue reports. If your admins legitimately need to see across every user's data, RLS fights you. Options: run those queries through a BYPASSRLS role on a separate service, or handle the admin filter at the application layer with strict role checks.
High-throughput OLTP where RLS overhead is unacceptable. If benchmarks show RLS is adding real latency even after the initplan trick and proper indexing, consider application-level filtering with a trusted service. This is rare but real.
Reporting and analytics workloads. Aggregations over many rows suffer disproportionately under RLS because the policy runs per contributing row. Materialised views do not inherit RLS either, they hold a snapshot captured at refresh time. Use a separate reporting role or replicate to a warehouse.
Cross-system authorisation. If you need the same authorisation decisions in Postgres, a search index, S3 objects, and an external API, RLS only covers Postgres. A policy engine like OPA, Cerbos, or Oso unifies the decisions. RLS becomes one enforcement point, not the whole story.
The alternatives, briefly
I get asked about alternatives enough that it is worth listing them honestly.
Application-layer filtering. Every query carries WHERE user_id = :current_user. Transparent, debuggable, no planner surprises. Downside: no defence in depth. One missed filter leaks data.
Middleware-based authorisation. Gateway filters, Next.js middleware, PostgREST pre-request hooks. Centralised, testable, but anything that bypasses the middleware (direct DB, background jobs, cron) is unprotected.
ORM-level scoping. Prisma extensions, Drizzle's first-class RLS helpers, custom middleware that appends tenant filters. Type-safe and hard to forget if wired into a base repository. Raw SQL escapes it.
GraphQL directives. Hasura permissions compile to RLS-like predicates. Pothos/Apollo directives enforce at resolver level. Good for GraphQL-only products. Non-GraphQL consumers (cron, workers) still need something else.
Policy engines. OPA, Cerbos, Oso. Decouple authorisation logic from code and SQL. Write policies in Rego or YAML, evaluate them at the application layer. Powerful for complex rules across systems, but you still need database-level enforcement underneath as the last line.
My opinion: on Supabase, RLS is the right primary choice for user-data isolation, paired with explicit application-layer filters. If your authorisation gets complex, add a policy engine on top. Keep RLS as the floor.
What I would tell another founder
Enable RLS on every user-data table before you write a single query. Default deny is your friend. You can always add policies. You cannot always find the missed filter.
Always wrap auth.uid() in (select ...). This one line of syntax is the difference between usable and unusable performance.
Index the columns your policies reference. Every one of them. Treat it as part of the table's creation, not an optimisation for later.
Write WITH CHECK explicitly on UPDATE, always. Implicit mirroring is fine until someone edits the USING clause and forgets.
Never use the service role for user-scoped operations. Pass the user's JWT, even when it is a little more code.
Test your policies. pgTAP exists. Use it. For SELECT/UPDATE/DELETE, negative tests must assert empty results, not thrown errors, because RLS denials are silent. For INSERT, you can assert thrown errors.
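A minimal pgTAP sketch of that "denials are silent" point (table name and UUIDs illustrative; assumes the pgtap extension is installed):

```sql
BEGIN;
SELECT plan(2);

-- Impersonate a user the way PostgREST would.
SET LOCAL ROLE authenticated;
SET LOCAL request.jwt.claims =
  '{"sub": "00000000-0000-0000-0000-000000000001"}';

-- Reading another user's rows must come back EMPTY, not error.
SELECT is_empty(
  $$ SELECT * FROM tasks
     WHERE user_id = '00000000-0000-0000-0000-000000000002' $$,
  'cannot see other users'' rows'
);

-- Inserting for another user must THROW (WITH CHECK violation).
SELECT throws_ok(
  $$ INSERT INTO tasks (user_id, title)
     VALUES ('00000000-0000-0000-0000-000000000002', 'x') $$,
  '42501',
  'cannot insert rows for other users'
);

SELECT * FROM finish();
ROLLBACK;
```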
Treat RLS as your last line of defence, not your only one. Filter explicitly in application code. Check in middleware where it makes sense. Let RLS catch what the layers above miss.
Know when to step outside it. Admin paths, reporting queries, cross-system decisions. Use a BYPASSRLS role or a SECURITY DEFINER function for those, deliberately and narrowly.
This is one of those features where the difference between "I enabled it" and "I use it well" is about a 1,000x performance gap and a handful of silent security holes. Worth the time to get right.
This is part of a series about building products as a solo founder. Earlier posts cover the centralised authentication pattern that this post extends, why I built Havnwright, what building with AI actually looks like, and every rule is a bug report. More coming.
About the Author
Alireza Elahi is a solo founder building products that solve real problems. Currently working on Havnwright, Publishora, and the Founder Knowledge Graph.