← migration

supabase → briven

port a supabase project onto briven. follow the ten-step playbook on /migration — this page documents the supabase-specific parts (RLS, edge functions, auth, storage).

good news: supabase is already postgres. the schema port is mostly copy-paste; only RLS + edge functions + auth need real work.

schema port — postgres → briven dsl

dump your public schema and translate the CREATE TABLE statements one-to-one into the briven dsl. the column types map directly:

  • text / varchar(n) / integer / bigint / boolean / timestamptz / uuid / jsonb all have direct briven dsl equivalents (see /schema).
  • SERIAL / BIGSERIAL → use a ulid text().primaryKey() with newId('...') from @briven/shared rather than a sequence; this is the briven idiom and avoids the "sequential IDs visible to attackers" class of bugs.
  • postgres enums (CREATE TYPE ...) → text() with application-level validation. the briven dsl doesn't have a first-class enum yet; the validation pattern lives in your function code.
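as a sketch of that application-level validation pattern (the enum name and values here are hypothetical, not from your schema):

```typescript
// hypothetical enum: suppose the supabase schema had
// CREATE TYPE note_status AS ENUM ('draft', 'published');
// the briven column becomes text(), and the check moves into code.
const NOTE_STATUSES = ['draft', 'published'] as const;
type NoteStatus = (typeof NOTE_STATUSES)[number];

function parseNoteStatus(raw: unknown): NoteStatus {
  if (typeof raw === 'string' && (NOTE_STATUSES as readonly string[]).includes(raw)) {
    return raw as NoteStatus;
  }
  throw new Error(`invalid note status: ${String(raw)}`);
}
```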

row-level-security policies do NOT carry over

briven enforces tenancy in function code, not via postgres RLS. this is a deliberate trade-off — RLS is brittle when you have to reason about which role a query is running as, and the connection-pool model briven uses runs every query as the project's own role rather than the end-user's.

port each CREATE POLICY to a guard inside your function:

-- supabase
CREATE POLICY "users see own notes"
ON notes FOR SELECT
USING (auth.uid() = author_id);

// briven/functions/getNotes.ts
import { query, type Ctx } from '@briven/cli/server';
export default query(async (ctx: Ctx) => {
  if (!ctx.auth) throw new Error('unauthorized');
  return await ctx.db('notes')
    .select()
    .where({ authorId: ctx.auth.userId });
});

this is more code per query but easier to reason about, easier to log, and easier to test. side benefit: no policy-recompile pause on a schema change.
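one way to get that testability (a sketch — requireAuth is a hypothetical helper, not part of @briven/cli/server) is to pull the guard into a pure function:

```typescript
// hypothetical helper: the guard from getNotes.ts extracted into a pure
// function, so it can be unit-tested without a database or request context.
type Auth = { userId: string } | null;

function requireAuth(auth: Auth): { userId: string } {
  if (!auth) throw new Error('unauthorized');
  return auth;
}

// usage inside a function body:
//   const { userId } = requireAuth(ctx.auth);
//   return ctx.db('notes').select().where({ authorId: userId });
```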

edge functions port

supabase edge functions are deno scripts; briven functions are also deno isolates (see /functions). the wire format is different (briven functions are invoked via POST /v1/projects/:id/functions/:name, not over a custom edge runtime), but the handler shape ports cleanly:

// supabase: supabase/functions/sendInvite/index.ts
import { serve } from 'https://deno.land/std@0.177.0/http/server.ts';

serve(async (req) => {
  const { email, role } = await req.json();
  // ... call mittera, write to db ...
  return new Response(JSON.stringify({ ok: true }));
});

// briven: briven/functions/sendInvite.ts
import { mutation, type Ctx } from '@briven/cli/server';
import { z } from 'zod';

const Args = z.object({ email: z.string().email(), role: z.string() });

export default mutation(async (ctx: Ctx, raw: unknown) => {
  const { email, role } = Args.parse(raw);
  // ... call mittera (signing secret in ctx.env), write to db ...
  return { ok: true };
});
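the wire format above can be sketched with the standard Request api (the api.briven.tech host is an assumption, and real calls need whatever auth headers /functions documents; only the path shape comes from this page):

```typescript
// sketch: build the POST /v1/projects/:id/functions/:name request for a
// ported function. host name is assumed; auth headers are omitted.
function functionRequest(projectId: string, name: string, args: unknown): Request {
  return new Request(
    `https://api.briven.tech/v1/projects/${projectId}/functions/${name}`,
    {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify(args),
    },
  );
}
```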

auth port

supabase auth → Better Auth. briven supports magic-link + email/password + GitHub OAuth out of the box. supabase's auth.users table doesn't exist on briven — there's a single users table with email + name + verifiedAt.

to preserve user IDs across the cut, set briven's users.id to supabase's auth.users.id (it's a uuid; briven's text primary key accepts it directly) during the data-import step.
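a sketch of that mapping during the import — the supabase side uses the standard auth.users columns, and mapping email_confirmed_at onto verifiedAt is an assumption about how you want "verified" to carry over:

```typescript
// sketch: one supabase auth.users row → one briven users row, keeping
// the uuid as the id so references to users survive the migration.
type SupabaseAuthUser = {
  id: string;                        // uuid
  email: string;
  email_confirmed_at: string | null;
};

type BrivenUser = {
  id: string;                        // text primary key; accepts the uuid as-is
  email: string;
  name: string | null;
  verifiedAt: string | null;
};

function toBrivenUser(u: SupabaseAuthUser, name: string | null = null): BrivenUser {
  return { id: u.id, email: u.email, name, verifiedAt: u.email_confirmed_at };
}
```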

storage port

supabase storage → MinIO (briven.tech) or any S3-compatible bucket (self-host). the path layout briven uses is p_<projectId>/<userPath> — your existing bucket can be cp'd wholesale into the new namespace. a briven storage CLI command lands with the public beta; until then, use the AWS or rclone CLIs against the briven minio endpoint.
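the key translation is mechanical; a sketch (the function name is hypothetical, the p_<projectId>/<userPath> layout is from above):

```typescript
// sketch: map an object key from your existing supabase bucket into
// briven's p_<projectId>/<userPath> namespace before (or while) copying.
function toBrivenKey(projectId: string, userPath: string): string {
  const clean = userPath.replace(/^\/+/, ''); // strip any leading slashes
  return `p_${projectId}/${clean}`;
}
```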

data dump → briven

supabase exposes a postgres connection on the dashboard. dump the public schema, restore into the briven project's schema:

# dump
pg_dump --schema=public --no-owner --no-privileges \
  --format=custom --file=supabase.dump \
  "$SUPABASE_DATABASE_URL"

# restore — connect with the dsn from `briven db shell-token`
pg_restore --no-owner --no-privileges \
  --schema=public --dbname="$BRIVEN_PROJECT_DSN" \
  supabase.dump

# briven's data plane creates a per-project schema (proj_<id>); the
# restore above still writes tables into "public", not proj_<id>.
# adjust search_path if your queries assume bare table names.
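for example (proj_abc123 is a placeholder for your project's proj_<id> schema):

```sql
-- session-level: make bare table names resolve against both schemas
SET search_path TO proj_abc123, public;
```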