migration
moving from convex, supabase, raw postgres, drizzle, prisma, firebase, mongodb, hasura, or nextauth to briven.
five principles
1. read before write
read this entire page once before running any command. migrations that go sideways almost always do so because someone skipped this step.
2. parallel-run, don't switch
for at least 48 hours, the old system and briven run side-by-side, on the same data, serving the same traffic. no cutover before the parallel-run window.
3. back up twice
two independent backups to two independent destinations before you touch anything. verify both before proceeding.
4. schema first, data second, functions third, traffic last
in that order, always. inverting the order leaves windows where something is half-migrated and a write goes to the wrong place.
5. one product at a time
never migrate two things in parallel. the cognitive load of one migration is enough.
the ten-step playbook
every migration follows these ten steps in order. specific commands vary per source — per-source detail pages cover those.
1. inventory the source
list every table, view, function, trigger, extension, env var, and external service the source project depends on. count rows per table. document the auth model in plain language. write it all into a migration-inventory.md so later steps don't surprise you.
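generating migration-inventory.md from a data structure keeps the inventory honest, since the same counts get reused for verification in steps 3 and 7. a minimal sketch; the shape and field names below are assumptions, not a briven format:

```typescript
// a minimal inventory shape so the markdown file is rendered from data
// instead of written ad hoc. all names here are assumptions.
type TableEntry = { name: string; rows: number; indexes: string[] };

interface MigrationInventory {
  tables: TableEntry[];
  functions: string[]; // server-side functions to port in step 5
  envVars: string[];   // secrets to carry over in step 6
  authModel: string;   // plain-language description
}

function renderInventory(inv: MigrationInventory): string {
  return [
    "# migration inventory",
    `auth model: ${inv.authModel}`,
    "",
    "## tables",
    ...inv.tables.map(
      (t) => `- ${t.name}: ${t.rows} rows, indexes: ${t.indexes.join(", ") || "none"}`,
    ),
    "",
    "## functions",
    ...inv.functions.map((f) => `- ${f}`),
    "",
    "## env vars",
    ...inv.envVars.map((v) => `- ${v}`),
  ].join("\n");
}
```

the per-table row counts in this structure become the expected values when you verify the restore in step 7.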
2. set up the briven project
install @briven/cli, run briven login and briven init, create the project in the dashboard, and note the project id and the admin api key. configure the region closest to your users.
3. back up the source twice
non-negotiable. two backups, two destinations, both restored to a temp database and row-counted to verify. don't proceed to step 4 until both verify.
4. port the schema
translate the source schema into briven/schema.ts using the briven schema dsl. table-by-table — don't try to do it in one pass. preserve foreign-key relationships and indexes.
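a table-by-table port goes faster with a mechanical first pass over column types. a sketch under stated assumptions: this page only confirms text() and .references() as dsl helpers (in the convex notes below); the other helper names are guesses to check against the real dsl.

```typescript
// mechanical column translation for the table-by-table pass.
// helper names other than text()/.references() are assumptions.
const typeMap: Record<string, string> = {
  text: "text()",
  varchar: "text()",
  integer: "integer()",
  bigint: "bigint()",
  boolean: "boolean()",
  jsonb: "jsonb()",
  "timestamp with time zone": "timestamp()",
};

function portColumn(name: string, sqlType: string, references?: string): string {
  const helper = typeMap[sqlType.toLowerCase()];
  if (!helper) throw new Error(`no mapping for ${sqlType}; port ${name} by hand`);
  return references
    ? `${name}: ${helper}.references(${JSON.stringify(references)})`
    : `${name}: ${helper}`;
}
```

anything the map doesn't know throws instead of guessing, which is the right failure mode for a migration: unknown types get ported by hand, and foreign keys are spelled out so relationships survive the move.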
5. port the functions
every server-side function (convex query/mutation, supabase edge function, prisma RPC) becomes a file under briven/functions/. wrap each with query() or mutation() from @briven/cli/server.
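the wrapper pattern can be sketched with a local stand-in, since the real query() / mutation() signatures from @briven/cli/server aren't shown on this page; treat the ctx shape and handler signature below as assumptions about the shape, not the api.

```typescript
// local stand-in for the real query() wrapper, for illustration only.
type Ctx = { db: { get: (table: string, id: string) => unknown } };
type Handler<A, R> = (ctx: Ctx, args: A) => R;

function query<A, R>(handler: Handler<A, R>): Handler<A, R> {
  // the real wrapper presumably adds validation/auth; this one just
  // preserves the shape so the ported file reads the same.
  return handler;
}

// a ported convex-style read, as it might sit in briven/functions/getUser.ts:
const getUser = query((ctx, args: { id: string }) => ctx.db.get("users", args.id));
```

the point of the exercise: each source function keeps its handler body and gains a wrapper, so the diff per file stays small and reviewable.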
6. set up env vars
briven env set <key> <value> for every secret the source project uses. encrypted at rest with the platform key. the runtime injects them into ctx.env at cold start.
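if the source project already has a .env file, the commands can be generated instead of retyped. the cli syntax is taken from this step; the parsing rules (skip comments and blank lines, split on the first =) are assumptions about a conventional .env layout.

```typescript
// turn an existing .env file's contents into `briven env set` commands.
function envToCommands(dotenv: string): string[] {
  return dotenv
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line && !line.startsWith("#") && line.includes("="))
    .map((line) => {
      const i = line.indexOf("="); // split on first = so values may contain =
      return `briven env set ${line.slice(0, i)} ${line.slice(i + 1)}`;
    });
}
```

review the generated commands before running them, since secrets passed on a command line can land in shell history.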
7. copy the data
pg_dump from the source, pg_restore into the briven data plane via the project's dsn (briven db shell-token issues a short-lived dsn). row-count every table — it must match step 1.
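the row-count check is worth scripting rather than eyeballing. a minimal sketch that diffs the step-1 inventory counts against post-restore counts and reports every mismatch; table names in the test are examples only.

```typescript
// compare step-1 counts against post-restore counts; report every mismatch.
function diffCounts(
  source: Record<string, number>,
  target: Record<string, number>,
): string[] {
  const problems: string[] = [];
  for (const [table, expected] of Object.entries(source)) {
    const actual = target[table];
    if (actual === undefined) problems.push(`${table}: missing in briven`);
    else if (actual !== expected) problems.push(`${table}: ${expected} -> ${actual}`);
  }
  return problems;
}
```

an empty result means the copy verified; anything else means stop and investigate before step 8.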
8. parallel-run for 48 hours
point a fraction of read traffic at briven, keep writes on the source. observe error rates, p50/p99 latency, and any function failures. this window is when migration bugs should surface, not after the cutover.
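one way to split reads is deterministic routing: the same user always hits the same backend, so any divergence is reproducible. a sketch; the hash and the 10% default are assumptions, since this step only says "a fraction of read traffic".

```typescript
// deterministic fractional routing for the parallel-run window.
// the same userId always routes the same way, so divergence reproduces.
function routeRead(userId: string, fraction = 0.1): "briven" | "source" {
  let h = 0;
  for (const ch of userId) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h % 1000 < fraction * 1000 ? "briven" : "source";
}
```

raising the fraction over the 48 hours (say 10% on day one, 50% on day two) keeps early blast radius small while still exercising briven under real load before the cutover.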
9. cut over writes
flip the dns or the client config. writes now go to briven; the source becomes read-only. keep the source running for at least another 7 days as a rollback target.
10. decommission
after 7 days of green metrics, archive the source database to cold storage and tear down the running source. keep the archive for 90 days minimum.
per-source guides
each source has its own path through the ten steps. these expand as the first migration of each kind clears.
convex · documented
union-of-literal fields → text() with app-level validation; v.id() → text().references(); _creationTime → explicit created_at.
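the app-level validation that replaces a convex union-of-literal field can be a small assertion at the write path. a sketch; the status values are examples, not from any real schema.

```typescript
// a convex v.union(v.literal(...)) field, now a plain text() column,
// gets its validation back at the application layer. values are examples.
const STATUSES = ["draft", "published", "archived"] as const;
type Status = (typeof STATUSES)[number];

function assertStatus(value: string): Status {
  if (!(STATUSES as readonly string[]).includes(value)) {
    throw new Error(`invalid status: ${value}`);
  }
  return value as Status;
}
```

calling this in every mutation that writes the column keeps the old schema guarantee, just enforced in code instead of the type system of the database.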
supabase · documented
row-level-security policies don't carry over — express them as guards in function code. edge functions port 1:1; storage objects copy over to MinIO.
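a policy like `user_id = auth.uid()` becomes a guard called at the top of each function that touches the table. a sketch; the ctx shape is an assumption, not the briven api.

```typescript
// the function-code equivalent of a supabase RLS ownership policy.
// ctx shape is an assumption for illustration.
type OwnedRow = { user_id: string };

function assertOwner(ctx: { userId: string | null }, row: OwnedRow): void {
  if (!ctx.userId || ctx.userId !== row.user_id) {
    throw new Error("forbidden");
  }
}
```

the discipline that replaces the database-enforced policy: every function touching the table calls the guard before reading or writing, because nothing else will enforce it now.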
raw postgres · documented · straightest path
schema.sql → briven/schema.ts via the dsl, pg_dump | pg_restore against the briven dsn, port handlers into briven/functions/.
drizzle · documented
schema.ts ports almost 1:1 (drizzle and briven both target postgres with TS-first schema definitions). swap the imports + adapt the column-builder calls; data carries via pg_dump.
prisma · documented
schema.prisma → briven/schema.ts via the dsl (we map the field decorators to briven helpers); pg_dump | pg_restore for data; PrismaClient calls become ctx.db chains.
firebase / firestore · documented · hardest path
document model → relational model is a manual remap. plan for an extended parallel-run window (2+ weeks) to catch shape mismatches.
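the shape of the remap: one firestore document with an embedded array becomes a parent row plus child rows linked by a foreign key firestore never needed. a sketch with example shapes, not a generated mapping.

```typescript
// one firestore order document -> one orders row + N order_items rows.
type OrderDoc = {
  id: string;
  customer: string;
  items: { sku: string; qty: number }[];
};

function flattenOrder(doc: OrderDoc) {
  const order = { id: doc.id, customer: doc.customer };
  const items = doc.items.map((item, i) => ({
    id: `${doc.id}-${i}`, // synthetic key; a real migration needs stable ids
    order_id: doc.id,     // the foreign key the document model kept implicit
    sku: item.sku,
    qty: item.qty,
  }));
  return { order, items };
}
```

the extended parallel-run window exists exactly for functions like this one: every embedded shape that appears in production but not in your sample documents shows up as a flatten failure during the run, not after cutover.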
mongodb · documented
collection → table with deliberate jsonb vs flatten decisions per embedded doc; ObjectId → text + ulid for new ids; mongoexport → custom transform → COPY for the data move.
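one useful fact for the ObjectId half of the move: the first 4 bytes of an ObjectId are a unix timestamp in seconds, so the old id can be kept verbatim as text while its embedded timestamp becomes an explicit created_at. ulid generation for new rows is omitted here (it needs a library or a longer sketch).

```typescript
// recover the creation timestamp embedded in a mongodb ObjectId:
// the first 8 hex chars (4 bytes) are unix seconds.
function objectIdCreatedAt(oid: string): Date {
  if (!/^[0-9a-f]{24}$/i.test(oid)) throw new Error(`not an ObjectId: ${oid}`);
  return new Date(parseInt(oid.slice(0, 8), 16) * 1000);
}
```

this keeps insertion-order semantics queryable in postgres even after new rows switch to ulids, since both old and new rows end up with a real created_at column.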
hasura · documented
postgres half ports for free; the work is the permissions port — every (role, table, action) triple from hasura metadata becomes a guard in function code.
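the permissions port can start as a direct transcription of the metadata triples into a lookup the function guards consult. a sketch; the roles, tables, and actions below are examples, and a real port would also carry hasura's column and row filters, which this lookup ignores.

```typescript
// hasura metadata's (role, table, action) permissions as a guard lookup.
// triples are examples; column/row filters are not modeled here.
type Action = "select" | "insert" | "update" | "delete";

const allowed = new Set<string>([
  "editor:posts:select",
  "editor:posts:update",
  "viewer:posts:select",
]);

function can(role: string, table: string, action: Action): boolean {
  return allowed.has(`${role}:${table}:${action}`);
}
```

starting from a transcription like this makes the port auditable: the set can be diffed line-by-line against the exported hasura metadata before any filter logic is layered on.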
nextauth / auth.js · documented
schema maps 1:1 (both target Better Auth's shape); provider port is trivial; the work is replacing getServerSession + useSession callsites and choosing preserve-ids vs preserve-sessions cutover.
when not to use this
- moving between briven projects — wait for briven export / import (private beta)
- moving a briven project between regions — file a support ticket
- migrating only data without schema changes — use pg_dump / pg_restore directly