Migrating Off Firestore
Migrating off Firestore is a frontend project, full stop. Firestore is the API: components call onSnapshot directly, mutations write directly, the SDK gives you reactivity for free. That's the meaty part of any Firestore-backed app, and it's exactly what has to be rewritten.
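That coupling is worth seeing concretely. The sketch below stubs a Firestore-style onSnapshot with an in-memory store (an illustration, not the real SDK) to show the pattern a migration has to unwind: the "component" reads, writes, and gets reactivity straight from the database API.

```typescript
// Illustrative stand-in for the Firestore SDK's subscription API, so the
// coupling pattern is visible without the real dependency.
type Unsubscribe = () => void;
type Listener<T> = (docs: T[]) => void;

const listeners = new Map<string, Set<Listener<any>>>();
const store = new Map<string, any[]>();

function onSnapshot<T>(collection: string, cb: Listener<T>): Unsubscribe {
  const set = listeners.get(collection) ?? new Set();
  set.add(cb);
  listeners.set(collection, set);
  cb((store.get(collection) ?? []) as T[]); // initial snapshot, SDK-style
  return () => set.delete(cb);
}

function addDoc<T>(collection: string, doc: T): void {
  const docs = store.get(collection) ?? [];
  docs.push(doc);
  store.set(collection, docs);
  listeners.get(collection)?.forEach((cb) => cb(docs));
}

// A "component" wired straight to the database: reads, writes, and
// reactivity all flow through the SDK. This is the code that gets rewritten.
interface Task { title: string; done: boolean }
let rendered: Task[] = [];
const unsubscribe = onSnapshot<Task>("tasks", (docs) => { rendered = docs; });
addDoc<Task>("tasks", { title: "ship it", done: false });
unsubscribe();
```

Every file that looks like the bottom half of this sketch is a file the migration has to touch.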
About a year of work for us. Dual writes for most of that year, a feature-flagged read flip, and no customer-visible incident.
What we built
Dual writes meant every write went to both Firestore and Postgres in parallel. Firestore stayed the source of truth for reads. A feature flag let us point reads at the new Postgres-backed API, rolled forward gradually. A new repository layer between the frontend and the database gave us a single place to swap implementations.
The repository layer was the precondition for everything else. You can't dual-write or shadow-read across two backends if the code is calling Firestore from a hundred different files. You first need a single seam. Then you can do interesting things behind it.
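A minimal sketch of that seam, with hypothetical names: one repository interface, in-memory stand-ins for the Firestore- and Postgres-backed implementations, a dual-write wrapper that mutates both stores in parallel, and a flag that routes reads.

```typescript
interface Task { id: string; title: string }

// The seam: every call site talks to this interface, never to a database.
interface TaskRepository {
  get(id: string): Promise<Task | undefined>;
  put(task: Task): Promise<void>;
}

// In-memory stand-in for both backend implementations; in the real system
// these would be Firestore- and Postgres-backed classes.
class InMemoryRepo implements TaskRepository {
  private tasks = new Map<string, Task>();
  async get(id: string) { return this.tasks.get(id); }
  async put(task: Task) { this.tasks.set(task.id, task); }
}

// Dual writes: every mutation goes to both stores in parallel. Reads are
// routed by a feature flag, so the flip is a config change, not a deploy.
class MigratingRepo implements TaskRepository {
  constructor(
    private firestore: TaskRepository,
    private postgres: TaskRepository,
    private readFromPostgres: () => boolean,
  ) {}
  get(id: string) {
    return this.readFromPostgres()
      ? this.postgres.get(id)
      : this.firestore.get(id);
  }
  async put(task: Task) {
    await Promise.all([this.firestore.put(task), this.postgres.put(task)]);
  }
}
```

Because the flag is evaluated per read, you can roll it forward gradually and roll it back instantly if the new backend misbehaves.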
The architecture I'd have argued for
The repository layer wasn't my pick. The decision was made before I started in this role; the interfaces had been drawn up, but the implementation hadn't been executed yet. I came into a half-finished architectural choice.
If I'd been the one choosing, I'd have moved the abstraction to the API boundary first. Build a network contract. Have two server-side implementations of it, one Firestore-backed and one Postgres-backed. The frontend calls "the API." Behind the network call, swap implementations when you're ready. The migration becomes a server-side concern, and the client doesn't care.
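The same swap, expressed at the network boundary. Everything below is a hypothetical shape, not our actual API: the wire contract is the abstraction, two server-side implementations satisfy it, and the client only ever sees request and response types.

```typescript
// The network contract: request/response shapes are all the client knows.
type GetTaskRequest = { id: string };
type GetTaskResponse = { id: string; title: string } | { error: "not_found" };

// In the real system this is an HTTP route handler; a plain async function
// stands in for it here.
type Handler = (req: GetTaskRequest) => Promise<GetTaskResponse>;

// Two server-side implementations of the same contract, backed here by
// in-memory maps standing in for Firestore and Postgres.
function makeHandler(db: Map<string, string>): Handler {
  return async (req) =>
    db.has(req.id)
      ? { id: req.id, title: db.get(req.id)! }
      : { error: "not_found" };
}

const firestoreDb = new Map([["1", "from firestore"]]);
const postgresDb = new Map([["1", "from postgres"]]);

// The swap is a server-side flag flip; client code never changes.
function selectHandler(usePostgres: boolean): Handler {
  return usePostgres ? makeHandler(postgresDb) : makeHandler(firestoreDb);
}
```

The client calls "the API" (in practice a fetch against the route, not a function call), and the migration happens entirely behind that boundary.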
The reason that path matters is what falls out of it. The API surface becomes something you can sell as a product, not just an internal contract. It gives you the same isolation a repository layer was meant to provide, but at the boundary where the database actually stops mattering. It lets you re-architect frontend data flows around standard request patterns instead of database-shaped ones, which is where the next round of frontend performance work would have come from. A/B testing becomes a server-side flag flip rather than a client-side rebuild. A third backend later, an S3-only deployment, anything, ships without touching the frontend or the CLI.
What we have instead is a TypeScript repository layer that abstracts Firestore in the frontend, while the frontend stays coupled to a magic layer that streams data in, owns the entity model, and mediates every mutation. The shape of the database still leaks into the client. The repository layer is a TypeScript veneer over that leak; the frontend still streams, mutates, and tracks entities the way Firestore taught it to.
I owned the frontend side of the work, and I pushed for the API-first version where I could. The choice changed how much frontend work the migration cost, and how much future flexibility we lost in the bargain. Both shapes can work. The choice should be deliberate, made before any code is written, because retrofitting between them is brutal.
What it actually cost
About a quarter to introduce the repository layer and migrate the existing call sites onto it. The abstraction work was call-by-call. There was no clever way to do it in one pass; you go through every file that talks to Firestore and route it through the seam. That's just labour.
About three more quarters to test it, cover all the Firestore semantics, build the Postgres-backed implementation, run the dual-writes long enough to trust them, and roll the read flag forward.
Where the year went
One quarter for the architecture. Three quarters for the migration. The architectural work was the small part. The long tail was feature parity: every Firestore behaviour, every offline edge, every place a snapshot listener was leaning on a Firestore detail that wasn't documented anywhere except in the code that ran on top of it.
Real-time without WebSockets
Firestore's real-time updates needed a replacement on the new side. The default answer would be WebSockets. We picked Server-Sent Events instead.
WebSockets get blocked routinely in high-network-security enterprise environments. A meaningful share of our customer base lives behind that kind of corporate proxy. SSE is a long-lived HTTP request, which usually passes whatever firewall and proxy layers a customer has bolted on. Polling sat behind it as a fallback for the cases where even SSE didn't survive the journey.
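Part of why SSE survives those proxies is how simple it is on the wire: an ordinary HTTP response held open with `Content-Type: text/event-stream`, writing text frames. A minimal frame encoder (illustrative, not our production code):

```typescript
// Encode one Server-Sent Events frame per the text/event-stream format.
function sseFrame(event: string, data: string, id?: string): string {
  const lines: string[] = [];
  if (id !== undefined) lines.push(`id: ${id}`); // lets clients resume via Last-Event-ID
  lines.push(`event: ${event}`);
  // Multi-line payloads become repeated `data:` lines per the spec.
  for (const line of data.split("\n")) lines.push(`data: ${line}`);
  return lines.join("\n") + "\n\n"; // blank line terminates the frame
}
```

On the client side, browsers consume this with the built-in EventSource API, which also handles reconnection; the polling fallback only has to cover the environments where even a long-lived GET gets killed.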
A WebSockets-only build would have shipped a worse product to the customers who pay us the most.
In retrospect
The customer-facing change was nothing. That was the goal. The flag flipped, the data was already in both stores, the real-time updates kept flowing, and the app continued to work. Customers noticing nothing was the bar.
A year of work, a non-trivial rewrite of the data layer, and the only signal that anything happened was that the database the app was talking to was different. Many engineering teams might find that result unsatisfying, but I disagree: I think it's a beautiful sign that we nailed it.
Key takeaway
The abstraction, the interface, the separation of concerns, they matter more than the new backend. Where the abstraction lives is the whole game. Get it right and a year of feature parity work happens behind a clean swappable boundary. Get it wrong and you spend the next decade with a half-finished abstraction in your codebase that everyone has to navigate every time they touch the data layer. Most "the migration is mostly done" announcements are misleading; the architecture being done means roughly 25% of the work is done.
I'd advocate for the API-first version every time, knowing it's harder to start. If we'd gone that way, I think delivery would have taken an extra quarter or two: building a wrapper service around Firebase and proving it 100% functional is the same scale of work as the migration we actually did, and we'd still have had to swap that service's internals over to the new backend and live through the same teething period. Building a service in front of your old database for a migration you haven't done yet feels like over-investment. That being said, the frontend stops growing database-shaped concerns, the CLI doesn't grow them either when you build one, a future backend swap becomes a server-side change, and the API surface itself becomes a product you can sell in addition to the contract you have to maintain.