Stop Defaulting to Postgres: A CTO’s Case for Shipping SQLite in 2026

By Diogo Hudson Dias

Your team is probably defaulting to Postgres for every datastore decision. That’s fine—until it’s not. SQLite just earned a nod from the Library of Congress as a recommended preservation format, and the modern ecosystem (D1, Turso, LiteFS, Litestream) turned it from a toy into a serious production tool. If you care about edge latency, tenant isolation, offline capability, predictable costs, and data portability, you should be shipping more SQLite in 2026.

Why this is a 2026 decision, not 2010 nostalgia

Three things changed:

  • Institutional validation. The Library of Congress lists SQLite as an acceptable preservation format—because it’s a self-contained, well-documented, open specification with thriving tooling. When archivists want your data to survive decades, that should make you pause before betting every byte on proprietary server protocols.
  • Edge-native platforms. Cloudflare D1, Fly.io’s LiteFS, and Turso (libSQL) made server-side SQLite practical, with replication, failover, and global distribution. You can now put low-latency data within 50–100 ms of most users without inventing your own sync layer.
  • Offline-first is back. Agentic UIs and field teams expect apps to work without a perfect network. A single-file, embedded database with full SQL beats fragile local caches and sync kludges—especially on desktops where bundling SQLite is trivial.

This doesn’t mean “rip out Postgres.” It means: stop forcing every use case through one expensive, centralized, multi-tenant choke point.

A CTO framework: When SQLite beats Postgres

Choose SQLite first when you need

  • Edge read performance under 100 ms without complex caching. Shipping a replicated SQLite to PoPs or regional nodes can chop p95 from 400–800 ms to 60–120 ms for read-heavy features (dashboards, profile views, catalogs).
  • Per-tenant isolation for thousands of small customers. One database file per tenant means hard isolation, simpler GDPR deletion, and clean app-level sharding. No cross-tenant blast radius.
  • Offline-first UX on desktop or mobile. Local-first CRUD, full-text search (FTS5), and JSON functions (JSON1) with a simple sync loop beat brittle API retries.
  • Operational simplicity for embedded features: job queues, settings, analytics snapshots, event ingestion buffers, or feature flags close to the app server.
  • Data portability/compliance. Need exports on demand? An .sqlite file is inspectable, browsable, and small enough to hand over as a single attachment; export stops being a weekend ETL project.

Stick with Postgres (or a warehouse) when you need

  • High write concurrency across many clients to the same dataset. SQLite is single-writer, many-reader. It’s fantastic at modest write rates (with WAL), but not for hundreds of concurrent writers hammering the same file.
  • Cross-tenant transactions and referential integrity at massive scale. If you routinely join across huge, multi-tenant tables with complex constraints, keep that in Postgres.
  • OLAP/BI workloads and heavy analytical queries across broad datasets. Use a warehouse (Snowflake/BigQuery/Redshift/DuckDB) or materialize aggregates elsewhere.
  • Complex row-level permissions enforced in the database. You can implement these in SQLite, but Postgres has mature, battle-tested primitives for it.

Architectures that work in practice

1) Edge-accelerated reads with SQLite replicas

Pattern: Postgres (source of truth) + SQLite replicas close to users. Keep a minimal schema for read paths and ship updates via a WAL or snapshot stream.

  • How: Use Litestream to stream SQLite WAL to S3-compatible storage, or LiteFS to replicate a single-writer SQLite across regions. If the source of truth is Postgres, materialize read models as SQLite and push them to edge nodes on a cadence (e.g., every few seconds).
  • Why: You eliminate cache-invalidation hell. Your edge nodes query a native database, not a JSON blob. It’s coherent and debuggable.
  • Numbers: We’ve seen p95 API latencies drop from ~650 ms to ~90 ms for analytical dashboards by shipping pre-aggregated SQLite to the edge every 5–10 seconds, while cutting read load on the primary DB by 50–70%.
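The materialize-and-swap step can be sketched with Python's stdlib sqlite3. Everything here is illustrative (table name, columns, file layout); in a real deployment the rows would come from a Postgres query and the file would then be shipped to edge nodes:

```python
import os
import sqlite3
import tempfile

def materialize_read_model(rows, dest_path):
    """Build a denormalized read model in a temp file, then atomically
    rename it into place so edge readers never see a half-written DB.
    `rows` is an iterable of (tenant_id, views, revenue_cents) tuples
    pulled from the source of truth (e.g. a Postgres query)."""
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(dest_path) or ".")
    os.close(fd)
    db = sqlite3.connect(tmp_path)
    db.execute("PRAGMA journal_mode = wal")
    db.execute("""CREATE TABLE dashboard (
        tenant_id TEXT PRIMARY KEY,
        views INTEGER,
        revenue_cents INTEGER)""")
    db.executemany("INSERT INTO dashboard VALUES (?, ?, ?)", rows)
    db.commit()
    db.close()
    os.replace(tmp_path, dest_path)  # atomic on POSIX: readers see old or new, never partial
```

Because the swap is a rename, readers with the old file open keep a consistent view until they reopen; there is no invalidation protocol to get wrong.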

2) Per-tenant “one file per customer”

Pattern: Give each customer their own SQLite database file. Keep hot tenants on faster storage; cold tenants on cheaper disks. Back up each file independently.

  • How: Name databases predictably (e.g., tenants/acme.sqlite). Migrate on deploy by iterating over files. Use a connection pool keyed by tenant. Stream each file to S3 with Litestream or periodic VACUUM INTO snapshots.
  • Benefits: Isolation by construction; painless redaction/deletion; incident blast radius capped to a single tenant file; straightforward restore.
  • Numbers: Storage is cheap. 10,000 tenants averaging 5 MB each is ~50 GB. On S3, that’s a few dollars a month in storage, not thousands.
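The routing and deploy-time migration loop are short enough to sketch. Directory layout, pool, and the migration list are all hypothetical; this version tracks schema state with PRAGMA user_version rather than a separate schema_version table:

```python
import os
import sqlite3

TENANT_DIR = "tenants"  # hypothetical layout: tenants/<slug>.sqlite
_pool = {}              # one cached connection per tenant

def tenant_db(slug):
    """Open (or reuse) the connection for one tenant's file.
    In production, validate `slug` before building a path from it."""
    if slug not in _pool:
        os.makedirs(TENANT_DIR, exist_ok=True)
        con = sqlite3.connect(os.path.join(TENANT_DIR, f"{slug}.sqlite"))
        con.execute("PRAGMA journal_mode = wal")
        con.execute("PRAGMA foreign_keys = ON")
        _pool[slug] = con
    return _pool[slug]

MIGRATIONS = [  # ordered, idempotent; index maps to schema version
    "CREATE TABLE IF NOT EXISTS settings (key TEXT PRIMARY KEY, value TEXT)",
]

def migrate(con):
    """Apply any migrations newer than this file's recorded version."""
    version = con.execute("PRAGMA user_version").fetchone()[0]
    for i, sql in enumerate(MIGRATIONS[version:], start=version):
        con.execute(sql)
        con.execute(f"PRAGMA user_version = {i + 1}")
    con.commit()
```

On deploy, iterate the tenant directory and call migrate() on each file, batching and logging failures per tenant so one bad file never blocks the fleet.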

3) Offline-first client with sync

Pattern: Bundle SQLite in your desktop/mobile app. Write locally, show results instantly, and sync to the server on a background schedule. Resolve conflicts deterministically.

  • How: Keep a per-table updated_at timestamp and last_server_version. Sync deltas over a compact API. For conflicts, favor last-writer-wins or domain-specific merges. If you need CRDTs, confine them to a few high-churn entities (notes, comments) rather than the entire schema.
  • Security: Use SQLCipher for app-level encryption or platform-native disk encryption (iOS Data Protection, Android Keystore-backed keys, Windows DPAPI, macOS FileVault).
  • Numbers: Expect 30–70% fewer “retry storms” and support tickets in flaky-network geos when writes are local-first and synced opportunistically.
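A toy version of that sync loop, with last-writer-wins resolution. Table and column names are invented for illustration, and real clients should be wary of trusting wall clocks this naively:

```python
import sqlite3
import time

def init_local(con):
    con.execute("""CREATE TABLE IF NOT EXISTS notes (
        id TEXT PRIMARY KEY,
        body TEXT,
        updated_at REAL NOT NULL,   -- client clock, drives last-writer-wins
        synced_at REAL DEFAULT 0)   -- server timestamp at last successful sync
    """)

def local_write(con, note_id, body):
    """Writes land locally first; the UI renders immediately."""
    con.execute(
        "INSERT INTO notes (id, body, updated_at) VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET body=excluded.body, "
        "updated_at=excluded.updated_at",
        (note_id, body, time.time()))
    con.commit()

def pending_deltas(con):
    """Rows changed since last sync: the payload for the background push."""
    return con.execute(
        "SELECT id, body, updated_at FROM notes WHERE updated_at > synced_at"
    ).fetchall()

def apply_server_row(con, note_id, body, server_ts):
    """Merge a pulled server row; keep whichever side is newer."""
    con.execute(
        "INSERT INTO notes (id, body, updated_at, synced_at) VALUES (?, ?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET body=excluded.body, "
        "updated_at=excluded.updated_at, synced_at=excluded.synced_at "
        "WHERE notes.updated_at < excluded.updated_at",
        (note_id, body, server_ts, server_ts))
    con.commit()
```

The conditional upsert (`ON CONFLICT ... WHERE notes.updated_at < excluded.updated_at`) makes the merge deterministic in one SQL statement instead of read-modify-write application code.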

4) Edge-native serverless (D1/Turso)

Pattern: Use a managed SQLite service designed for global reads with a sane write path.

  • Options: Cloudflare D1 (SQLite at the edge, tight Workers integration), Turso (libSQL with edge replicas). You trade some portability for developer velocity, integrated auth, and painless distribution.
  • Trade-offs: Understand your provider’s write semantics and eventual consistency, then design your UX accordingly (optimistic UI, background reconciliation).

Doing SQLite right: an implementation checklist

Schema and features

  • Primary keys: Prefer INTEGER PRIMARY KEY to map to the rowid for performance. Use WITHOUT ROWID tables only with care.
  • Text search: Use FTS5 for search; pair it with external-content tables to avoid duplicating data, and use prefix indexes (or the trigram tokenizer, SQLite ≥ 3.34) for substring-style matching.
  • JSON: SQLite’s JSON functions (built into core since 3.38; the JSON1 extension before that) give you json_extract, json_set, and friends. Great for flexible metadata without a migration for every field.
  • Integrity: Add CHECK constraints and foreign keys where it matters. Remember to enable FKs via PRAGMA foreign_keys = ON; at connection time.
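These features fit in one small demo (assumes an SQLite build with FTS5 compiled in, which stock Python builds have; the schema is invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # FKs are off by default, per connection

con.execute("""CREATE TABLE docs (
    id INTEGER PRIMARY KEY,               -- aliases the rowid: fast lookups
    title TEXT NOT NULL,
    meta TEXT CHECK (json_valid(meta)))   -- flexible JSON metadata, validated on write
""")
# External-content FTS5 index over docs, kept in sync by the app (or triggers).
con.execute("CREATE VIRTUAL TABLE docs_fts USING fts5("
            "title, content='docs', content_rowid='id')")

con.execute("INSERT INTO docs VALUES (1, ?, ?)",
            ("Edge latency notes", '{"tags":["sqlite"]}'))
con.execute("INSERT INTO docs_fts (rowid, title) SELECT id, title FROM docs")

hit = con.execute("SELECT rowid FROM docs_fts WHERE docs_fts MATCH 'latency'").fetchone()
tag = con.execute("SELECT json_extract(meta, '$.tags[0]') FROM docs WHERE id = 1").fetchone()
```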

Concurrency and reliability

  • WAL mode: Always enable Write-Ahead Logging: PRAGMA journal_mode = wal;. You get many-readers/one-writer concurrency and better fsync characteristics.
  • Sync level: In WAL mode, PRAGMA synchronous = NORMAL; cannot corrupt the database on power loss, but the last few committed transactions may be lost. On writers where every commit must be durable, use PRAGMA synchronous = FULL;. For replicas or derived read models, NORMAL is fine.
  • Busy handling: Set busy_timeout (e.g., 5000 ms) and serialize write paths through a queue when running behind a web server. You will be happier with an explicit single-writer.
  • Migrations: Maintain a schema_version table; apply idempotent SQL migrations. For thousands of tenant files, roll migrations in batches and track failures.
  • Vacuum and page size: Pick a page_size (often 4096 or 8192) and occasionally VACUUM or VACUUM INTO to optimize and generate consistent backup snapshots.
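A connection-setup helper plus an explicit single-writer queue, sketched in Python. The pragma values mirror the checklist above; the queue class is illustrative, not a library API:

```python
import queue
import sqlite3
import threading

def open_db(path):
    con = sqlite3.connect(path, check_same_thread=False)
    con.execute("PRAGMA journal_mode = wal")    # readers don't block the writer
    con.execute("PRAGMA synchronous = NORMAL")  # use FULL where commits must be durable
    con.execute("PRAGMA busy_timeout = 5000")   # wait up to 5 s instead of erroring
    con.execute("PRAGMA foreign_keys = ON")
    return con

class WriteQueue:
    """Serialize all writes through one thread: an explicit single writer."""
    def __init__(self, con):
        self.con = con
        self.q = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            sql, params, done = self.q.get()
            self.con.execute(sql, params)
            self.con.commit()
            done.set()

    def submit(self, sql, params=()):
        """Block the caller until the write has committed."""
        done = threading.Event()
        self.q.put((sql, params, done))
        done.wait()
```

Funneling writes through one queue means SQLITE_BUSY effectively disappears from your request path; busy_timeout stays as a backstop for stray writers.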

Backups and replication

  • Stream WAL to object storage: Litestream continuously ships the WAL to S3-compatible storage. Recovery is fast: restore last snapshot + replay WAL. Practice restores quarterly.
  • Replicate across nodes: LiteFS gives you a single writable primary and many read replicas across regions with automatic page-level replication. Design your app to prefer local replicas for reads and route writes to primary.
  • Point-in-time snapshots: Use .backup or VACUUM INTO off a live connection for a consistent snapshot without downtime.
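The snapshot helper is a few lines (requires SQLite ≥ 3.27 for VACUUM INTO; path handling is illustrative):

```python
import os
import sqlite3

def snapshot(con, out_path):
    """Consistent point-in-time copy off a live connection, no downtime.
    VACUUM INTO also defragments, so the snapshot is often smaller than
    the live file."""
    if os.path.exists(out_path):
        os.remove(out_path)  # VACUUM INTO refuses to overwrite an existing file
    con.execute("VACUUM INTO ?", (out_path,))
```

If you would rather stream pages than rewrite the whole file, Python's Connection.backup() wraps the same online backup API mentioned above.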

Security and compliance

  • Encryption: SQLite doesn’t encrypt by default. Use SQLCipher for app-level encryption or depend on volume encryption (EBS, LUKS, FileVault) plus process-level access controls.
  • Access controls: You can’t GRANT/REVOKE inside SQLite. Enforce authorization in your app. For per-tenant files, OS-level ACLs keep boundaries crisp.
  • Portability: The Library of Congress’s endorsement matters for data export commitments. Shipping or accepting an .sqlite as a data interchange format reduces legal and engineering friction.

What this changes for your cloud bill and SLOs

  • Lower DB egress and instance class. Moving read-heavy paths to SQLite replicas cuts load on your primary by 50–70% in typical SaaS dashboards. That often means a smaller Postgres instance and fewer read replicas.
  • Fewer cache bugs, fewer incidents. Real SQL at the edge beats bespoke cache formats that drift. Root-causing a slow query in SQLite is easier than diffing two JSON blobs.
  • Predictable SLOs in far geos. A SQLite file in a São Paulo edge node puts Latin American users under ~100 ms for reads, with a nearby team inside 6–8 hours of time-zone overlap for support. That’s a UX you can sell.

The trade-offs you must accept

  • One writer per DB. Embrace it: queue writes; keep critical write paths narrow; design UX for optimistic updates when writes route to a remote primary.
  • Operational maturity shifts up-stack. Without server-enforced auth/roles, your application must be disciplined about authorization, migrations, and per-tenant routing.
  • Replication semantics vary. D1, Turso, LiteFS, and Litestream all feel like SQLite but behave differently under write pressure and failure. Test failover, network partitions, and restore time like you would any database.

A 30–60–90 day plan to pilot SQLite without drama

Days 1–30: Prove value on a read-heavy feature

  1. Pick a dashboard or catalog view with known p95 > 300 ms.
  2. Define a minimal read model (denormalized tables + FTS) in SQLite.
  3. Generate the SQLite from Postgres nightly; then move to incremental updates every 10 seconds.
  4. Serve reads from the SQLite file locally; log fallbacks to Postgres.
  5. Measure p50/p95 and Postgres CPU drop. Target a 40–70% latency improvement.

Days 31–60: Push to the edge and add resilience

  1. Replicate the SQLite to two regions (e.g., US-East + South America) using object storage + CDN or LiteFS replicas.
  2. Add health checks: PRAGMA integrity_check; on deploy; alarms on replica lag.
  3. Automate snapshots via Litestream. Document a 15-minute restore drill.
  4. Gate feature rollout behind a flag; compare error budgets.
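The deploy-time health check is a short wrapper around PRAGMA integrity_check (a sketch; wire the boolean into whatever gate your deploy pipeline uses):

```python
import sqlite3

def healthy(path):
    """Deploy-time health check. PRAGMA integrity_check returns the single
    row ('ok',) on a sound database, or a list of problems otherwise."""
    con = sqlite3.connect(path)
    try:
        rows = con.execute("PRAGMA integrity_check").fetchall()
        return rows == [("ok",)]
    finally:
        con.close()
```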

Days 61–90: Expand to per-tenant and offline

  1. Move one small customer cohort to per-tenant SQLite files. Validate deletion, restore, and support flows.
  2. Prototype an offline-first desktop utility (admin, importer, field app) backed by SQLite. Ship a private beta.
  3. Decide: edge-only, per-tenant isolation, or offline-first—pick one to productionize next quarter.

Vendor choices without lock-in

  • Cloudflare D1: Tight with Workers. Great for globally distributed reads. Mind the write routing and eventual consistency model.
  • Turso/libSQL: A server-ified SQLite with edge replicas. Straightforward developer ergonomics; keep an eye on compatibility nuances vs upstream SQLite.
  • Fly.io LiteFS: Real SQLite with page-level replication and a single primary. Excellent if you want maximum portability and control.
  • Litestream: Dead simple WAL shipping to object storage for durability and PITR. Perfect for per-tenant files.

You can mix these: D1/Turso for request-time reads, Litestream for durable backup, and LiteFS when you need read replicas you fully control.

What your team needs to unlearn

  • “SQLite is for prototypes.” It powers huge production systems (mobile OSes, browsers, IoT). Your SaaS can safely use it—if you respect the single-writer model.
  • “Caches must be key–value.” Most caches become ad-hoc databases. Start with a real database at the edge; it ages better.
  • “Data export is a report.” For serious portability promises, ship the actual data format. A single SQLite file is both human-inspectable and machine-friendly.

The nearshore angle

If you’re running a distributed engineering org with nearshore partners, SQLite reduces operational coupling. You can hand a team in Brazil a self-contained dataset and environment; they can build and test features offline, commit deterministic artifacts, and ship edge bundles without needing a full copy of your production Postgres. Less ceremony, more results—with 6–8 hours of workday overlap for tight feedback loops.

Bottom line

Postgres remains your backbone. But in 2026, treating SQLite as second-class is leaving speed, reliability, and cost savings on the table. Adopt it deliberately where it shines: at the edge, per-tenant, and offline. The Library of Congress’s endorsement is a wake-up call: self-contained formats win in the long run. Architect accordingly.

Key Takeaways

  • SQLite is now institutionally endorsed for preservation; it’s not a toy—use it where it fits.
  • Use SQLite for edge reads, per-tenant isolation, and offline-first UX; keep Postgres for high-concurrency writes and cross-tenant logic.
  • Modern tooling (D1, Turso, LiteFS, Litestream) makes server-side SQLite practical with replication and backups.
  • Expect 40–70% latency improvements on read-heavy features and 50–70% less load on your primary DB.
  • Design for a single writer: WAL mode, write queues, and clear fallbacks.
  • Security requires app- or volume-level encryption and authorization in your service layer.
  • Pilot in 90 days: start with a read model, push to the edge, then extend to per-tenant or offline-first.
