Traditional databases bundle compute and storage together: a monolithic architecture that means rigid provisioning, proprietary formats, and constant specialist oversight.
Lakebases are the third generation:
- Gen 1 (Monolith): MySQL, Postgres, Oracle — compute and storage on one machine, proprietary formats
- Gen 2 (Proprietary Loose Coupling): Aurora, Oracle Exadata — physically separated storage but still proprietary formats and single-engine lock-in
- Gen 3 (Lakebase): Data lives in open formats on cloud object stores (e.g., S3). Compute is serverless Postgres that scales to zero. Instant branching, cloning, and recovery.
This matters for AI because agent workloads are fundamentally different from traditional apps: they need to spin up many instances, experiment freely, branch databases like git branches, and pay only for what they use. Lakebases make the database match the agent’s workflow instead of the other way around.
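The "branch databases like git branches" claim rests on copy-on-write: a new branch is an O(1) pointer to its parent, and only pages the branch modifies are stored locally. A minimal sketch of that idea, in Python, with illustrative names that do not correspond to any vendor's API:

```python
# Copy-on-write branching sketch. A Branch starts as a zero-copy view
# of its parent and stores only the pages it has itself modified.
class Branch:
    def __init__(self, parent=None):
        self.parent = parent   # parent branch (None for the root)
        self.pages = {}        # only locally modified pages live here

    def read(self, key):
        # Walk up the ancestry until some branch has written this page.
        node = self
        while node is not None:
            if key in node.pages:
                return node.pages[key]
            node = node.parent
        raise KeyError(key)

    def write(self, key, value):
        # Writes land only in this branch; ancestors are untouched.
        self.pages[key] = value

    def branch(self):
        # "Instant" branching: constant time, no data copied.
        return Branch(parent=self)


main = Branch()
main.write("users/1", {"name": "Ada"})

experiment = main.branch()  # an agent forks the database to experiment
experiment.write("users/1", {"name": "Ada Lovelace"})

print(main.read("users/1"))        # parent is unchanged
print(experiment.read("users/1"))  # branch sees its own write
```

This is why an agent can spin up many throwaway branches cheaply: creating one costs nothing until it diverges, and discarding one frees only the pages it touched.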
This is the infrastructure-level counterpart to Agents eat your system of record (the rigid app was the constraint, not the schema): where that insight argues agents eliminate the need for rigid schemas, lakebases eliminate the operational rigidity of the database itself. Together, both the schema and the infrastructure become elastic.
The Postgres compatibility connects to Boring tech wins for AI-native startups (a simpler stack means faster AI-assisted shipping): you get the familiar SQL interface with cloud-native elasticity underneath. And the shift from proprietary lock-in to open formats echoes the build-for-obsolescence principle: vendor-specific database ops are scaffolding that an open architecture sheds.