Stop buying databases you don't need


Someone at RudderStack chose Postgres over Kafka to handle 100,000 events per second. The Reddit thread had engineers nodding along. One comment said "Postgres is the best or 2nd-best choice for almost everything". Another added that most specialty databases are great at one thing and bad at everything else.
This isn't just one team's hot take. Supabase ran benchmarks comparing pgvector to Pinecone's premium tier. Postgres won. By a lot.
28x lower latency. 16x higher throughput. At 25% the cost.
You've probably seen the "just use Postgres for everything" posts floating around. Maybe you rolled your eyes. I did. Then I looked at what people were actually doing with it. Turns out they're not just talking. They're replacing entire stacks.
What people are actually replacing
A developer wrote about ditching MongoDB for a single Postgres table with JSONB. No jokes. No vendor pitch. Just a migration script and some relief.
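The shape of that migration is easy to sketch. Assuming a hypothetical documents table (the post doesn't share its schema), the MongoDB-style pieces map onto JSONB plus a GIN index:

```sql
-- One table instead of a document store. Table and field
-- names here are illustrative, not from the original post.
CREATE TABLE documents (
    id  bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    doc jsonb NOT NULL
);

-- A GIN index makes containment queries fast.
CREATE INDEX documents_doc_idx ON documents USING gin (doc);

INSERT INTO documents (doc)
VALUES ('{"name": "Ada", "status": "active"}');

-- Mongo-style lookup: every document where status = "active".
SELECT doc FROM documents WHERE doc @> '{"status": "active"}';
```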
The Amazing CTO blog laid it out plainly. Use Postgres for caching instead of Redis. For message queues instead of Kafka. For document storage instead of MongoDB. For vector search instead of Pinecone.
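Caching is the claim that sounds most far-fetched, so here's a minimal sketch of the usual recipe: an UNLOGGED table, which skips write-ahead logging (faster, but not crash-safe, which is fine for a cache), plus a TTL column. The key and value shown are made up:

```sql
CREATE UNLOGGED TABLE cache (
    key        text PRIMARY KEY,
    value      jsonb NOT NULL,
    expires_at timestamptz NOT NULL
);

-- Upsert an entry with a five-minute TTL.
INSERT INTO cache (key, value, expires_at)
VALUES ('user:42', '{"name": "Ada"}', now() + interval '5 minutes')
ON CONFLICT (key) DO UPDATE
SET value      = EXCLUDED.value,
    expires_at = EXCLUDED.expires_at;

-- Read, ignoring expired rows (a periodic DELETE evicts them).
SELECT value FROM cache
WHERE key = 'user:42' AND expires_at > now();
```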
Here's what surprised me. These aren't toy projects. Real companies doing real scale. The RudderStack team needed SQL-like querying to debug their event streams. They needed to modify metadata on the fly. Kafka couldn't do that.
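That debuggability is concrete. Assuming a hypothetical events table with JSONB metadata (not RudderStack's actual schema), inspecting, patching, and retrying events is plain SQL:

```sql
CREATE TABLE events (
    id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    status     text        NOT NULL DEFAULT 'pending',
    metadata   jsonb       NOT NULL DEFAULT '{}',
    created_at timestamptz NOT NULL DEFAULT now()
);

-- Look inside the queue. Try doing this to a Kafka topic.
SELECT id, metadata FROM events
WHERE status = 'failed'
ORDER BY created_at DESC
LIMIT 20;

-- Modify metadata on the fly and requeue the failures.
UPDATE events
SET metadata = jsonb_set(metadata, '{retry_count}', '0'),
    status   = 'pending'
WHERE status = 'failed';

-- Concurrent workers can claim jobs without double-processing.
SELECT id FROM events
WHERE status = 'pending'
ORDER BY id
FOR UPDATE SKIP LOCKED
LIMIT 100;
```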
The numbers that made people switch
I used to think vector databases were a must-have for AI apps. Then Supabase published their comparison.
They tested pgvector against Pinecone's fastest offering. The p2 index. Pinecone's premium option that trades accuracy for speed.
Postgres still won.
And it wasn't close. To match the Postgres setup's performance, Pinecone would need 12 to 13 p2 pods. That's $2,000 per month. The Postgres instance cost $410.
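For context, vector search in Postgres looks like ordinary SQL. A minimal sketch with toy 3-dimensional vectors; real embeddings run to hundreds or thousands of dimensions, and an approximate index (ivfflat or hnsw) on the column is what keeps queries fast at benchmark scale:

```sql
CREATE EXTENSION IF NOT EXISTS vector;  -- pgvector

CREATE TABLE items (
    id        bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    embedding vector(3) NOT NULL
);

INSERT INTO items (embedding)
VALUES ('[0.1, 0.2, 0.3]'), ('[0.9, 0.1, 0.4]');

-- Five nearest neighbours by cosine distance (the <=> operator).
SELECT id FROM items
ORDER BY embedding <=> '[0.1, 0.2, 0.25]'
LIMIT 5;
```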
One Redditor said Postgres can replace many backend technologies for apps with up to millions of users. Another pointed out that most of the "specialized" databases just add operational overhead and more points of failure.
The part where this gets messy
But here's the thing no one mentions in the hype posts. Extensions.
You want pgvector for AI search? You need to install it. pg_cron for scheduling? Install it. pgcrypto? Same.
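Each one is, technically, a single statement:

```sql
-- Trivial if you're superuser and the extension ships with
-- your Postgres. Not trivial otherwise.
CREATE EXTENSION IF NOT EXISTS vector;    -- pgvector, AI search
CREATE EXTENSION IF NOT EXISTS pg_cron;   -- scheduling (also needs
                                          -- shared_preload_libraries,
                                          -- i.e. a server restart)
CREATE EXTENSION IF NOT EXISTS pgcrypto;  -- hashing and encryption
```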
That sounds easy until you work at a company where you're not the database admin.
Someone on Dev.to put it perfectly. No installing extensions. No adjusting schemas without a ticket. Sometimes you don't even know which region your database is in.
"Suddenly your one tool for everything dream feels like playing Elden Ring with a potato for a GPU".
When Postgres actually struggles
Most tutorials won't tell you this. Postgres isn't great for real-time analytics. If you need refresh rates measured in minutes, not hours, you're going to have problems.
A single massive dataset with billions of rows and constant joins? Queries can take hours. Partitioning helps but adds complexity.
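For the record, declarative range partitioning looks like this, with a hypothetical measurements table:

```sql
CREATE TABLE measurements (
    recorded_at timestamptz      NOT NULL,
    value       double precision NOT NULL
) PARTITION BY RANGE (recorded_at);

-- Every range needs its own partition, created ahead of time.
CREATE TABLE measurements_2024_01 PARTITION OF measurements
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
CREATE TABLE measurements_2024_02 PARTITION OF measurements
    FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');
-- ...and so on, every month, forever. Extensions like pg_partman
-- automate this, but that's one more moving part.
```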
And here's the weird one. No native connection pooling. Postgres just rejects new connections once max_connections is hit. You need an external pooler.
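You can watch yourself approach that wall from any SQL prompt; these are stock Postgres settings and views:

```sql
SHOW max_connections;    -- 100 by default

-- Connections open right now.
SELECT count(*) AS in_use FROM pg_stat_activity;

-- Past the limit, new clients are refused with
-- "FATAL: sorry, too many clients already". A pooler like
-- PgBouncer multiplexes thousands of client connections
-- onto a few dozen server ones.
```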
One developer advocate from QuestDB said they often see people move off Postgres for ingestion speed. When you're handling thousands or millions of rows per second, Postgres struggles.
People who name things
This is random but I think about it a lot. Someone named their Postgres extension "pgvector." Simple. Clear. No marketing fluff.
Compare that to enterprise databases with names like "SynergyDB Enterprise Cloud Solution." You know exactly which one was built by engineers for engineers.
The best tools have boring names. grep. sed. awk. Postgres. They just tell you what they do.
Who shouldn't bother with this
If you're already running Pinecone and it works fine, don't switch. Seriously.
If you need that last 10% of performance and money isn't an issue, specialty databases win. Weaviate can search 10 million embeddings in milliseconds. Postgres takes seconds.
If your data is time-series heavy with insane ingestion rates, use a time-series database.
Most people aren't in those situations. Most people have a few thousand users. Maybe a few hundred thousand. For them, adding five different databases is just complexity they don't need.
The real reason people do this
It's not about performance. Not really.
One developer said it's about faster feature development. One point of expertise instead of five. Unified monitoring. Single backup strategy.
When you use Postgres for everything, your junior developers don't need to learn four different query languages. Your ops team doesn't need to monitor four different systems. Your backup scripts work the same way everywhere.
That boring simplicity saves weeks of work. Maybe months.
I still think about that RudderStack decision. They could have gone with Kafka. It's what everyone uses for event streaming. But they chose Postgres because they needed to actually look at their queues. Debug them. Retry failed events.
That's not something benchmarks measure.