PostgreSQL vs MySQL — My Honest Take After Using Both in Production
by Eric Hanson, Backend Developer at Clean Systems Consulting
This debate was closer a decade ago
If you are starting a new project today, choose PostgreSQL. That is the short version. The longer version explains why MySQL still exists in production environments I work on, what genuine advantages it retains, and when you might legitimately reach for it.
What PostgreSQL does that MySQL cannot match
PostgreSQL's type system is materially richer than MySQL's. The types that change how you model data:
JSONB: Binary JSON storage with GIN indexing. You can store semi-structured documents and query into them efficiently. Not a replacement for a document database on high-write workloads, but for product catalogs, metadata, and configuration — where reads outnumber writes and the schema is fluid — it eliminates the need to add a separate MongoDB instance.
-- PostgreSQL JSONB — index specific JSON paths
CREATE INDEX idx_product_metadata ON products USING GIN (metadata);
-- Query into JSON without a full scan
SELECT id, name, metadata->>'brand'
FROM products
WHERE metadata @> '{"certifications": ["organic"]}'
AND metadata->>'category' = 'food';
-- MySQL has a JSON type but no GIN-equivalent index; containment queries
-- like the one above need a generated-column index or, since MySQL 8.0.17,
-- a multi-valued index — both narrower workarounds than a single GIN index
Arrays: Native array columns with GIN index support. For multi-value attributes — tags, user roles, feature flags per entity — PostgreSQL arrays eliminate the need for a junction table on read-heavy workloads.
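A minimal sketch of the pattern (the users table and column names here are illustrative, not from a real schema):

```sql
-- Native array column for a multi-value attribute: user roles
CREATE TABLE users (
    id    bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    email text NOT NULL,
    roles text[] NOT NULL DEFAULT '{}'
);

-- GIN index makes containment queries on the array fast
CREATE INDEX idx_users_roles ON users USING GIN (roles);

-- Find every user holding the 'admin' role; @> uses the GIN index
SELECT id, email FROM users WHERE roles @> ARRAY['admin'];
```

The trade-off versus a junction table is referential integrity: there is no foreign key from an array element to a roles table, which is why this fits read-heavy attribute lookups rather than relational data you join through.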
Table inheritance and partitioning: PostgreSQL's declarative partitioning (available since Postgres 10) handles time-series data and large table management cleanly. Partition pruning is automatic in query planning. MySQL's partitioning is functional but the implementation limitations (foreign key constraints not supported, partition-wise join missing until MySQL 8.0) have burned teams.
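Here is what declarative partitioning looks like in practice; the events table is a made-up example of a typical time-series layout:

```sql
-- Range-partition an append-heavy table by timestamp (Postgres 10+)
CREATE TABLE events (
    id          bigint NOT NULL,
    occurred_at timestamptz NOT NULL,
    payload     jsonb
) PARTITION BY RANGE (occurred_at);

-- One partition per month; old months can be detached or dropped cheaply
CREATE TABLE events_2025_01 PARTITION OF events
    FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');
CREATE TABLE events_2025_02 PARTITION OF events
    FOR VALUES FROM ('2025-02-01') TO ('2025-03-01');

-- The planner prunes this to events_2025_02 automatically
SELECT count(*) FROM events
WHERE occurred_at >= '2025-02-01' AND occurred_at < '2025-02-15';
```

Dropping a month of data becomes a metadata operation (DROP TABLE on the partition) instead of a long-running DELETE that bloats the table.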
Full-text search: PostgreSQL's tsvector / tsquery with GIN indexes handles basic to intermediate full-text search needs without adding Elasticsearch. For a product search or knowledge base where Elasticsearch is overkill, this covers the 80% case.
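The pattern, sketched against a hypothetical articles table — a generated tsvector column (Postgres 12+) keeps the index current without triggers:

```sql
-- Stored tsvector column maintained by the database itself
ALTER TABLE articles ADD COLUMN search_vec tsvector
    GENERATED ALWAYS AS (
        to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, ''))
    ) STORED;

CREATE INDEX idx_articles_search ON articles USING GIN (search_vec);

-- Ranked keyword search without a separate search service
SELECT id, title, ts_rank(search_vec, query) AS rank
FROM articles, to_tsquery('english', 'index & performance') AS query
WHERE search_vec @@ query
ORDER BY rank DESC
LIMIT 20;
```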
Window functions and CTEs: Both databases support these now, but PostgreSQL's implementation is more complete and better optimized. Recursive CTEs in particular are a PostgreSQL strength — hierarchical data (org charts, comment threads, category trees) is elegant in PostgreSQL, while MySQL only gained WITH RECURSIVE in 8.0, so anything older still forces procedural workarounds.
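For example, walking a category tree top-down takes one recursive CTE (categories here is an illustrative adjacency-list table with id, name, parent_id):

```sql
-- Expand a category hierarchy, tracking depth as we descend
WITH RECURSIVE category_tree AS (
    SELECT id, name, parent_id, 0 AS depth
    FROM categories
    WHERE parent_id IS NULL                 -- start from the roots
    UNION ALL
    SELECT c.id, c.name, c.parent_id, ct.depth + 1
    FROM categories c
    JOIN category_tree ct ON c.parent_id = ct.id
)
SELECT repeat('  ', depth) || name AS indented_name
FROM category_tree;
```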
What MySQL gets right
Replication simplicity: MySQL's binary log replication is battle-tested to an extreme degree. The tooling ecosystem — Percona Toolkit, ProxySQL, Orchestrator — is mature and well-documented. Teams with existing MySQL replication expertise will find this easier to operate than PostgreSQL's streaming replication and logical replication models, which are excellent but have a steeper learning curve for edge cases.
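Part of what makes this easy to operate is how little ceremony a replica needs. A sketch of the GTID-based setup on MySQL 8.0.23+ (host and credentials are placeholders; the source still needs server-id and binary logging configured in my.cnf):

```sql
-- On the replica: point it at the source and start replicating
CHANGE REPLICATION SOURCE TO
    SOURCE_HOST = 'db-primary.internal',
    SOURCE_USER = 'repl',
    SOURCE_PASSWORD = '<secret>',
    SOURCE_AUTO_POSITION = 1;   -- GTID-based positioning, no log coordinates

START REPLICA;

-- Health check: both IO and SQL "Running" columns should read 'Yes'
SHOW REPLICA STATUS\G
```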
Managed cloud availability: Amazon RDS for MySQL and Aurora MySQL are production-proven at enormous scale. Aurora MySQL's storage engine with 6-way replication and sub-second failover is one of the most reliable managed database products available. If your team's operational expertise is in RDS MySQL, the migration cost to RDS PostgreSQL is not zero — Aurora PostgreSQL is excellent, but the operational knowledge does not transfer entirely.
PHP/WordPress ecosystem compatibility: Significant portions of the web run on MySQL because the LAMP stack made MySQL the default for a generation of developers. If you are building tooling that integrates with or extends existing MySQL-backed systems, adding PostgreSQL is friction without benefit.
The one area where MySQL still surprises me
MySQL's InnoDB storage engine handles high-concurrency OLTP workloads with heavy row-level locking very well. In benchmarks I have run on comparable hardware (32 vCPU, 64GB RAM, NVMe SSD) with 500 concurrent connections doing mixed read-write on 100M row tables, MySQL 8.0 and PostgreSQL 15 are within 10-15% of each other for point-read and point-write throughput. Neither is a bottleneck at the loads most applications reach.
Where they diverge is on complex analytical queries alongside OLTP load. PostgreSQL's query planner handles this better — more sophisticated statistics, better join ordering, parallel query execution (available since Postgres 9.6). If your application runs dashboard-style analytical queries on your production OLTP database (common in mid-size companies that have not yet separated OLAP workloads), PostgreSQL holds up better under mixed load.
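The kind of query I mean — a made-up but representative dashboard aggregate over an orders table, the shape of workload where Postgres's parallel scan and aggregation pull ahead under concurrent OLTP traffic:

```sql
-- Daily revenue with a 7-day moving average; on a large orders table
-- the planner can parallelize the scan and partial aggregation
SELECT
    order_date,
    sum(total) AS daily_revenue,
    avg(sum(total)) OVER (
        ORDER BY order_date
        ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
    ) AS revenue_7d_avg
FROM orders
GROUP BY order_date
ORDER BY order_date;
```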
The migration question
If you are on MySQL and it is working, the migration cost to PostgreSQL is high and the reward is incremental unless you are hitting a specific MySQL limitation. Do not migrate because PostgreSQL is technically superior on paper.
Migrate if: you need JSONB for a semi-structured data problem, you are hitting MySQL's JSON index limitations, you need the full-text search capabilities without a separate search service, or your team is growing and new engineers strongly prefer PostgreSQL tooling.
Do not migrate if: your MySQL setup is stable, your team operates it well, and the performance is sufficient. "Better database" is not a business justification for a multi-week migration with non-trivial risk.
New projects: PostgreSQL. Not because MySQL is bad, but because PostgreSQL's feature set is broader and you are likely to need something from it within the first year.