After analyzing thousands of PostgreSQL databases, we've identified patterns in performance issues that appear repeatedly. Here are the five most common problems and how to fix them.

1. Missing Indexes on Foreign Keys

The Problem: PostgreSQL doesn't automatically create indexes on foreign key columns. Without them, joins on the key and ON DELETE/UPDATE CASCADE operations must scan the child table, which gets slower as the table grows.

How to Detect:

SELECT
    c.conname AS constraint_name,
    tbl.relname AS table_name,
    a.attname AS column_name
FROM pg_constraint c
JOIN pg_class tbl ON tbl.oid = c.conrelid
JOIN pg_attribute a
    ON a.attrelid = c.conrelid
    AND a.attnum = ANY(c.conkey)
LEFT JOIN pg_index i
    ON i.indrelid = c.conrelid
    AND a.attnum = ANY(i.indkey)
WHERE c.contype = 'f'
  AND i.indexrelid IS NULL;

The Fix:

-- Create index on foreign key
CREATE INDEX CONCURRENTLY idx_orders_customer_id ON orders(customer_id);
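A quick way to confirm the new index is actually used is to EXPLAIN a representative join. The orders/customers schema below is hypothetical, matching the example above:

-- Verify the planner picks up the index (hypothetical schema)
EXPLAIN (ANALYZE, BUFFERS)
SELECT o.*
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE c.id = 42;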

2. Sequential Scans on Large Tables

The Problem: Queries scanning entire tables instead of using indexes, often due to poor selectivity or outdated statistics.

How to Detect:

SELECT
    schemaname,
    relname AS tablename,
    seq_scan,
    seq_tup_read,
    idx_scan,
    CASE WHEN seq_scan > 0
         THEN round(100.0 * seq_scan / (seq_scan + COALESCE(idx_scan, 0)), 2)
         ELSE 0
    END AS seq_scan_pct
FROM pg_stat_user_tables
WHERE seq_scan > 1000
ORDER BY seq_tup_read DESC;

The Fix:

  1. Update statistics: ANALYZE table_name;
  2. Check query plans: EXPLAIN (ANALYZE, BUFFERS) SELECT ...
  3. Consider partial indexes for specific conditions (see the sketch after this list)
  4. Lower random_page_cost if SSD-based (the default of 4 assumes spinning disks)
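For item 3, a partial index covers only the rows your hot queries actually touch, keeping the index small and selective. A minimal sketch, assuming a hypothetical orders table where most lookups target pending rows:

-- Hypothetical table and status column
CREATE INDEX CONCURRENTLY idx_orders_pending
    ON orders (created_at)
    WHERE status = 'pending';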

3. Bloated Tables and Indexes

The Problem: Dead tuples accumulating due to inadequate vacuuming, causing unnecessary I/O.

How to Detect:

WITH bloat AS (
    SELECT
        schemaname,
        relname AS tablename,
        pg_size_pretty(pg_relation_size(relid)) AS size,
        n_dead_tup,
        n_live_tup,
        round(100.0 * n_dead_tup / NULLIF(n_live_tup + n_dead_tup, 0), 2) AS dead_pct
    FROM pg_stat_user_tables
)
SELECT * FROM bloat
WHERE dead_pct > 20
ORDER BY dead_pct DESC;

The Fix:

-- Aggressive vacuum
VACUUM (ANALYZE, VERBOSE) table_name;

-- For severe bloat, consider VACUUM FULL (locks the table!)
VACUUM FULL table_name;

-- Adjust autovacuum settings
ALTER TABLE table_name SET (
    autovacuum_vacuum_scale_factor = 0.05,
    autovacuum_analyze_scale_factor = 0.02
);
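Before adding more overrides, it helps to see which tables already carry per-table autovacuum settings; a minimal sketch reading reloptions from pg_class:

-- List tables with per-table storage/autovacuum overrides
SELECT relname, reloptions
FROM pg_class
WHERE reloptions IS NOT NULL
  AND relkind = 'r';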

4. Lock Contention and Blocking Queries

The Problem: Long-running transactions blocking other queries, causing application timeouts.

How to Detect:

SELECT
    blocked_locks.pid AS blocked_pid,
    blocked_activity.usename AS blocked_user,
    blocking_locks.pid AS blocking_pid,
    blocking_activity.usename AS blocking_user,
    blocked_activity.query AS blocked_query,
    blocking_activity.query AS blocking_query
FROM pg_catalog.pg_locks blocked_locks
JOIN pg_catalog.pg_stat_activity blocked_activity
    ON blocked_activity.pid = blocked_locks.pid
JOIN pg_catalog.pg_locks blocking_locks
    ON blocking_locks.locktype = blocked_locks.locktype
    AND blocking_locks.database IS NOT DISTINCT FROM blocked_locks.database
    AND blocking_locks.relation IS NOT DISTINCT FROM blocked_locks.relation
    AND blocking_locks.page IS NOT DISTINCT FROM blocked_locks.page
    AND blocking_locks.tuple IS NOT DISTINCT FROM blocked_locks.tuple
    AND blocking_locks.virtualxid IS NOT DISTINCT FROM blocked_locks.virtualxid
    AND blocking_locks.transactionid IS NOT DISTINCT FROM blocked_locks.transactionid
    AND blocking_locks.classid IS NOT DISTINCT FROM blocked_locks.classid
    AND blocking_locks.objid IS NOT DISTINCT FROM blocked_locks.objid
    AND blocking_locks.objsubid IS NOT DISTINCT FROM blocked_locks.objsubid
    AND blocking_locks.pid != blocked_locks.pid
JOIN pg_catalog.pg_stat_activity blocking_activity
    ON blocking_activity.pid = blocking_locks.pid
WHERE NOT blocked_locks.granted;
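On PostgreSQL 9.6 and later, the built-in pg_blocking_pids() function gives a much shorter view of the same information:

-- Simpler alternative on PostgreSQL 9.6+
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;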

The Fix:

  1. Set statement timeouts: SET statement_timeout = '30s';
  2. Use lock timeouts: SET lock_timeout = '10s'; (both can be persisted per role, as sketched below)
  3. Consider CONCURRENTLY options for index operations
  4. Review transaction isolation levels
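SET only affects the current session. To make these limits stick for an application, they can be persisted per role; the app_user role name below is a hypothetical placeholder:

-- Hypothetical role name; applies to every new session for that role
ALTER ROLE app_user SET statement_timeout = '30s';
ALTER ROLE app_user SET lock_timeout = '10s';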

5. Inefficient Query Plans

The Problem: Optimizer choosing suboptimal plans due to incorrect statistics or configuration.

How to Detect:

-- Find queries with high total time
-- (on PostgreSQL 13+, the columns are total_exec_time, mean_exec_time, stddev_exec_time)
SELECT query, calls, total_time, mean_time, stddev_time, rows
FROM pg_stat_statements
WHERE calls > 100
ORDER BY total_time DESC
LIMIT 20;
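Note that pg_stat_statements ships as an extension and is off by default; it must be preloaded and created before this query returns anything:

-- In postgresql.conf (requires a restart):
--   shared_preload_libraries = 'pg_stat_statements'
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;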

The Fix:

  1. Update planner statistics:

    ALTER TABLE table_name SET (autovacuum_analyze_scale_factor = 0.02);
    ANALYZE table_name;
  2. Adjust planner configuration:

    -- For SSD storage
    SET random_page_cost = 1.1;

    -- Increase statistics target for complex columns
    ALTER TABLE table_name ALTER COLUMN column_name SET STATISTICS 1000;
    -- The new target takes effect at the next ANALYZE
    ANALYZE table_name;
  3. Use optimizer hints (carefully):

    -- Force index usage (session-scoped; re-enable immediately after)
    SET enable_seqscan = OFF;
    -- Run query
    SET enable_seqscan = ON;
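    A safer variant scopes the override to a single transaction with SET LOCAL, so a forgotten reset cannot leak into the rest of the session:

    BEGIN;
    SET LOCAL enable_seqscan = OFF;
    -- Run the query here
    COMMIT;  -- the override reverts automatically at transaction end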

Prevention is Better Than Cure

While these fixes address immediate issues, the best approach is preventing them in the first place. This is where tools like Azimutt Inspector come in, continuously monitoring your database to catch these issues before they impact production.

Inspector automatically detects all five of these issues and more, providing specific remediation steps tailored to your workload. Try it free or see it in action.

Have questions about PostgreSQL performance? Reach out at [email protected] or find us on Twitter.