We've all been there: a rogue query is consuming all resources, your application is timing out, and you need to act fast. But how do you stop a PostgreSQL query without making things worse? Let's explore the right way to handle this critical situation.
The Two Weapons in Your Arsenal
PostgreSQL provides two functions for stopping queries, each with different levels of aggression:
1. pg_cancel_backend(pid) - The Gentle Approach
This function politely asks a query to stop:
- What it does: Cancels the query currently running on the connection pid
- Connection status: Keeps the connection alive
- Use case: When you want to stop a long-running query but keep the application connected
- Recovery: The connection can immediately run new queries
```sql
-- Cancel the current query
SELECT pg_cancel_backend(2381);

-- Cancel a specific query
SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE pid = 2381
  AND query_id = '51172547141599287';

-- Cancel all queries running for longer than 5 minutes
SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE state = 'active'
  AND query_start < now() - '5 minutes'::interval
  AND pid != pg_backend_pid()        -- Don't kill yourself
  AND pg_blocking_pids(pid) = '{}';  -- Don't kill blocked queries
```
2. pg_terminate_backend(pid) - The Nuclear Option
This function forcefully terminates the entire connection:
- What it does: Kills the backend process, closing the connection and rolling back any open transaction
- Connection status: Connection is closed
- Use case: For unresponsive sessions or when cancel doesn't work
- Recovery: Application must reconnect
```sql
-- Terminate a specific connection
SELECT pg_terminate_backend(2381);

-- Terminate all "idle in transaction" sessions older than 1 hour
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle in transaction'
  AND state_change < now() - '1 hour'::interval;
```
Real-World Scenarios and Solutions
Scenario 1: Runaway Analytics Query
A data analyst accidentally runs a massive JOIN without proper filters:
```sql
-- First, identify the culprit
SELECT pid, usename, application_name, client_addr,
       query_start, query_id, query
FROM pg_stat_activity
WHERE state = 'active'
  AND query_start < now() - '10 minutes'::interval
ORDER BY query_start;

-- Then cancel it gently
SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE state = 'active'
  AND pid = 2381
  AND query_id = '51172547141599287';
```
Scenario 2: Application Bug Causing Lock Wait
Your application has a bug causing transactions to hang:
```sql
-- Find blocking and blocked queries
SELECT blocked.pid       AS blocked_pid
     , blocked.usename   AS blocked_user
     , blocked.query     AS blocked_query
     , blocking.pid      AS blocking_pid
     , blocking.usename  AS blocking_user
     , blocking.query    AS blocking_query
FROM pg_stat_activity blocked
JOIN pg_stat_activity blocking
  ON blocking.pid = ANY(pg_blocking_pids(blocked.pid));

-- Terminate the blocking session (more aggressive, for lock issues)
SELECT pg_terminate_backend(blocking_pid)
FROM (
  SELECT DISTINCT blocking.pid AS blocking_pid
  FROM pg_stat_activity blocked
  JOIN pg_stat_activity blocking
    ON blocking.pid = ANY(pg_blocking_pids(blocked.pid))
  WHERE blocked.wait_event_type = 'Lock'
) blockers;
```
Scenario 3: Too Many Idle Connections
Your connection pool isn't working correctly:
```sql
-- Clean up idle connections older than 30 minutes
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle'
  AND state_change < now() - '30 minutes'::interval
  AND application_name NOT IN ('critical_app', 'monitoring');
```
Safety First: The Golden Rules
Rule 1: Always Preview Before Executing
```sql
-- ALWAYS run this first
SELECT pid, usename, application_name, query, query_start
FROM pg_stat_activity
WHERE /* your conditions */;

-- THEN run the cancel/terminate
```
Rule 2: Protect Critical Processes
```sql
-- Never kill these
SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE state = 'active'
  AND application_name NOT IN ('replication', 'backup', 'critical_etl')
  AND usename NOT IN ('postgres', 'replication_user')
  AND pid <> pg_backend_pid();
```
Rule 3: Start Gentle, Escalate if Needed
```sql
-- Step 1: Try cancel first
SELECT pg_cancel_backend(2381);

-- Wait 30 seconds...

-- Step 2: If still there, terminate
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE pid = 2381;  -- Verify it still exists
```
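If you repeat this escalation often, the two steps can be wrapped in one helper. A minimal PL/pgSQL sketch — the function name and the default 30-second wait are my own choices, not built-in PostgreSQL behavior:

```sql
-- Hypothetical helper: cancel first, wait, then terminate if still active.
CREATE OR REPLACE FUNCTION cancel_then_terminate(target_pid int, wait_seconds int DEFAULT 30)
RETURNS text AS $$
BEGIN
  PERFORM pg_cancel_backend(target_pid);  -- gentle attempt
  PERFORM pg_sleep(wait_seconds);         -- give the query time to abort
  IF EXISTS (SELECT 1 FROM pg_stat_activity
             WHERE pid = target_pid AND state = 'active') THEN
    PERFORM pg_terminate_backend(target_pid);  -- escalate
    RETURN 'terminated';
  END IF;
  RETURN 'cancelled';
END;
$$ LANGUAGE plpgsql;

-- Usage: SELECT cancel_then_terminate(2381);
```

Note that the function holds your session for the duration of `pg_sleep`, which is fine for an admin console but not for an application path.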
Common Mistakes to Avoid
- Killing your own session: Always exclude pg_backend_pid()
- Not checking application_name: You might kill critical processes
- Using terminate when cancel would work: Try the gentle approach first
- Not logging actions: Keep track of what you killed and why
- Killing replication connections: This can break your standby servers
Setting meaningful user and application names is key to avoiding most of these mistakes.
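To make "log your actions" concrete, one approach is to record the target in an audit table in the same statement that kills it. A sketch — the `kill_log` table and its columns are illustrative, not a PostgreSQL feature:

```sql
-- Hypothetical audit table; adapt names and columns to your needs.
CREATE TABLE IF NOT EXISTS kill_log (
  killed_at   timestamptz DEFAULT now(),
  killed_by   text        DEFAULT current_user,
  target_pid  int,
  target_user text,
  query       text,
  reason      text
);

-- Log, then terminate, in one statement
WITH target AS (
  SELECT pid, usename, query
  FROM pg_stat_activity
  WHERE pid = 2381
), logged AS (
  INSERT INTO kill_log (target_pid, target_user, query, reason)
  SELECT pid, usename, query, 'runaway analytics query' FROM target
)
SELECT pg_terminate_backend(pid) FROM target;
```

This way every kill leaves a trace of who did it, what was running, and why.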
Monitoring and Prevention
Instead of constantly firefighting, set up proactive monitoring:
```sql
-- Create a view for problematic queries
CREATE VIEW problematic_queries AS
SELECT pid
     , usename
     , application_name
     , state
     , now() - query_start AS duration
     , wait_event
     , query
FROM pg_stat_activity
WHERE (state = 'active' AND query_start < now() - '5 minutes'::interval)
   OR (state = 'idle in transaction' AND state_change < now() - '10 minutes'::interval)
   OR (state IS NOT NULL AND wait_event IS NOT NULL)
ORDER BY CASE state
           WHEN 'active' THEN 1
           WHEN 'idle in transaction' THEN 2
           ELSE 3
         END,
         duration DESC;

-- Check regularly
SELECT * FROM problematic_queries;
```
The Nuclear Emergency Option
In extreme cases where the database is completely unresponsive:
```sql
-- Terminate ALL non-system connections (USE WITH EXTREME CAUTION)
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'your_database'
  AND pid <> pg_backend_pid()
  AND usename NOT IN ('postgres', 'replication_user');
```
Key Takeaways
- pg_cancel_backend = surgical strike (query only)
- pg_terminate_backend = carpet bombing (entire connection)
- Always preview your targets before firing
- Protect critical processes with exclusion lists
- Automate common scenarios with safe functions
- Monitor proactively to prevent emergency situations
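As an example of "automate common scenarios with safe functions", the exclusion rules can be baked into a reusable helper so ad-hoc firefighting never bypasses them. A sketch — the function name and the protected-user/app lists are examples to adapt to your environment:

```sql
-- Hypothetical "safe cancel" helper with the exclusion lists built in.
CREATE OR REPLACE FUNCTION safe_cancel_long_queries(max_duration interval DEFAULT '5 minutes')
RETURNS TABLE (cancelled_pid int, ok boolean) AS $$
  SELECT pid, pg_cancel_backend(pid)
  FROM pg_stat_activity
  WHERE state = 'active'
    AND query_start < now() - max_duration
    AND pid <> pg_backend_pid()                                        -- never yourself
    AND usename NOT IN ('postgres', 'replication_user')                -- protected users
    AND application_name NOT IN ('replication', 'backup', 'critical_etl');  -- protected apps
$$ LANGUAGE sql;

-- Usage: SELECT * FROM safe_cancel_long_queries('10 minutes');
```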
Next Steps
Now that you can handle queries reactively, the next post will cover proactive monitoring with pg_stat_statements to identify performance issues before they become critical.
For automated query management and intelligent alerting, check out Azimutt Inspector, which handles these scenarios automatically while you sleep.
Need Expert Guidance?
Dealing with runaway queries or complex locking issues? Want personalized advice for your specific PostgreSQL setup? I offer a free 30-minute consultation session to help you solve your database challenges.
📅 Book a free discussion slot - Let's troubleshoot your database together!
Remember: with great power comes great responsibility. Use these commands wisely!