Introduction: Why PostgreSQL Deadlock Debugging Matters
When a production PostgreSQL database freezes under load, the root cause is often hidden behind locks: long waits, blocked transactions, and sometimes hard deadlocks. In my experience, teams usually notice this only after users start complaining that the app is “stuck” and queries never finish.
PostgreSQL deadlock debugging is critical because a single bad pattern in application code or schema design can cascade into timeouts, failed transactions, and lost revenue. The good news is that PostgreSQL already exposes most of what we need to diagnose these issues quickly, especially through system views like pg_locks and related catalog tables.
In this guide, I’ll walk through how I approach a live incident: using pg_locks and query inspection to identify which sessions are blocking others, pinpoint the exact SQL and objects involved, and decide whether to kill sessions, change lock order, or adjust indexes. The goal is to give you a practical, repeatable workflow you can apply the next time your production database starts deadlocking under pressure.
How PostgreSQL Deadlocks and Lock Contention Actually Happen
When I’m debugging lock issues in PostgreSQL, I always come back to three basics: what is locked, how transactions overlap, and which isolation level is in use. Once those are clear, the symptoms in the logs start to make sense.
Row and Table Locks in Practice
PostgreSQL uses row-level locks (like FOR UPDATE) and heavier table-level locks (like ACCESS EXCLUSIVE). A typical application request opens a transaction, touches a few rows, and holds those locks until COMMIT or ROLLBACK. Under light load, this is invisible; under heavy concurrency, those held locks turn into long wait chains.
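A minimal sketch of that lifecycle, using a hypothetical accounts table:

```sql
-- Hypothetical accounts table; the row lock taken here lives until COMMIT.
BEGIN;
SELECT * FROM accounts WHERE id = 1 FOR UPDATE;  -- takes a row-level lock on id = 1
-- ... application work happens here; the lock is held the whole time ...
COMMIT;  -- only now is the row lock released

-- Meanwhile, any other session running this blocks until the COMMIT above:
-- UPDATE accounts SET balance = balance - 100 WHERE id = 1;
```

The longer the work between BEGIN and COMMIT, the longer everyone else queues behind that row.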
How a Deadlock Forms
A deadlock happens when two or more transactions wait on each other in a cycle. For example, one transaction updates row A then tries to update row B, while another transaction has already locked row B and then tries to lock row A. Neither can proceed, so PostgreSQL detects the cycle and aborts one of them with a deadlock error. In my experience, inconsistent lock ordering across code paths is the most common root cause.
Role of Transaction Isolation
PostgreSQL’s default READ COMMITTED isolation hides many conflicts by always reading the latest committed version of a row, but write/write conflicts still require locks. At stricter levels like SERIALIZABLE, the database must protect against anomalies, which increases contention and can surface more serialization failures and deadlocks. When I see frequent deadlocks in the logs, I check both the isolation level and whether the same rows are being touched in different orders across transactions (see the PostgreSQL documentation, “13.2. Transaction Isolation”).
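Checking and adjusting the isolation level is quick to do from psql:

```sql
-- Check the session default and the level of the current transaction:
SHOW default_transaction_isolation;
SHOW transaction_isolation;

-- Run a single transaction at a stricter level without changing the default:
BEGIN ISOLATION LEVEL SERIALIZABLE;
-- ... statements ...
COMMIT;
```

This is also an easy way to confirm whether an ORM or connection pool is silently overriding the server default.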
Using pg_locks and pg_stat_activity to Inspect Blocking Sessions
When I’m paged for lock issues, my first move is always to join pg_locks with pg_stat_activity. That immediately shows me who is blocked, who is doing the blocking, and what SQL is involved. Once I have that, deciding what to kill or tune becomes straightforward.
Find Blocked Sessions and Their Blockers
This query is the one I keep ready to paste into psql during a live PostgreSQL deadlock debugging session. It lists each blocked backend, the blocking backend, and the queries on both sides:
SELECT
blocked.pid AS blocked_pid,
blocked.usename AS blocked_user,
blocked.query AS blocked_query,
blocking.pid AS blocking_pid,
blocking.usename AS blocking_user,
blocking.query AS blocking_query,
blocked.wait_event_type,
blocked.wait_event,
now() - blocked.query_start AS blocked_for
FROM pg_locks bl
JOIN pg_stat_activity blocked
ON bl.pid = blocked.pid
JOIN pg_locks kl
ON bl.locktype = kl.locktype
AND bl.database IS NOT DISTINCT FROM kl.database
AND bl.relation IS NOT DISTINCT FROM kl.relation
AND bl.page IS NOT DISTINCT FROM kl.page
AND bl.tuple IS NOT DISTINCT FROM kl.tuple
AND bl.virtualxid IS NOT DISTINCT FROM kl.virtualxid
AND bl.transactionid IS NOT DISTINCT FROM kl.transactionid
AND bl.classid IS NOT DISTINCT FROM kl.classid
AND bl.objid IS NOT DISTINCT FROM kl.objid
AND bl.objsubid IS NOT DISTINCT FROM kl.objsubid
JOIN pg_stat_activity blocking
ON kl.pid = blocking.pid
WHERE NOT bl.granted
AND kl.granted
AND bl.pid <> kl.pid
ORDER BY blocked_for DESC;
In my experience, this single view of blocked vs. blocking sessions answers 80% of the questions during an incident: it shows which query patterns and tables are causing everything else to line up behind them.
Spot Long-Running and Lock-Heavy Queries
Once I know the blockers, I want to see which ones have been running too long and are likely to cause deadlocks or widespread contention. This query surfaces long-running statements along with basic lock context:
SELECT
a.pid,
a.usename,
a.state,
a.query,
now() - a.query_start AS runtime,
l.locktype,
l.mode,
l.granted,
l.relation::regclass AS relation
FROM pg_stat_activity a
LEFT JOIN pg_locks l
ON a.pid = l.pid
WHERE a.state <> 'idle'
AND a.query_start IS NOT NULL
ORDER BY runtime DESC
LIMIT 50;
When I first started doing this kind of work, I underestimated how often a single long-running maintenance or reporting query would silently hold locks that block normal OLTP traffic. This view makes those easy to spot so I can choose to cancel them, reschedule them, or tune the queries and indexes involved.
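Once I have decided a blocker has to go, two built-in functions do the job (the PID here is a placeholder taken from the queries above):

```sql
-- Politely cancel the current statement of a blocking backend:
SELECT pg_cancel_backend(12345);

-- If the session is idle in transaction, cancel won't release its locks;
-- terminating the backend rolls back its transaction and frees everything:
SELECT pg_terminate_backend(12345);
```

I reach for pg_cancel_backend() first; pg_terminate_backend() is the bigger hammer and drops the client connection entirely.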
pg_blocking_pids and Modern Shortcuts for PostgreSQL Deadlock Debugging
On newer PostgreSQL versions, I lean heavily on pg_blocking_pids() to skip the complicated lock-graph joins. It tells me, for any backend, exactly which other PIDs are blocking it, which makes live PostgreSQL deadlock debugging much faster.
Quickly See Who Is Blocking Whom
Instead of building a big join between pg_locks and pg_stat_activity, I now start with a simple query that uses pg_blocking_pids() to list blocked backends and their blockers side by side:
SELECT
a.pid AS blocked_pid,
a.usename AS blocked_user,
a.query AS blocked_query,
pg_blocking_pids(a.pid) AS blocking_pids,
a.wait_event_type,
a.wait_event,
now() - a.query_start AS blocked_for
FROM pg_stat_activity a
WHERE cardinality(pg_blocking_pids(a.pid)) > 0
ORDER BY blocked_for DESC;
In my experience, this becomes the new “first look” query: if the blocking_pids array is non-empty, I know exactly which sessions to inspect next, without manually reasoning through every lock row.
Expand Blocking Chains Into Readable Rows
To see the full picture, including the SQL text on the blocking side, I use unnest() to turn the blocking PID array into rows and join back to pg_stat_activity:
SELECT
blocked.pid AS blocked_pid,
blocked.query AS blocked_query,
blocking.pid AS blocking_pid,
blocking.query AS blocking_query,
now() - blocked.query_start AS blocked_for
FROM pg_stat_activity blocked
JOIN LATERAL unnest(pg_blocking_pids(blocked.pid)) AS b(pid)
ON true
JOIN pg_stat_activity blocking
ON blocking.pid = b.pid
ORDER BY blocked_for DESC;
When I adopted this pattern, it replaced a whole library of custom lock-inspection scripts. It also made it much easier to explain incidents to teammates, since the output reads almost like a conversation: “this query is blocked by that query,” with the chain clearly spelled out (see, for example, the Stack Overflow thread “Postgres session is blocked by PID 0 (because of .Net core transactions?)”).
Reading Deadlock Error Logs and Reproducing the Scenario
Whenever I’m called in after the fact, the deadlock section in the PostgreSQL logs is my primary clue. If I can translate that log into a concrete sequence of statements, I can usually either fix the problem or at least reproduce it in a safe environment.
Understanding the Deadlock Log Output
PostgreSQL’s deadlock logs usually show:
- The statement that was canceled.
- The other transaction(s) it was waiting on.
- The lock types and objects (often relation or tuple identifiers).
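For reference, a typical entry has this shape (the PIDs, transaction IDs, tuple location, and the accounts table are illustrative):

```
ERROR:  deadlock detected
DETAIL:  Process 18465 waits for ShareLock on transaction 1023; blocked by process 18464.
         Process 18464 waits for ShareLock on transaction 1024; blocked by process 18465.
HINT:  See server log for query details.
CONTEXT:  while updating tuple (0,2) in relation "accounts"
STATEMENT:  UPDATE accounts SET balance = balance - 100 WHERE id = 1;
```

The DETAIL lines describe the cycle; the STATEMENT line tells you which side PostgreSQL chose to abort.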
I start by mapping relation OIDs to table names with a ::regclass cast (or a lookup against pg_class), so the log entries reference real tables instead of bare numbers. Then I pay close attention to the order of locks: which transaction locked which table or row first, and what it tried to lock next. That lock order often reveals the inconsistent pattern in application code that created the cycle.
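The OID-to-name translation is a one-liner (16407 is a placeholder OID from a hypothetical log entry):

```sql
-- Map an OID from the deadlock log to a table name:
SELECT 16407::regclass;

-- Or go through pg_class if you also want the schema:
SELECT n.nspname, c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.oid = 16407;
```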
Reproducing the Deadlock With Simple Sessions
To make the issue understandable (and test fixes), I like to recreate the deadlock using two psql sessions, each mimicking one side of the conflict. Here’s a minimal example I use when explaining the pattern to teammates:
-- session 1
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1; -- locks row 1

-- session 2
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 2; -- locks row 2

-- back to session 1
UPDATE accounts SET balance = balance - 100 WHERE id = 2; -- waits on session 2

-- back to session 2
UPDATE accounts SET balance = balance - 100 WHERE id = 1; -- deadlock cycle
In my experience, building a trimmed-down script like this for the real production scenario is the fastest way to confirm that I’ve correctly understood the deadlock and to validate any changes in lock ordering, indexing, or transaction boundaries before they hit production again.
Practical Fixes: Reducing Lock Contention and Avoiding Deadlocks
Once I’ve used pg_locks and the logs to understand a deadlock, the real work begins: changing queries and schemas so it doesn’t happen again. The good news is that a few concrete patterns solve most PostgreSQL deadlock debugging cases I see in practice.
Standardize Lock Ordering in Your Application
The most effective rule I’ve applied with teams is: always touch shared resources in the same order. If multiple code paths update rows in tables A and B, decide on a global order (say, A then B) and enforce it everywhere. In ORMs, this often means being explicit about query order and avoiding “grab whatever row first” patterns that depend on data rather than a stable key order.
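One way to enforce that rule in SQL itself is to lock every row a transaction will touch up front, in a deterministic key order (sketched here against a hypothetical accounts table):

```sql
-- Lock the rows in a stable global order (lowest id first) before writing,
-- so every code path acquires them in the same sequence:
BEGIN;
SELECT id
FROM accounts
WHERE id IN (1, 2)
ORDER BY id
FOR UPDATE;

UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
```

If every transfer-style transaction starts with the same ordered FOR UPDATE, the A-then-B vs. B-then-A cycle from the deadlock example cannot form.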
Use Indexes and Shorter Transactions to Hold Locks for Less Time
Poor indexing quietly amplifies lock contention. A missing index can force a sequential scan, holding locks on many more rows for much longer. I like to check slow, blocking queries with EXPLAIN (ANALYZE, BUFFERS) and add indexes that narrow the locked row set.
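A safe way to inspect a write path’s plan without committing its effects (the customer_id column and index name are assumptions for illustration):

```sql
-- EXPLAIN ANALYZE actually executes the statement, so wrap it in a rollback:
BEGIN;
EXPLAIN (ANALYZE, BUFFERS)
UPDATE accounts SET balance = balance - 100 WHERE customer_id = 42;
ROLLBACK;

-- If the plan shows a Seq Scan, a narrowing index shrinks the locked row set.
-- (CONCURRENTLY avoids blocking writes, but cannot run inside a transaction.)
CREATE INDEX CONCURRENTLY idx_accounts_customer_id ON accounts (customer_id);
```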
At the same time, I aggressively shrink transaction scope: move non-critical work (logging, external calls, heavy reads) outside the transaction, commit earlier, and avoid long-lived interactive transactions. Even shaving a second or two off critical write transactions can dramatically reduce contention during peak traffic (see, for example, “Improve PostgreSQL performance: Diagnose and mitigate lock manager contention”).
Right-Size Isolation Levels and Locking Constructs
In my experience, some systems default to stricter isolation or heavier locks than they really need. Dropping from SERIALIZABLE to REPEATABLE READ or READ COMMITTED for non-critical workloads can lower contention, as long as you understand the trade-offs. Similarly, replacing aggressive patterns like SELECT … FOR UPDATE on large result sets with more targeted row locking (or optimistic concurrency checks using version columns) often removes the deadlock pressure without sacrificing correctness.
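The version-column pattern looks like this (a minimal sketch, assuming the accounts table carries an integer version column):

```sql
-- Read without taking any locks, remembering the version we saw:
SELECT id, balance, version FROM accounts WHERE id = 1;

-- Write only if nobody changed the row in between; if 0 rows are
-- updated, the application re-reads and retries instead of blocking:
UPDATE accounts
SET balance = balance - 100,
    version = version + 1
WHERE id = 1
  AND version = 7;  -- the version read above (placeholder value)
```

The trade-off is a retry loop in application code, but no row locks are held across the read-modify-write gap, which removes the window in which deadlocks form.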
Conclusion and Quick PostgreSQL Deadlock Debugging Checklist
Over the years, I’ve learned that successful PostgreSQL deadlock debugging is mostly about having a clear, repeatable routine. With a few stock queries and a mental checklist, even messy production incidents become manageable instead of chaotic.
Here’s the quick checklist I keep handy during incidents:
- Confirm the symptom: Look for deadlock errors and lock wait messages in logs and application traces.
- List blocked sessions: Query pg_stat_activity with pg_blocking_pids() or your favorite pg_locks join.
- Identify the blocker: Focus on long-running, lock-heavy queries and maintenance jobs holding key locks.
- Inspect SQL and tables: Map relation OIDs to table names and review the exact statements on both sides.
- Reproduce simply: Recreate the lock sequence with two or more psql sessions to validate your understanding.
- Apply fixes: Standardize statement ordering, improve indexes, shorten transactions, and right-size isolation levels.
- Retest under load: Run the reproduced scenario (or a load test) with the new patterns to confirm the deadlock is gone.
If you turn this checklist into a runbook for your team, the next time a deadlock hits at 2 a.m., you’ll have a calm, deliberate process to follow instead of starting from scratch.

Hi, I’m Cary Huang — a tech enthusiast based in Canada. I’ve spent years working with complex production systems and open-source software. Through TechBuddies.io, my team and I share practical engineering insights, curate relevant tech news, and recommend useful tools and products to help developers learn and work more effectively.
