Deadlocks in SQL: Why They Happen and How to Avoid Them
by Eric Hanson, Backend Developer at Clean Systems Consulting
The error you can't reproduce locally
ERROR: deadlock detected. DETAIL: Process 14823 waits for ShareLock on transaction 7291; blocked by process 14819. Process 14819 waits for ShareLock on transaction 7283; blocked by process 14823.
Deadlocks show up in production logs intermittently, always under load, never in single-threaded testing. A junior developer retries the failed transaction and it works. The issue is marked resolved. It happens again in two weeks.
Deadlocks are not random. They are the deterministic result of two or more transactions acquiring locks in incompatible orders. If the code that caused the deadlock is still running, it will deadlock again under the same concurrency conditions.
The minimal deadlock example
Two transactions, two rows:
-- Session A:
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;  -- locks row 1

-- Session B:
BEGIN;
UPDATE accounts SET balance = balance - 50 WHERE id = 2;   -- locks row 2

-- Session A:
UPDATE accounts SET balance = balance + 100 WHERE id = 2;  -- waits for row 2 (locked by B)

-- Session B:
UPDATE accounts SET balance = balance + 50 WHERE id = 1;   -- waits for row 1 (locked by A)

-- DEADLOCK: neither session can proceed
Transaction A holds row 1 and wants row 2. Transaction B holds row 2 and wants row 1. Neither can proceed. The database detects this cycle (via a wait-for graph) and aborts one transaction, allowing the other to complete.
The database chooses which transaction to abort based on cost — typically it picks the transaction that has done less work. The aborted transaction receives an error; it must be retried by the application.
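The wait-for graph the engine consults can be sketched in a few lines of Python (a toy model for intuition, not PostgreSQL's actual implementation): each edge points from a waiting transaction to the transaction holding the lock it needs, and a deadlock is simply a cycle in that graph.

```python
def find_cycle(wait_for):
    """Detect a cycle in a wait-for graph.

    wait_for maps each transaction to the transaction it is blocked on
    (transactions that aren't waiting are simply absent).
    Returns the list of transactions forming a cycle, or None.
    """
    for start in wait_for:
        seen = []
        node = start
        while node in wait_for:
            if node in seen:
                return seen[seen.index(node):]  # the cycle itself
            seen.append(node)
            node = wait_for[node]
    return None

# The minimal example above: A waits for B, B waits for A
print(find_cycle({"A": "B", "B": "A"}))  # -> ['A', 'B']
print(find_cycle({"A": "B", "B": "C"}))  # -> None (a chain, not a cycle)
```

A chain of waiters is harmless (it resolves as soon as the head commits); only a closed cycle forces the database to abort someone.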
Prevention strategy 1: consistent lock ordering
The canonical fix. If every transaction acquires locks in the same order, circular dependencies are impossible.
# Wrong: order of IDs varies per call
def transfer(from_id, to_id, amount):
    with transaction():
        debit(from_id, amount)
        credit(to_id, amount)

# transfer(1, 2, 100) and transfer(2, 1, 50) can deadlock

# Right: always lock the lower ID first
def transfer(from_id, to_id, amount):
    first_id, second_id = sorted([from_id, to_id])
    with transaction():
        lock_row(first_id)
        lock_row(second_id)
        debit(from_id, amount)
        credit(to_id, amount)
This is the simplest and most reliable deadlock prevention technique. The tradeoff: it requires you to know all the rows you'll lock before the transaction starts. When that's not possible, other strategies apply.
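The same principle can be demonstrated outside the database with plain Python threads (a toy stand-in for row locks, not real SQL): with the sorted acquisition order, the two opposing transfers that would deadlock under naive ordering run to completion.

```python
import threading

balances = {1: 1000, 2: 1000}
row_locks = {1: threading.Lock(), 2: threading.Lock()}

def transfer(from_id, to_id, amount):
    # Always acquire locks in ascending ID order, regardless of
    # transfer direction — no circular wait is possible
    first_id, second_id = sorted([from_id, to_id])
    with row_locks[first_id], row_locks[second_id]:
        balances[from_id] -= amount
        balances[to_id] += amount

# Opposing transfers: the classic deadlock pair under naive ordering
threads = [
    threading.Thread(target=transfer, args=(1, 2, 100)),
    threading.Thread(target=transfer, args=(2, 1, 50)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balances)  # {1: 950, 2: 1050} — both transfers completed
```

If you replace `sorted([from_id, to_id])` with `[from_id, to_id]`, the same two threads can hang forever, which is exactly the database scenario minus the deadlock detector.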
Prevention strategy 2: SELECT FOR UPDATE to lock upfront
Instead of locking rows when they're first modified, lock them at the start of the transaction in a consistent order using SELECT FOR UPDATE:
BEGIN;
-- Lock both rows upfront, in ID order, before any modification
SELECT id FROM accounts WHERE id IN (1, 2) ORDER BY id FOR UPDATE;
-- Now modify freely — locks are already held in consistent order
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
SELECT ... FOR UPDATE acquires an exclusive row lock immediately. Since all transactions acquire locks in the same order (ORDER BY id), circular dependencies are prevented.
SELECT ... FOR SHARE acquires a shared lock — multiple transactions can hold it simultaneously, but it blocks exclusive locks. Use it when you need to prevent a row from being modified while you read it, without preventing other readers.
Prevention strategy 3: reduce lock scope and duration
Deadlocks are more likely when transactions hold many locks for a long time. Reducing lock scope reduces the probability of overlap:
- Keep transactions short (see the prior article on transaction scope)
- Lock only the rows you need to modify, not rows you only read
- Avoid unnecessary reads inside transactions that could cause table-level locks
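A sketch of the first two points, with `sqlite3` standing in for the production database (SQLite locks the whole database rather than individual rows, but the shape of the pattern is the same): do the reads and computation before the transaction, and hold the write transaction only for the update itself.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 1000), (2, 1000)])
conn.commit()

# Read and compute OUTSIDE the transaction — no write locks held here
rows = conn.execute("SELECT id, balance FROM accounts").fetchall()
fee = sum(balance for _, balance in rows) // 100  # derived value: 20

# Hold the write transaction only for the brief update itself
with conn:  # opens a transaction; commits (or rolls back) on exit
    conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = 1", (fee,))

print(conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0])  # 980
```

The narrower the window in which locks are held, the smaller the chance that two transactions overlap on the same rows at all.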
Prevention strategy 4: SKIP LOCKED for queue-style workloads
When multiple workers process a queue of tasks from a database table, deadlocks occur when workers compete for the same rows:
-- Worker A and Worker B both try to lock the same "next available" row
SELECT * FROM jobs WHERE status = 'pending' LIMIT 1 FOR UPDATE;
SKIP LOCKED tells each worker to skip rows already locked by another worker:
-- Each worker gets a different row — no contention, no deadlock
SELECT * FROM jobs WHERE status = 'pending'
ORDER BY created_at
LIMIT 1
FOR UPDATE SKIP LOCKED;
This is the correct pattern for database-backed work queues. The SKIP LOCKED syntax is supported in PostgreSQL (9.5+), MySQL (8.0+), and Oracle; SQL Server offers the equivalent behavior via the READPAST table hint.
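The claim-and-skip behavior is easy to model with non-blocking lock acquisition in plain Python (a toy analogy for SKIP LOCKED, not a database client): each worker tries each job's lock without blocking and simply moves on when another worker already holds it.

```python
import threading

jobs = {f"job-{i}": threading.Lock() for i in range(10)}
processed = []
processed_guard = threading.Lock()

def worker():
    for job_id, lock in jobs.items():
        # Non-blocking acquire is the "skip locked" move: if another
        # worker already claimed this job, skip it instead of waiting
        if lock.acquire(blocking=False):
            with processed_guard:
                processed.append(job_id)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(processed))  # 10 — every job claimed exactly once, by some worker
```

Because no worker ever waits on a lock, no wait-for cycle can form; contention turns into load balancing instead of blocking.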
When deadlocks happen anyway
Despite prevention efforts, deadlocks can occur due to implicit lock acquisition order (triggered by indexes, foreign keys, or triggers). The correct response is to make your application deadlock-tolerant:
import psycopg2
import time

MAX_RETRIES = 3

def execute_with_retry(transaction_fn):
    for attempt in range(MAX_RETRIES):
        try:
            # transaction_fn must start a fresh transaction on each call:
            # the aborted transaction was rolled back and cannot be resumed
            return transaction_fn()
        except psycopg2.errors.DeadlockDetected:
            if attempt == MAX_RETRIES - 1:
                raise
            # Brief backoff before retry
            time.sleep(0.05 * (attempt + 1))
Retry logic is not a substitute for prevention — a deadlock under load will retry repeatedly, amplifying the problem. But it's a necessary safety net.
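The retry shape can be exercised without a live database by parameterizing the exception. The `FakeDeadlock` class and `flaky_transaction` helper below are illustrative stand-ins, not part of psycopg2:

```python
import time

MAX_RETRIES = 3

class FakeDeadlock(Exception):
    """Stand-in for psycopg2.errors.DeadlockDetected in this sketch."""

def execute_with_retry(transaction_fn, retryable=(FakeDeadlock,)):
    for attempt in range(MAX_RETRIES):
        try:
            return transaction_fn()
        except retryable:
            if attempt == MAX_RETRIES - 1:
                raise
            time.sleep(0.01 * (attempt + 1))  # brief backoff before retrying

attempts = []

def flaky_transaction():
    """Fails twice with a simulated deadlock, then commits."""
    attempts.append(1)
    if len(attempts) < 3:
        raise FakeDeadlock("deadlock detected")
    return "committed"

result = execute_with_retry(flaky_transaction)
print(result, "after", len(attempts), "attempts")  # committed after 3 attempts
```

In a real application the retryable tuple would name the driver's deadlock (and often serialization-failure) exception classes, and the backoff would usually include jitter so colliding transactions don't retry in lockstep.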
Diagnosing deadlocks in production
PostgreSQL checks for deadlocks only after a lock wait has lasted longer than deadlock_timeout (default 1 second); when it finds one, it aborts a transaction and logs the details at ERROR level, including the processes involved and the queries they were running. Enable log_lock_waits = on to also log lock waits that exceed deadlock_timeout without becoming deadlocks. The lock ordering problem is usually apparent from the two logged queries.
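A minimal postgresql.conf fragment for deadlock diagnosis might look like this (deadlock_timeout is shown at its default; raising it trades slower detection for less checking overhead):

```
# postgresql.conf
deadlock_timeout = 1s    # how long a lock wait lasts before a deadlock check runs
log_lock_waits = on      # log any wait longer than deadlock_timeout, deadlock or not
```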