Deadlocks in Java — How They Form, How to Find Them, and How to Design Around Them
by Eric Hanson, Backend Developer at Clean Systems Consulting
The four conditions — all must hold simultaneously
A deadlock requires four conditions, first described by Coffman et al. in 1971. All four must hold simultaneously; eliminating any one prevents the deadlock:
Mutual exclusion: at least one resource is non-sharable, held by at most one thread at a time. In Java this is a synchronized monitor or a Lock — only one thread can hold it.
Hold and wait: a thread holds at least one resource while waiting to acquire more. For example, a thread that holds lockA and is blocked waiting for lockB.
No preemption: resources cannot be forcibly taken from a thread. A thread releases a lock only voluntarily — by exiting the synchronized block or calling unlock().
Circular wait: a closed chain of threads exists in which each holds a resource the next is waiting for. Thread 1 holds lockA and wants lockB; Thread 2 holds lockB and wants lockA.
The fourth condition — circular wait — is the one you can eliminate outright by design. Mutual exclusion and hold-and-wait are inherent to Java's exclusive locking model, and "no preemption" can only be relaxed with timed lock attempts (strategy 2 below).
The canonical deadlock
public class TransferService {
    public void transfer(Account from, Account to, long amount) {
        synchronized (from) {     // acquires from's lock
            synchronized (to) {   // acquires to's lock
                from.debit(amount);
                to.credit(amount);
            }
        }
    }
}
Thread 1: transfer(accountA, accountB, 100) — acquires accountA, waits for accountB.
Thread 2: transfer(accountB, accountA, 50) — acquires accountB, waits for accountA.
Both threads are blocked indefinitely. Neither will release what it holds until it acquires what it's waiting for.
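The hang is easy to reproduce in a self-contained harness. This is a sketch, not production code: plain Objects stand in for the accounts, the sleep widens the race window so both threads reliably grab their first lock, and daemon threads let the JVM exit despite the deadlock:

```java
public class DeadlockDemo {
    static final Object accountA = new Object();
    static final Object accountB = new Object();

    static void transfer(Object from, Object to) {
        synchronized (from) {
            pause(100); // widen the window so both threads hold their first lock
            synchronized (to) {
                // unreachable once the deadlock forms
            }
        }
    }

    static void pause(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Returns the states of the two transfer threads after they have had
    // ample time to block on their second lock.
    public static Thread.State[] run() throws InterruptedException {
        Thread t1 = new Thread(() -> transfer(accountA, accountB), "Thread-1");
        Thread t2 = new Thread(() -> transfer(accountB, accountA), "Thread-2");
        t1.setDaemon(true); // daemon threads: the JVM can still exit
        t2.setDaemon(true);
        t1.start();
        t2.start();
        Thread.sleep(500);
        return new Thread.State[] { t1.getState(), t2.getState() };
    }

    public static void main(String[] args) throws InterruptedException {
        for (Thread.State s : run()) {
            System.out.println(s); // BLOCKED once the deadlock forms
        }
    }
}
```

Both threads report BLOCKED and stay that way for as long as the process runs.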
Prevention strategy 1: lock ordering
Eliminating circular wait requires a consistent global ordering of lock acquisition. If all threads acquire locks in the same order, no circular dependency can form:
public void transfer(Account from, Account to, long amount) {
    Account first = from.getId() < to.getId() ? from : to;
    Account second = from.getId() < to.getId() ? to : from;
    synchronized (first) {
        synchronized (second) {
            from.debit(amount);
            to.credit(amount);
        }
    }
}
Locks are always acquired in ascending account ID order. Thread 1 and Thread 2 both try to acquire the lower-ID account first — one succeeds, the other blocks. When the winner completes, the other proceeds. No circular wait is possible.
Lock ordering works well when the set of locks is known at the point of acquisition. It breaks down when the locks are determined dynamically or when lock acquisition happens across multiple call frames where the ordering isn't visible locally.
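When the objects have no natural ordering key, System.identityHashCode can supply one, with a global tie-breaking lock for the rare hash collision. This is the pattern described in Java Concurrency in Practice; the class and method names here are illustrative:

```java
// Sketch of hash-based lock ordering for objects without a natural ID.
public class OrderedLocking {
    // Taken only on an identityHashCode collision, so ordering stays total
    private static final Object tieLock = new Object();

    public static void withBothLocks(Object a, Object b, Runnable action) {
        int ha = System.identityHashCode(a);
        int hb = System.identityHashCode(b);
        if (ha < hb) {
            synchronized (a) { synchronized (b) { action.run(); } }
        } else if (ha > hb) {
            synchronized (b) { synchronized (a) { action.run(); } }
        } else {
            // Collision: serialize through the tie-breaker before nesting
            synchronized (tieLock) {
                synchronized (a) { synchronized (b) { action.run(); } }
            }
        }
    }
}
```

Every thread that locks a given pair sees the same hash values, so every thread nests the two locks in the same order.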
Prevention strategy 2: tryLock with timeout
ReentrantLock.tryLock(timeout, unit) attempts to acquire a lock within a time limit. If it can't acquire within the limit, it returns false — the thread can release its held locks and retry or fail:
public boolean transfer(Account from, Account to, long amount)
        throws InterruptedException {
    while (true) {
        if (from.getLock().tryLock(50, TimeUnit.MILLISECONDS)) {
            try {
                if (to.getLock().tryLock(50, TimeUnit.MILLISECONDS)) {
                    try {
                        from.debit(amount);
                        to.credit(amount);
                        return true;
                    } finally {
                        to.getLock().unlock();
                    }
                }
            } finally {
                from.getLock().unlock();
            }
        }
        // Failed to acquire both — back off and retry
        Thread.sleep(ThreadLocalRandom.current().nextLong(1, 10));
    }
}
If tryLock on the second lock fails, the thread releases the first lock and retries after a random backoff. The randomness prevents livelock — two threads repeatedly failing and retrying in lockstep.
tryLock effectively removes the "no preemption" condition: a thread that cannot acquire the next lock voluntarily backs out and releases what it already holds. The tradeoff is complexity — the retry loop, backoff, and potential livelock require careful handling. Lock ordering is simpler when applicable; tryLock is the fallback when ordering is impractical.
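The example above assumes an Account that exposes its ReentrantLock. A minimal sketch of that shape (getLock, debit, and credit are assumptions for this article, not a standard API):

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical Account shape assumed by the tryLock example.
public class Account {
    private final ReentrantLock lock = new ReentrantLock();
    private long balance;

    public Account(long initialBalance) {
        this.balance = initialBalance;
    }

    public ReentrantLock getLock() {
        return lock;
    }

    // Callers must hold the lock before mutating the balance
    public void debit(long amount)  { balance -= amount; }
    public void credit(long amount) { balance += amount; }

    public long getBalance() { return balance; }
}
```

Exposing the lock couples callers to the locking scheme; it is what makes multi-account operations like transfer possible, at the cost of relying on caller discipline.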
Prevention strategy 3: reducing lock scope
Deadlocks often arise from holding locks across method calls into unknown code. The calling code holds a lock; the called method tries to acquire another lock in a different order:
// Dangerous — holding lockA while calling external code that may acquire locks
synchronized (lockA) {
    externalService.process(data); // may internally acquire lockB, then try to acquire lockA
}
The fix: release the lock before calling external code, or restructure to avoid holding locks across call boundaries:
// Prepare the data while holding the lock
DataSnapshot snapshot;
synchronized (lockA) {
    snapshot = prepareData(); // only reads own state
}
// Process outside the lock — no locks held during external call
externalService.process(snapshot);
This requires that the external call doesn't need the locked state to remain consistent during processing — which is often achievable by taking a snapshot of the relevant data before releasing the lock.
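A common instance of this pattern is listener notification: copy the listener list while holding the lock, then invoke the callbacks with no locks held. A sketch, with EventSource and Runnable listeners as illustrative stand-ins:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: copy-then-notify, so listener callbacks run with no locks held.
public class EventSource {
    private final Object lock = new Object();
    private final List<Runnable> listeners = new ArrayList<>();

    public void addListener(Runnable listener) {
        synchronized (lock) {
            listeners.add(listener);
        }
    }

    public void fire() {
        List<Runnable> snapshot;
        synchronized (lock) {
            snapshot = new ArrayList<>(listeners); // snapshot under the lock
        }
        for (Runnable listener : snapshot) {
            listener.run(); // unknown code runs outside the lock
        }
    }
}
```

A listener registered during fire() simply misses the current round, which is the usual, acceptable tradeoff for this pattern.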
Finding deadlocks with thread dumps
A deadlock produces a specific thread dump signature: threads in BLOCKED state, each waiting for a lock held by another thread in the cycle.
Take a thread dump with jstack or kill -3:
jstack <pid>
A deadlocked thread looks like:
"Thread-1" #14 prio=5 os_prio=0 tid=0x... nid=0x... waiting for monitor entry [0x...]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at TransferService.transfer(TransferService.java:8)
        - waiting to lock <0x000000076b373e60> (a Account)
        - locked <0x000000076b373e28> (a Account)
A thread showing both "waiting to lock" and "locked" entries holds one monitor while blocked on another. When the address it is waiting to lock appears as "locked" in another thread's trace — and vice versa — you have found the cycle.
The JVM's deadlock detector reports them directly:
Found one Java-level deadlock:
=============================
"Thread-1":
  waiting to lock monitor 0x000000076b373e60 (object 0x..., a Account),
  which is held by "Thread-2"
"Thread-2":
  waiting to lock monitor 0x000000076b373e28 (object 0x..., a Account),
  which is held by "Thread-1"
This output appears at the end of a jstack dump when a deadlock is detected. VisualVM and JMC (Java Mission Control) provide graphical thread dump analysis with deadlock highlighting.
Programmatic detection via JMX:
ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();
long[] deadlockedThreads = threadMXBean.findDeadlockedThreads();
if (deadlockedThreads != null) {
    ThreadInfo[] threadInfos = threadMXBean.getThreadInfo(deadlockedThreads, true, true);
    for (ThreadInfo info : threadInfos) {
        log.error("Deadlocked thread: {}", info);
    }
}
Run this check periodically in a monitoring thread. Alert when deadlocked threads are detected — the application is effectively frozen for those threads and won't recover without intervention.
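One way to run the check periodically is a small watchdog on a daemon scheduler thread. This is a sketch: the 10-second interval and the System.err alert are placeholders for whatever interval and alerting your monitoring stack uses:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of a periodic deadlock watchdog using ThreadMXBean.
public class DeadlockWatchdog {
    private final ThreadMXBean threads = ManagementFactory.getThreadMXBean();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "deadlock-watchdog");
                t.setDaemon(true); // must not keep the JVM alive
                return t;
            });

    public void start() {
        scheduler.scheduleAtFixedRate(this::check, 10, 10, TimeUnit.SECONDS);
    }

    // Returns the deadlocked thread IDs, or null when none are detected
    long[] check() {
        long[] ids = threads.findDeadlockedThreads();
        if (ids != null) {
            // Placeholder alert — wire this to your real alerting
            System.err.println("DEADLOCK: " + ids.length + " threads involved");
        }
        return ids;
    }
}
```

findDeadlockedThreads also covers ReentrantLock and other ownable synchronizers; the older findMonitorDeadlockedThreads only sees monitor locks.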
Database deadlocks — a different but related problem
Database deadlocks follow the same four conditions but involve row-level and table-level locks rather than Java locks. Two transactions each hold a lock on a row the other needs.
Preventing them requires the same principle — consistent lock ordering — applied to database operations:
// Always update accounts in consistent ID order within a transaction
@Transactional
public void transfer(long fromId, long toId, BigDecimal amount) {
    long firstId = Math.min(fromId, toId);
    long secondId = Math.max(fromId, toId);
    Account first = accountRepository.findByIdWithLock(firstId);   // SELECT FOR UPDATE
    Account second = accountRepository.findByIdWithLock(secondId); // SELECT FOR UPDATE
    // ... update in consistent order
}
SELECT FOR UPDATE acquires a row-level lock in the database. Acquiring the rows in consistent ID order prevents the circular wait.
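findByIdWithLock is not a Spring Data built-in; with Spring Data JPA it could be declared as a pessimistic-lock query. A sketch, assuming an Account JPA entity with a numeric id:

```java
// Possible declaration of the findByIdWithLock method used above.
public interface AccountRepository extends JpaRepository<Account, Long> {

    // PESSIMISTIC_WRITE translates to SELECT ... FOR UPDATE on most databases
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    @Query("select a from Account a where a.id = :id")
    Account findByIdWithLock(@Param("id") long id);
}
```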
Database deadlocks are detected by the database itself, which picks a victim transaction to roll back. The victim receives a deadlock exception (org.springframework.dao.DeadlockLoserDataAccessException in Spring). The correct response: retry the transaction with a short backoff. Most ORM frameworks and Spring's @Retryable handle this:
@Retryable(value = DeadlockLoserDataAccessException.class,
           maxAttempts = 3,
           backoff = @Backoff(delay = 100, multiplier = 2))
@Transactional
public void transfer(long fromId, long toId, BigDecimal amount) {
    // ...
}
Structured concurrency as architectural prevention
Java 21's StructuredTaskScope (a preview API) prevents a class of deadlocks that arise from unstructured concurrent operations. In structured concurrency, subtasks are scoped to a parent task — the parent cannot complete until all subtasks complete or are cancelled:
try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
    var profile = scope.fork(() -> fetchProfile(userId)); // Subtask<UserProfile>
    var orders = scope.fork(() -> fetchOrders(userId));   // Subtask<OrderHistory>
    scope.join();          // block until both subtasks finish
    scope.throwIfFailed(); // propagate the first subtask failure
    return new UserDashboard(profile.get(), orders.get());
}
Because every subtask completes or is cancelled before the scope exits, task lifetimes nest cleanly: a subtask cannot outlive its parent and keep holding resources the parent is waiting on from a different scope. This doesn't eliminate all deadlock scenarios, but it eliminates the class that arises from ad hoc task composition and fan-out patterns.
The design principle that prevents most deadlocks
Most production deadlocks are preventable at design time with one rule: acquire locks in a consistent global order, hold them for the minimum necessary scope, and never call unknown code while holding a lock.
The "never call unknown code while holding a lock" rule is the most frequently violated. Callbacks, listeners, event handlers, and service calls inside synchronized blocks are all calls into code that may acquire additional locks in an unknown order. Taking a snapshot of the necessary state and releasing the lock before making the external call eliminates this entire class of deadlock.
When lock ordering is impractical and external calls are unavoidable inside locked sections, tryLock with timeout and retry is the backstop. It converts a potential deadlock into a recoverable contention scenario — slower, more complex, but correct.