Spring Boot and Database Connection Leaks — How They Happen and How to Find Them
by Eric Hanson, Backend Developer at Clean Systems Consulting
What a connection leak actually is
A connection leak occurs when a database connection is acquired from the pool and never returned. The connection remains allocated — it's not available for other requests — but no useful work is being done on it.
HikariCP tracks connection state: idle (in pool, available), active (checked out, in use), and pending (waiting for a connection). A leak manifests as active count growing steadily while idle count drops toward zero. When the pool is fully active and new requests can't acquire a connection within connectionTimeout, they fail with:
Unable to acquire JDBC Connection
java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30000ms.
By this point, every new request that needs the database is failing. The service is effectively down for database operations.
How connection leaks form in Spring Boot
Unclosed connections in manual JDBC
The most obvious source — JDBC connections obtained manually and not closed:
// Leak — connection never returned to pool on exception
public List<Order> findOrders(String customerId) throws SQLException {
    Connection conn = dataSource.getConnection();
    PreparedStatement ps = conn.prepareStatement("SELECT * FROM orders WHERE customer_id = ?");
    ps.setString(1, customerId);
    ResultSet rs = ps.executeQuery();
    List<Order> orders = mapResults(rs); // if this throws, conn is never closed
    conn.close(); // only reached if no exception
    return orders;
}
// Fixed — try-with-resources
public List<Order> findOrders(String customerId) throws SQLException {
    try (Connection conn = dataSource.getConnection();
         PreparedStatement ps = conn.prepareStatement("SELECT * FROM orders WHERE customer_id = ?")) {
        ps.setString(1, customerId);
        try (ResultSet rs = ps.executeQuery()) {
            return mapResults(rs);
        }
    }
}
try-with-resources closes ResultSet, PreparedStatement, and Connection in reverse order, including on exceptions. This is the correct pattern for manual JDBC. JdbcTemplate handles this internally — using it eliminates this entire category of leak.
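The reverse-order, exception-safe close behavior can be demonstrated with plain AutoCloseable stubs. The Resource class below is a hypothetical stand-in for Connection and PreparedStatement, not a real JDBC type:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in resources (not real JDBC types) that record the
// order in which they are closed.
public class CloseOrderDemo {
    static final List<String> closed = new ArrayList<>();

    static class Resource implements AutoCloseable {
        final String name;
        Resource(String name) { this.name = name; }
        @Override public void close() { closed.add(name); }
    }

    public static void main(String[] args) {
        try (Resource conn = new Resource("connection");
             Resource stmt = new Resource("statement")) {
            throw new IllegalStateException("query failed"); // simulated mapping failure
        } catch (IllegalStateException e) {
            // by the time control reaches here, both resources are already closed
        }
        // Closed innermost-first, even though an exception was thrown:
        System.out.println(closed); // [statement, connection]
    }
}
```

The same guarantee is what makes the try-with-resources version of findOrders leak-free: the Connection is returned to the pool on every path out of the method, including exceptional ones.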
@Transactional scope and unexpected connection retention
@Transactional holds a connection for the duration of the transaction and releases it only when the transaction commits or rolls back. (With Hibernate's default delayed acquisition, the connection is fetched at the first database operation inside the transaction, not when the method is entered; a plain DataSourceTransactionManager acquires it when the transaction begins.) Either way, code that holds a transaction open while doing non-database work retains the connection for the entire remaining duration:
@Transactional
public void processOrder(Long orderId) {
    Order order = orderRepository.findById(orderId).orElseThrow(); // connection acquired here
    // 5-second external HTTP call — connection held while waiting
    PaymentResult payment = paymentGateway.charge(order.getTotal()); // holding connection for 5s
    order.setPaymentId(payment.getId());
    orderRepository.save(order); // connection still held
} // connection released here
During the 5-second HTTP call, the connection is in the pool's "active" set — unavailable to other requests. With a pool of 20 connections and 20 concurrent requests making this call, the pool exhausts in seconds.
The fix: perform external calls outside the transaction boundary:
public void processOrder(Long orderId) {
    // Load data in a short transaction
    Order order = orderTxService.loadOrder(orderId);
    // External call outside any transaction — no connection held
    PaymentResult payment = paymentGateway.charge(order.getTotal());
    // Short transaction for the write
    orderTxService.updateOrderPayment(orderId, payment.getId());
}

// In a separate bean (OrderTxService). The transactional methods must be
// public and called through the Spring proxy: @Transactional is ignored on
// private methods and bypassed entirely on self-invocation within one class.
@Transactional(readOnly = true)
public Order loadOrder(Long orderId) {
    return orderRepository.findById(orderId).orElseThrow();
}

@Transactional
public void updateOrderPayment(Long orderId, String paymentId) {
    Order order = orderRepository.findById(orderId).orElseThrow();
    order.setPaymentId(paymentId);
    orderRepository.save(order);
}
Two short transactions with no connection held during the external call. The connection pool handles many more concurrent requests with the same pool size.
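If the two short transactions must live in the same class, Spring's programmatic TransactionTemplate avoids the self-invocation pitfall (@Transactional is not applied to private methods or to calls a bean makes on itself). A sketch, assuming the surrounding class is a Spring bean with transactionTemplate, orderRepository, and paymentGateway injected:

```java
// Sketch using Spring's programmatic TransactionTemplate; the injected
// fields (transactionTemplate, orderRepository, paymentGateway) are assumed.
public void processOrder(Long orderId) {
    // Short transaction 1: load the order
    Order order = transactionTemplate.execute(status ->
            orderRepository.findById(orderId).orElseThrow());

    // No transaction active — no connection held during the external call
    PaymentResult payment = paymentGateway.charge(order.getTotal());

    // Short transaction 2: persist the result
    transactionTemplate.executeWithoutResult(status -> {
        Order o = orderRepository.findById(orderId).orElseThrow();
        o.setPaymentId(payment.getId());
        orderRepository.save(o);
    });
}
```

The transaction boundaries are explicit in the code rather than implied by proxying, which makes the connection-holding windows easy to audit.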
Transaction propagation creating nested connections
@Transactional with REQUIRES_NEW propagation suspends the outer transaction and acquires a new connection:
@Transactional
public void processOrder(Order order) {
    orderRepository.save(order);
    auditService.logOrderCreated(order); // REQUIRES_NEW — acquires second connection
    inventoryService.reserve(order); // uses outer transaction's connection
}

@Transactional(propagation = Propagation.REQUIRES_NEW)
public void logOrderCreated(Order order) {
    auditRepository.save(new AuditEntry(order.getId(), "CREATED"));
    // second connection held until this method returns
}
The outer transaction holds connection 1. REQUIRES_NEW acquires connection 2 for the audit log. Both connections are held simultaneously. If processOrder is called N times concurrently, up to 2N connections are needed — the pool exhausts at half the expected throughput.
REQUIRES_NEW is appropriate when the inner transaction must commit independently of the outer (audit logs that should persist even if the outer transaction rolls back). Use it deliberately, knowing it doubles connection consumption.
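The doubled consumption can do worse than halve throughput: if every pooled connection is held by an outer transaction, no inner REQUIRES_NEW transaction can ever acquire its second connection, and the pool deadlocks until timeouts fire. A minimal simulation, using a Semaphore as a hypothetical 2-connection pool (not HikariCP code):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Simulates REQUIRES_NEW pool deadlock: a 2-permit semaphore stands in for a
// 2-connection pool, and each of 2 concurrent requests needs an outer
// connection plus a second one for the inner transaction.
public class PoolDeadlockDemo {

    static List<Boolean> simulate() {
        Semaphore pool = new Semaphore(2);           // "pool" of 2 connections
        CountDownLatch bothHoldOuter = new CountDownLatch(2);
        ExecutorService exec = Executors.newFixedThreadPool(2);
        try {
            Callable<Boolean> request = () -> {
                pool.acquire();                      // outer transaction's connection
                bothHoldOuter.countDown();
                bothHoldOuter.await();               // both requests now hold one connection
                // REQUIRES_NEW: try to acquire a second connection, with a timeout
                boolean acquired = pool.tryAcquire(500, TimeUnit.MILLISECONDS);
                if (acquired) pool.release();        // release inner connection
                pool.release();                      // release outer connection
                return acquired;
            };
            Future<Boolean> a = exec.submit(request);
            Future<Boolean> b = exec.submit(request);
            return List.of(a.get(), b.get());
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            exec.shutdown();
        }
    }

    public static void main(String[] args) {
        // Both inner acquisitions time out: the two outer transactions
        // already hold the entire pool.
        System.out.println("inner acquired: " + simulate()); // inner acquired: [false, false]
    }
}
```

In a real application the equivalent failure shows up as connectionTimeout errors inside the REQUIRES_NEW method while every outer transaction sits blocked.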
LazyInitializationException workarounds that cause leaks
A LazyInitializationException — accessing a lazy-loaded association outside a transaction — is often "fixed" by extending the transaction scope rather than loading the data correctly:
// "Fix" that extends transaction to cover view rendering
@Transactional // added to controller to avoid LazyInitializationException
public ResponseEntity<OrderResponse> getOrder(@PathVariable Long id) {
Order order = orderService.findById(id);
return ResponseEntity.ok(OrderResponse.from(order)); // mapping accesses lazy associations
}
The controller now holds a connection open while serializing the response — potentially for a long time if the serializer accesses multiple lazy associations or if the response is large. The correct fix is loading required associations in the service layer with JOIN FETCH or @EntityGraph, not extending the transaction to cover the controller.
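A sketch of the repository-level fix, assuming an Order entity with a lazy items collection (entity and method names are hypothetical):

```java
// Both methods load the association inside the repository call, so the
// service can close its transaction before the controller serializes.
public interface OrderRepository extends JpaRepository<Order, Long> {

    // Explicit JPQL fetch join
    @Query("select o from Order o join fetch o.items where o.id = :id")
    Optional<Order> findByIdWithItems(@Param("id") Long id);

    // Equivalent declarative form
    @EntityGraph(attributePaths = "items")
    Optional<Order> findWithItemsById(Long id);
}
```

Either way the entity comes back fully initialized, the DTO mapping needs no open session, and the connection is returned as soon as the service-layer transaction commits.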
Connection leak from exception in async context
Connections acquired in @Async methods or CompletableFuture chains that throw exceptions may not be released if the transaction management is misconfigured:
@Async
@Transactional // @Transactional on @Async requires careful configuration
public CompletableFuture<Void> asyncProcess(Long id) {
    // If this throws and no transaction synchronization is registered,
    // the connection may not be released
    processRecord(id);
    return CompletableFuture.completedFuture(null);
}
Spring's transaction synchronization is thread-bound — the connection is associated with the thread that started the transaction. @Async runs on a different thread, so the caller's transaction never propagates into the async method; @Transactional on the @Async method starts its own transaction on the executor thread, and only if the proxying is configured so the async call still passes through the transaction interceptor. Test this combination explicitly; don't assume it works.
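The thread-bound behavior can be illustrated with a plain ThreadLocal, a deliberate simplification of what Spring's TransactionSynchronizationManager does internally (the names here are illustrative, not Spring code):

```java
import java.util.concurrent.CompletableFuture;

// A ThreadLocal bound on the caller's thread is invisible on the async
// thread — the same reason an @Async method does not see the caller's
// transaction. Simplified illustration, not Spring internals.
public class ThreadBoundDemo {
    static final ThreadLocal<String> boundConnection = new ThreadLocal<>();

    static String[] demo() {
        boundConnection.set("conn-1");           // "transaction" bound to this thread
        String onCaller = boundConnection.get(); // visible here
        String onAsync = CompletableFuture
                .supplyAsync(boundConnection::get) // runs on a pool thread
                .join();                           // not visible there: null
        return new String[] { onCaller, onAsync };
    }

    public static void main(String[] args) {
        String[] r = demo();
        System.out.println("caller sees: " + r[0] + ", async sees: " + r[1]);
        // prints: caller sees: conn-1, async sees: null
    }
}
```

Anything keyed to the calling thread, including the bound connection, simply does not exist on the executor thread.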
Finding leaks with HikariCP's leak detection
leakDetectionThreshold is HikariCP's built-in leak detector. It logs a stack trace when a connection is held longer than the threshold:
spring:
  datasource:
    hikari:
      leak-detection-threshold: 30000 # 30 seconds
When a connection is held for 30 seconds, HikariCP logs:
WARN c.z.h.p.ProxyLeakTask - Connection leak detection triggered for conn0, stack trace follows
java.lang.Exception: Apparent connection leak detected
    at com.example.OrderService.processOrder(OrderService.java:47)
    at com.example.OrderController.getOrder(OrderController.java:23)
    ...
The stack trace shows where the connection was acquired — directly identifying the code path holding it. Set the threshold to your expected maximum query duration plus reasonable overhead (10–30 seconds is typical; HikariCP ignores values below 2000ms). If queries legitimately take longer, set it higher — false positives on batch operations are noise.
This is the most useful single configuration for diagnosing connection issues. Enable it in all environments, including production.
Finding leaks with metrics
HikariCP exposes metrics via Micrometer (auto-configured with Spring Boot Actuator):
- hikaricp.connections.active — connections currently checked out
- hikaricp.connections.idle — connections available in pool
- hikaricp.connections.pending — threads waiting for a connection
- hikaricp.connections.timeout — total timeout events since startup
- hikaricp.connections.acquire — time spent waiting to acquire a connection (timer)
A leak manifests as:
- hikaricp.connections.active trending upward over time without returning to baseline
- hikaricp.connections.idle trending toward zero
- hikaricp.connections.pending non-zero during periods of normal load
Alert on hikaricp.connections.timeout — any timeout event is a pool exhaustion symptom. Alert on hikaricp.connections.pending sustained above zero during expected load — it means the pool is undersized or connections are being held too long.
Grafana dashboard query for connection utilization:
hikaricp_connections_active{pool="HikariPool-1"}
/
hikaricp_connections_max{pool="HikariPool-1"}
Alert when this ratio exceeds 0.8 (80% pool utilization) — gives time to investigate before exhaustion.
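As a Prometheus alerting rule, that threshold might look like the following sketch. The metric names assume the default Micrometer-to-Prometheus renaming (dots become underscores), and the pool label value is an assumption — check what your Actuator endpoint actually exports:

```yaml
groups:
  - name: hikaricp
    rules:
      - alert: ConnectionPoolNearExhaustion
        expr: >
          hikaricp_connections_active{pool="HikariPool-1"}
            / hikaricp_connections_max{pool="HikariPool-1"} > 0.8
        for: 2m                # sustained, not a momentary spike
        labels:
          severity: warning
        annotations:
          summary: "HikariCP pool above 80% utilization for 2 minutes"
```

The `for: 2m` clause keeps brief bursts from paging anyone while still firing well before the pool exhausts.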
The diagnostic sequence
When connection pool exhaustion is suspected:
- Check metrics. Is active count growing? Is idle count dropping? Is pending non-zero?
- Enable leakDetectionThreshold if not already set. Reproduce the traffic pattern. Stack traces appear in logs within the threshold window.
- Check REQUIRES_NEW usage. Every REQUIRES_NEW in a hot path doubles connection consumption. Audit their necessity.
- Check @Transactional scope. Methods annotated with @Transactional that call external services (HTTP, message queues) hold connections during those calls.
- Check LazyInitializationException mitigations. Controller-level @Transactional to avoid LazyInitializationException is a connection retention smell.
- Thread dump at peak active count. jstack <pid> shows which threads are in active database operations. Threads sitting in non-database code while holding a transaction are the leak source.
Connection leaks are not self-healing. The pool exhausts gradually — the problem is always worse by the time it's noticed. Leak detection and pool metrics in production catch the pattern early, when the fix is a code change rather than an emergency restart.