Spring Boot and Database Connection Leaks — How They Happen and How to Find Them

by Eric Hanson, Backend Developer at Clean Systems Consulting

What a connection leak actually is

A connection leak occurs when a database connection is acquired from the pool and never returned. The connection remains allocated — it's not available for other requests — but no useful work is being done on it.

HikariCP tracks connection state: idle (in pool, available), active (checked out, in use), and pending (waiting for a connection). A leak manifests as active count growing steadily while idle count drops toward zero. When the pool is fully active and new requests can't acquire a connection within connectionTimeout, they fail with:

Unable to acquire JDBC Connection
java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30000ms

By this point, every new request that needs the database is failing. The service is effectively down for database operations.
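
For reference, both settings involved live under spring.datasource.hikari. A minimal sketch with illustrative values, not recommendations:

spring:
  datasource:
    hikari:
      maximum-pool-size: 20      # upper bound on connections HikariCP will open
      connection-timeout: 30000  # ms a request waits for a connection before failing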

How connection leaks form in Spring Boot

Unclosed connections in manual JDBC

The most obvious source — JDBC connections obtained manually and not closed:

// Leak — connection never returned to pool on exception
public List<Order> findOrders(String customerId) throws SQLException {
    Connection conn = dataSource.getConnection();
    PreparedStatement ps = conn.prepareStatement("SELECT * FROM orders WHERE customer_id = ?");
    ps.setString(1, customerId);
    ResultSet rs = ps.executeQuery();
    List<Order> orders = mapResults(rs); // if this throws, conn is never closed
    conn.close(); // only reached if no exception; rs and ps are never closed at all
    return orders;
}

// Fixed — try-with-resources
public List<Order> findOrders(String customerId) throws SQLException {
    try (Connection conn = dataSource.getConnection();
         PreparedStatement ps = conn.prepareStatement("SELECT * FROM orders WHERE customer_id = ?")) {
        ps.setString(1, customerId);
        try (ResultSet rs = ps.executeQuery()) {
            return mapResults(rs);
        }
    }
}

try-with-resources closes ResultSet, PreparedStatement, and Connection in reverse order, including on exceptions. This is the correct pattern for manual JDBC. JdbcTemplate handles this internally — using it eliminates this entire category of leak.
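
For comparison, the same query with JdbcTemplate. A minimal sketch, assuming an injected JdbcTemplate and a hypothetical mapRow helper that converts one ResultSet row into an Order:

public List<Order> findOrders(String customerId) {
    // JdbcTemplate acquires, uses, and closes the connection internally,
    // including on exceptions, so this call site cannot leak
    return jdbcTemplate.query(
            "SELECT * FROM orders WHERE customer_id = ?",
            (rs, rowNum) -> mapRow(rs),
            customerId);
}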

@Transactional scope and unexpected connection retention

@Transactional acquires a connection when the transaction begins and releases it when the transaction commits or rolls back. With Spring Boot's default Hibernate setup, acquisition happens as soon as the transactional method is entered; it is deferred to the first statement only if hibernate.connection.provider_disables_autocommit is enabled (or, with plain JDBC, a LazyConnectionDataSourceProxy is used). Code that holds a transaction open while doing non-database work retains the connection for the entire duration:

@Transactional
public void processOrder(Long orderId) {
    Order order = orderRepository.findById(orderId).orElseThrow(); // connection already held since method entry
    
    // 5-second external HTTP call — connection held while waiting
    PaymentResult payment = paymentGateway.charge(order.getTotal()); // holding connection for 5s
    
    order.setPaymentId(payment.getId());
    orderRepository.save(order); // connection still held
} // connection released here

During the 5-second HTTP call, the connection is in the pool's "active" set — unavailable to other requests. With a pool of 20 connections and 20 concurrent requests making this call, the pool exhausts in seconds.

The fix: perform external calls outside the transaction boundary:

// Orchestrator method: not @Transactional, so no connection is held during the external call
public void processOrder(Long orderId) {
    // Load data in a short transaction
    Order order = orderTxService.loadOrder(orderId);

    // External call outside any transaction — no connection held
    PaymentResult payment = paymentGateway.charge(order.getTotal());

    // Short transaction for the write
    orderTxService.updateOrderPayment(orderId, payment.getId());
}

// The transactional methods live on a separate bean. @Transactional is proxy-based:
// it is silently ignored on private methods, and self-invocation within the same
// class bypasses the proxy, so neither transaction would start.
@Service
public class OrderTxService {

    @Transactional(readOnly = true)
    public Order loadOrder(Long orderId) {
        return orderRepository.findById(orderId).orElseThrow();
    }

    @Transactional
    public void updateOrderPayment(Long orderId, String paymentId) {
        Order order = orderRepository.findById(orderId).orElseThrow();
        order.setPaymentId(paymentId);
        orderRepository.save(order);
    }
}

Two short transactions with no connection held during the external call. The connection pool handles many more concurrent requests with the same pool size.
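
The same split can be done programmatically, which sidesteps the proxy restrictions entirely. A sketch with Spring's TransactionTemplate, assuming one is injected:

public void processOrder(Long orderId) {
    // Short transaction for the read
    Order order = transactionTemplate.execute(
            status -> orderRepository.findById(orderId).orElseThrow());

    // No transaction, no connection held during the external call
    PaymentResult payment = paymentGateway.charge(order.getTotal());

    // Short transaction for the write
    transactionTemplate.executeWithoutResult(status -> {
        Order o = orderRepository.findById(orderId).orElseThrow();
        o.setPaymentId(payment.getId());
        orderRepository.save(o);
    });
}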

Transaction propagation creating nested connections

@Transactional with REQUIRES_NEW propagation suspends the outer transaction and acquires a new connection:

@Transactional
public void processOrder(Order order) {
    orderRepository.save(order);
    auditService.logOrderCreated(order); // REQUIRES_NEW — acquires second connection
    inventoryService.reserve(order);     // uses outer transaction's connection
}

@Transactional(propagation = Propagation.REQUIRES_NEW)
public void logOrderCreated(Order order) {
    auditRepository.save(new AuditEntry(order.getId(), "CREATED"));
    // second connection held until this method returns
}

The outer transaction holds connection 1. REQUIRES_NEW suspends it and acquires connection 2 for the audit log. Both connections are held simultaneously. If processOrder is called N times concurrently, up to 2N connections are needed — the pool supports only half the expected concurrency.

REQUIRES_NEW is appropriate when the inner transaction must commit independently of the outer (audit logs that should persist even if the outer transaction rolls back). Use it deliberately: it doubles connection consumption, and if every pooled connection is held by an outer transaction blocked waiting to acquire an inner one, the pool can deadlock outright.
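
If the audit entry does not need to survive a rollback of the outer transaction, the default REQUIRED propagation joins it and holds no extra connection:

// Default propagation (REQUIRED) joins the caller's transaction, so no second connection is acquired
@Transactional
public void logOrderCreated(Order order) {
    auditRepository.save(new AuditEntry(order.getId(), "CREATED"));
}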

LazyInitializationException workarounds that cause leaks

A LazyInitializationException — accessing a lazy-loaded association outside a transaction — is often "fixed" by extending the transaction scope rather than loading the data correctly:

// "Fix" that extends transaction to cover view rendering
@Transactional  // added to controller to avoid LazyInitializationException
@GetMapping("/orders/{id}")
public ResponseEntity<OrderResponse> getOrder(@PathVariable Long id) {
    Order order = orderService.findById(id);
    return ResponseEntity.ok(OrderResponse.from(order)); // mapping accesses lazy associations
}

The controller now holds a connection open while serializing the response — potentially for a long time if the serializer accesses multiple lazy associations or if the response is large. The correct fix is loading required associations in the service layer with JOIN FETCH or @EntityGraph, not extending the transaction to cover the controller.
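
A sketch of that service-layer fix, assuming a hypothetical items association on Order:

public interface OrderRepository extends JpaRepository<Order, Long> {

    // JOIN FETCH loads the items collection in the same query,
    // so nothing lazy is touched after the transaction ends
    @Query("SELECT o FROM Order o JOIN FETCH o.items WHERE o.id = :id")
    Optional<Order> findByIdWithItems(@Param("id") Long id);
}

Alternatively, @EntityGraph(attributePaths = "items") on a derived query method achieves the same result without writing JPQL.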

Connection leak from exception in async context

Connections acquired in @Async methods or CompletableFuture chains that throw exceptions may not be released if the transaction management is misconfigured:

@Async
@Transactional  // @Transactional on @Async requires careful configuration
public CompletableFuture<Void> asyncProcess(Long id) {
    // If this throws and no transaction synchronization is registered,
    // the connection may not be released
    processRecord(id);
    return CompletableFuture.completedFuture(null);
}

Spring's transaction synchronization is thread-bound — the connection is associated with the thread that started the transaction. An @Async method runs on an executor thread, so its transaction starts there and never participates in the caller's transaction; stacking both annotations on one method also makes the behavior depend on proxy advice ordering. Test this combination explicitly; don't assume it works.
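
One arrangement that keeps the semantics unambiguous is letting @Async stand alone and delegating the transactional work to a separate bean, so the transaction begins and completes entirely on the executor thread. A sketch, with RecordProcessor as a hypothetical name:

@Async
public CompletableFuture<Void> asyncProcess(Long id) {
    // The transaction starts and completes inside this call,
    // on the executor thread running this method
    recordProcessor.processRecord(id);
    return CompletableFuture.completedFuture(null);
}

@Service
public class RecordProcessor {

    @Transactional  // applied through the proxy as usual, on the async thread
    public void processRecord(Long id) {
        // database work
    }
}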

Finding leaks with HikariCP's leak detection

leakDetectionThreshold is HikariCP's built-in leak detector. It logs a stack trace when a connection is held longer than the threshold:

spring:
  datasource:
    hikari:
      leak-detection-threshold: 30000  # 30 seconds

When a connection has been checked out for longer than 30 seconds, HikariCP logs a warning with the stack trace captured when the connection was acquired:

WARN  c.z.h.p.ProxyLeakTask - Connection leak detection triggered for
  org.postgresql.jdbc.PgConnection@5a3bc7ed on thread http-nio-8080-exec-3, stack trace follows
java.lang.Exception: Apparent connection leak detected
  at com.example.OrderService.processOrder(OrderService.java:47)
  at com.example.OrderController.getOrder(OrderController.java:23)
  ...

The stack trace shows where the connection was acquired — directly identifying the code path holding it. Detection only logs; the connection is not closed or reclaimed. Set the threshold to your expected maximum query duration plus reasonable overhead (10–30 seconds is typical; HikariCP enforces a minimum of 2 seconds). If queries legitimately take longer, set it higher — false positives on batch operations are noise.

This is the most useful single configuration for diagnosing connection issues. Enable it in all environments, including production.

Finding leaks with metrics

HikariCP exposes metrics via Micrometer (auto-configured with Spring Boot Actuator):

hikaricp.connections.active     — connections currently checked out
hikaricp.connections.idle       — connections available in pool
hikaricp.connections.pending    — threads waiting for a connection
hikaricp.connections.timeout    — total timeout events since startup
hikaricp.connections.acquire    — time waiting to acquire (histogram)
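
The metrics endpoint is not exposed over HTTP by default. A minimal exposure config, assuming spring-boot-starter-actuator and micrometer-registry-prometheus are on the classpath:

management:
  endpoints:
    web:
      exposure:
        include: health,metrics,prometheus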

A leak manifests as:

  • hikaricp.connections.active trending upward over time without returning to baseline
  • hikaricp.connections.idle trending toward zero
  • hikaricp.connections.pending non-zero during periods of normal load

Alert on hikaricp.connections.timeout — any timeout event is a pool exhaustion symptom. Alert on hikaricp.connections.pending sustained above zero during expected load — it means the pool is undersized or connections are being held too long.

Grafana dashboard query for connection utilization:

hikaricp_connections_active{pool="HikariPool-1"} 
  / 
hikaricp_connections_max{pool="HikariPool-1"}

Alert when this ratio exceeds 0.8 (80% pool utilization) — gives time to investigate before exhaustion.
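
The same signals as Prometheus alert rules, a sketch assuming Micrometer's Prometheus naming (counters get a _total suffix); alert names, thresholds, and durations are illustrative:

groups:
  - name: hikaricp
    rules:
      - alert: HikariPoolNearExhaustion
        expr: hikaricp_connections_active / hikaricp_connections_max > 0.8
        for: 5m
      - alert: HikariConnectionTimeouts
        expr: increase(hikaricp_connections_timeout_total[5m]) > 0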

The diagnostic sequence

When connection pool exhaustion is suspected:

  1. Check metrics. Is active count growing? Is idle count dropping? Is pending non-zero?

  2. Enable leakDetectionThreshold if not already set. Reproduce the traffic pattern. Stack traces appear in logs within the threshold window.

  3. Check REQUIRES_NEW usage. Every REQUIRES_NEW in a hot path doubles connection consumption. Audit their necessity.

  4. Check @Transactional scope. Methods annotated with @Transactional that call external services (HTTP, message queues) hold connections during those calls.

  5. Check LazyInitializationException mitigations. Controller-level @Transactional to avoid LazyInitializationException is a connection retention smell.

  6. Thread dump at peak active count. jstack <pid> shows which threads are in active database operations. Threads sitting in non-database code while holding a transaction are the leak source.

Connection leaks are not self-healing. The pool exhausts gradually — the problem is always worse by the time it's noticed. Leak detection and pool metrics in production catch the pattern early, when the fix is a code change rather than an emergency restart.
