Asynchronous Java With CompletableFuture — Patterns That Stay Readable
by Eric Hanson, Backend Developer at Clean Systems Consulting
What CompletableFuture solves and what it doesn't
Future<T> from Java 5 represented an asynchronous result but provided no composition — you could only block and wait. CompletableFuture<T> adds non-blocking composition: chain transformations, combine multiple futures, handle errors without blocking, and attach callbacks that run when a value becomes available.
What it doesn't solve: CPU-bound parallelism (use ForkJoinPool), backpressure (use reactive streams), or structured concurrency (use Java 21's StructuredTaskScope). CompletableFuture is the right tool for I/O pipelines where you want to compose asynchronous operations without blocking threads.
The core pipeline methods
Three methods cover most composition needs:
CompletableFuture<Order> pipeline = CompletableFuture
.supplyAsync(() -> fetchOrder(orderId), fetchExecutor) // start with a value
.thenApply(order -> enrich(order)) // transform — same thread
.thenApplyAsync(order -> validate(order), validationExecutor) // transform — different executor
.thenCompose(order -> chargePayment(order)); // flat-map — returns CF
thenApply runs the transformation synchronously with no executor hop — on whichever thread completed the previous stage, or on the calling thread if that stage was already complete. Use it for cheap, non-blocking transformations.
thenApplyAsync runs the transformation on the specified executor (or ForkJoinPool.commonPool() if omitted). Use it for I/O or CPU-bound transformations that shouldn't run on the completing thread.
thenCompose flat-maps — the function returns a CompletableFuture<U> rather than a U. This is the critical distinction for chaining async operations. Using thenApply when the function returns a CompletableFuture produces a CompletableFuture<CompletableFuture<U>> — nested futures that require an extra join() to unwrap.
// Wrong — produces CompletableFuture<CompletableFuture<PaymentResult>>
.thenApply(order -> chargePaymentAsync(order))
// Correct — produces CompletableFuture<PaymentResult>
.thenCompose(order -> chargePaymentAsync(order))
Error handling — three methods with different semantics
CompletableFuture has three error handling methods that are superficially similar but behave differently:
exceptionally — handles exceptions and provides a fallback value. The downstream pipeline continues with the fallback:
CompletableFuture<Order> result = fetchOrder(orderId)
.exceptionally(ex -> {
log.warn("Fetch failed, using cached order", ex);
return cachedOrder(orderId); // fallback value
});
exceptionally only runs if the stage completed exceptionally. If it completed normally, the function is skipped.
handle — runs regardless of success or failure, receives both the value and the exception (one will be null):
CompletableFuture<ProcessingResult> result = fetchOrder(orderId)
.handle((order, ex) -> {
if (ex != null) {
return ProcessingResult.failed(ex.getMessage());
}
return ProcessingResult.success(process(order));
});
handle is the right tool when you want to transform both the success and failure cases into a uniform result type — common at API boundaries where you need to convert internal exceptions to response objects.
whenComplete — runs a side effect regardless of outcome, but does not transform the value. The original value or exception continues downstream:
CompletableFuture<Order> result = fetchOrder(orderId)
.whenComplete((order, ex) -> {
if (ex != null) {
metrics.incrementFailureCount();
} else {
metrics.incrementSuccessCount();
}
// original result continues downstream unchanged
});
whenComplete is for logging, metrics, and cleanup — it observes the result without changing it.
The failure mode to avoid: attaching exceptionally for recovery but discarding its return value. Every stage method returns a new CompletableFuture — exceptionally does not modify the future it is called on. If you write original.exceptionally(ex -> fallback) and then hand original, not the returned future, to downstream code, that code still sees the exception. A related subtlety: if the fallback function itself throws, the returned future completes exceptionally, and the next chained exceptionally receives that exception wrapped in CompletionException.
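The discarded-return pitfall can be demonstrated in a few lines (a minimal sketch; the failing supplier stands in for a real fetch):

```java
import java.util.concurrent.*;

public class ExceptionallyPitfall {
    public static void main(String[] args) {
        CompletableFuture<String> original = CompletableFuture.supplyAsync(() -> {
            throw new IllegalStateException("fetch failed");
        });

        // Return value discarded — this line has no effect on `original`
        original.exceptionally(ex -> "fallback");

        boolean originalStillFails;
        try {
            original.join();
            originalStillFails = false;
        } catch (CompletionException e) {
            originalStillFails = true; // original is unchanged — still exceptional
        }

        // The recovery exists only on the NEW future returned by exceptionally
        CompletableFuture<String> recovered = original.exceptionally(ex -> "fallback");
        System.out.println(originalStillFails + " " + recovered.join());
    }
}
```

Prints "true fallback": the original future stays failed, while the future returned by exceptionally carries the recovery.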
Combining multiple futures
thenCombine — run two futures concurrently and combine their results when both complete:
CompletableFuture<User> userFuture = fetchUser(userId);
CompletableFuture<Inventory> invFuture = fetchInventory(productId);
CompletableFuture<OrderRequest> combined = userFuture.thenCombine(
invFuture,
(user, inventory) -> new OrderRequest(user, inventory)
);
Both futures run concurrently. thenCombine doesn't start invFuture — it's already running. This is a pattern detail worth emphasizing: CompletableFuture.supplyAsync() starts executing immediately when called. Combining futures is about joining results, not sequencing starts.
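A quick way to convince yourself of the start-timing detail — sleepAndReturn below is a stand-in for real I/O:

```java
import java.util.concurrent.*;

public class CombineStartsEarly {
    static int sleepAndReturn(long ms, int value) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return value;
    }

    public static void main(String[] args) {
        // Two threads so both suppliers genuinely run at the same time
        ExecutorService pool = Executors.newFixedThreadPool(2);
        long start = System.nanoTime();

        // Both calls start executing immediately — before thenCombine is ever invoked
        CompletableFuture<Integer> a = CompletableFuture.supplyAsync(() -> sleepAndReturn(200, 1), pool);
        CompletableFuture<Integer> b = CompletableFuture.supplyAsync(() -> sleepAndReturn(200, 2), pool);

        int sum = a.thenCombine(b, Integer::sum).join();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // elapsedMs is close to 200, not 400 — the two sleeps overlapped
        System.out.println(sum);
        pool.shutdown();
    }
}
```

Total wall time is roughly one sleep, not two, because supplyAsync submitted both tasks at call time; thenCombine merely joins the results.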
allOf — wait for multiple futures to complete:
List<CompletableFuture<Void>> notifications = users.stream()
.map(user -> CompletableFuture.runAsync(() -> notify(user), notifyExecutor))
.collect(Collectors.toList());
CompletableFuture<Void> allNotifications = CompletableFuture.allOf(
notifications.toArray(new CompletableFuture[0])
);
allNotifications.join(); // blocks until every future has completed
allOf completes only once every future has completed — it does not fail fast. If any of them completed exceptionally, allOf then completes exceptionally (join throws a CompletionException wrapping one of the failures), and the other futures are not cancelled; they run to completion regardless.
For collecting results from allOf (since it returns CompletableFuture<Void>, not the values):
CompletableFuture<List<Result>> allResults = CompletableFuture.allOf(
futures.toArray(new CompletableFuture[0])
).thenApply(v ->
futures.stream()
.map(CompletableFuture::join) // safe — all futures are already complete
.collect(Collectors.toList())
);
anyOf — complete when the first future completes. Returns the value of the first completing future as Object — you lose type safety. Rarely the right tool; consider CompletableFuture.anyOf only for speculative execution where you send the same request to multiple sources and take the first response.
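A speculative-execution sketch, with fetchFrom as a hypothetical stand-in for a network call:

```java
import java.util.concurrent.*;

public class SpeculativeFetch {
    // Stand-in for a real network call with configurable latency
    static String fetchFrom(String source, long latencyMs) {
        try { Thread.sleep(latencyMs); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return "from-" + source;
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        CompletableFuture<String> primary =
            CompletableFuture.supplyAsync(() -> fetchFrom("primary", 500), pool);
        CompletableFuture<String> replica =
            CompletableFuture.supplyAsync(() -> fetchFrom("replica", 5), pool);

        // anyOf erases the type — the cast back is on you
        String response = (String) CompletableFuture.anyOf(primary, replica).join();
        System.out.println(response); // the faster source wins
        pool.shutdown();
    }
}
```

One caveat: anyOf mirrors the *first* completion, whatever it is — if the fastest source fails, anyOf completes exceptionally even though a slower source might have succeeded.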
Timeout handling
CompletableFuture has no inherent timeout — a future waiting on a hung external service waits indefinitely. Java 9 added orTimeout and completeOnTimeout:
CompletableFuture<Order> result = fetchOrder(orderId)
.orTimeout(5, TimeUnit.SECONDS);
// Completes exceptionally with TimeoutException if not done in 5 seconds
CompletableFuture<Order> withFallback = fetchOrder(orderId)
.completeOnTimeout(cachedOrder(orderId), 5, TimeUnit.SECONDS);
// Completes with the fallback value if not done in 5 seconds
orTimeout is for operations where timeout means failure. completeOnTimeout is for operations where a stale fallback is acceptable — but note that the fallback argument is evaluated eagerly, when completeOnTimeout is called, not lazily at the timeout (in the example above, cachedOrder(orderId) runs immediately). Both methods are non-blocking — they schedule the timeout on an internal delayer thread rather than tying up a pool thread.
For Java 8 compatibility, timeout requires a ScheduledExecutorService and manual completion — and the scheduled task should be cancelled when the real future wins, so the scheduler doesn't accumulate dead timers:
private static final ScheduledExecutorService scheduler =
    Executors.newSingleThreadScheduledExecutor(r -> {
        Thread t = new Thread(r, "cf-timeout");
        t.setDaemon(true);
        return t;
    });
private static <T> CompletableFuture<T> withTimeout(
        CompletableFuture<T> future, long timeout, TimeUnit unit) {
    CompletableFuture<T> timeoutFuture = new CompletableFuture<>();
    ScheduledFuture<?> timer = scheduler.schedule(
        () -> timeoutFuture.completeExceptionally(new TimeoutException()),
        timeout, unit
    );
    future.whenComplete((v, ex) -> timer.cancel(false)); // cancel the timer if the future finishes first
    return future.applyToEither(timeoutFuture, Function.identity());
}
The executor trap
Every *Async method has an overload that accepts an Executor. Without it, the method uses ForkJoinPool.commonPool(). The common pool is shared across the JVM — all CompletableFuture usage, all parallel streams, and any library that touches the common pool compete for the same threads.
For I/O-heavy pipelines, blocking in the common pool starves CPU-bound work. For latency-sensitive paths, competition with other workloads adds unpredictable delays.
Always provide a named executor for production pipelines:
private static final AtomicInteger threadCount = new AtomicInteger();
private static final ExecutorService IO_EXECUTOR = Executors.newFixedThreadPool(
    50, r -> new Thread(r, "async-io-" + threadCount.incrementAndGet())
);
CompletableFuture.supplyAsync(() -> fetchOrder(id), IO_EXECUTOR)
.thenApplyAsync(order -> enrich(order), IO_EXECUTOR)
.thenComposeAsync(order -> chargeAsync(order), PAYMENT_EXECUTOR);
Separate executors for separate concerns — I/O and payment processing have different concurrency requirements and should not compete for the same threads.
Exception wrapping and unwrapping
Exceptions thrown inside CompletableFuture stages are wrapped in CompletionException. When you call join() or get(), you receive the wrapper, not the original:
try {
result.join();
} catch (CompletionException e) {
Throwable cause = e.getCause(); // the actual exception
if (cause instanceof OrderNotFoundException) {
// handle specifically
}
throw new RuntimeException(cause); // unwrap for callers
}
get() wraps checked exceptions in ExecutionException instead. join() wraps in CompletionException (unchecked). join() is usually cleaner in non-blocking pipelines.
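The wrapper difference is easy to observe directly (a minimal sketch; the failing supplier stands in for real work):

```java
import java.util.concurrent.*;

public class WrapperDifference {
    public static void main(String[] args) throws InterruptedException {
        CompletableFuture<String> failed = CompletableFuture.supplyAsync(() -> {
            throw new IllegalStateException("boom");
        });

        String joinWrapper = "";
        try {
            failed.join();
        } catch (CompletionException e) { // unchecked
            joinWrapper = e.getClass().getSimpleName() + ":" + e.getCause().getMessage();
        }

        String getWrapper = "";
        try {
            failed.get();
        } catch (ExecutionException e) { // checked — must be caught or declared
            getWrapper = e.getClass().getSimpleName() + ":" + e.getCause().getMessage();
        }

        System.out.println(joinWrapper + " " + getWrapper);
    }
}
```

Both wrappers carry the original IllegalStateException as their cause; only the checked/unchecked distinction differs, which is why join() reads more cleanly in pipelines.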
Inside handle and exceptionally the wrapping still applies: when a prior stage's function threw, the exception arrives as a CompletionException with the original as its cause (only an exception passed directly to completeExceptionally arrives as-is). Unwrap before type-checking:
.handle((result, ex) -> {
    if (ex != null) {
        Throwable cause = ex instanceof CompletionException ? ex.getCause() : ex;
        if (cause instanceof OrderNotFoundException) {
            return fallback();
        }
    }
    // ...
})
When CompletableFuture becomes unwieldy
The patterns above work cleanly for linear pipelines and simple fan-out. Three signs that CompletableFuture has reached its limits:
Backpressure. If the producer generates tasks faster than the consumer processes them, CompletableFuture has no built-in mechanism to slow the producer. Work queues fill, memory grows. Reactive streams (Project Reactor, RxJava) provide backpressure by design.
Dynamic fan-out with error isolation. When you need to fan out to a variable number of concurrent operations, isolate failures per branch, and collect partial results — the allOf + stream pattern works but becomes verbose. Java 21's StructuredTaskScope handles this more cleanly.
More than three or four composed stages. A pipeline of eight thenCompose calls is difficult to read and debug. At that point, a sequential method with join() calls inside a virtual thread is often clearer — virtual threads make blocking cheap, eliminating the original motivation for async composition in I/O-bound code.
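The virtual-thread alternative can be sketched like this (Java 21+; the three static methods are stand-ins for the article's hypothetical I/O calls):

```java
import java.util.concurrent.*;

public class VirtualThreadPipeline {
    // Stand-ins for blocking I/O operations
    static String fetchOrder(String id) { return "order-" + id; }
    static String enrich(String order) { return order + "-enriched"; }
    static String chargePayment(String order) { return order + "-charged"; }

    static String process(String orderId) throws Exception {
        // One virtual thread per task — blocking inside it is cheap
        try (ExecutorService vt = Executors.newVirtualThreadPerTaskExecutor()) {
            Future<String> receipt = vt.submit(() -> {
                String order = fetchOrder(orderId);   // blocking call — fine on a virtual thread
                return chargePayment(enrich(order));  // plain sequential code, easy to debug
            });
            return receipt.get();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(process("42"));
    }
}
```

The pipeline reads top to bottom, stack traces point at real lines, and stepping through it in a debugger works — none of which is true of a long thenCompose chain.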
The appropriate complexity budget for CompletableFuture: two to four stages, clear error handling at each boundary, named executors throughout. Beyond that, evaluate whether virtual threads, reactive streams, or structured concurrency better fit the problem.