Event-Driven Design in Spring Boot — ApplicationEvents, Spring Integration, and When to Use a Message Broker

by Eric Hanson, Backend Developer at Clean Systems Consulting

The problem events solve

A service that places an order, sends a confirmation email, updates inventory, and notifies the shipping system in a single method is doing too much. When the email service is down, the order fails. When the inventory update is slow, the response is slow. When you want to add analytics tracking, you modify the order placement code.

Events decouple these concerns. The order placement publishes OrderPlacedEvent. Email, inventory, analytics, and shipping subscribe independently. The order placement succeeds if the event is published — what happens next is someone else's problem.

The question is not whether to use events, but which tier of eventing is appropriate for the coupling you need to break.

Tier 1: ApplicationEvent — in-process decoupling

Spring's ApplicationEvent mechanism is synchronous, in-process, and transaction-aware. Events are published within the same JVM to listeners registered in the same Spring context.

Defining and publishing events:

public record OrderPlacedEvent(String orderId, String userId, Money total, List<OrderItem> items) {}

@Service
public class OrderService {

    private final OrderRepository orderRepository;
    private final ApplicationEventPublisher eventPublisher;

    public OrderService(OrderRepository orderRepository, ApplicationEventPublisher eventPublisher) {
        this.orderRepository = orderRepository;
        this.eventPublisher = eventPublisher;
    }

    @Transactional
    public Order placeOrder(PlaceOrderRequest request) {
        Order order = createOrder(request);
        orderRepository.save(order);

        // Publish after save — listeners run synchronously in the same transaction by default
        eventPublisher.publishEvent(new OrderPlacedEvent(
            order.getId(), order.getUserId(), order.getTotal(), order.getItems()));

        return order;
    }
}

Listening:

@Component
public class OrderEventListener {

    @EventListener
    public void onOrderPlaced(OrderPlacedEvent event) {
        // Runs synchronously in the same thread as the publisher
        // If this throws, the publisher's transaction rolls back too
        notificationService.sendOrderConfirmation(event.orderId(), event.userId());
    }

    @EventListener
    @Async  // runs on a separate thread pool (requires @EnableAsync on a configuration class)
    public void trackOrderAnalytics(OrderPlacedEvent event) {
        analyticsService.track("order.placed", Map.of(
            "orderId", event.orderId(),
            "total", event.total().amountInCents()
        ));
    }
}

@EventListener without @Async runs synchronously in the publisher's thread — if the listener throws, the exception propagates to the publisher. With @Async, the listener runs on a separate thread pool and exceptions don't propagate to the publisher.
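The reason @Async exceptions never reach the publisher is visible with a plain executor, which is what @Async delegates to underneath. A minimal sketch with no Spring involved (the class and method names here are invented for illustration):

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncExceptionDemo {

    // Analogous to an @Async listener: the exception is thrown on a worker thread,
    // so the submitting (publisher) thread never sees it unless it inspects the Future.
    public static String runDemo() {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<?> task = pool.submit((Runnable) () -> {
            throw new IllegalStateException("listener failed");
        });
        // The publisher thread continues immediately; nothing propagates here.
        try {
            task.get(); // the failure only surfaces if someone explicitly asks for it
            return "no exception";
        } catch (ExecutionException e) {
            return e.getCause().getMessage();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return "interrupted";
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println("async failure surfaced as: " + runDemo());
    }
}
```

Spring's default behavior is similar: an uncaught exception in an @Async void method is handed to an AsyncUncaughtExceptionHandler, which by default only logs it.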

Transaction-bound events — the critical feature:

@TransactionalEventListener defers event delivery until after the transaction commits:

@Component
public class OrderEventListener {

    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
    public void onOrderPlaced(OrderPlacedEvent event) {
        // Only runs if the transaction committed — not on rollback
        emailService.sendConfirmation(event.orderId());
    }

    @TransactionalEventListener(phase = TransactionPhase.AFTER_ROLLBACK)
    public void onOrderFailed(OrderPlacedEvent event) {
        log.warn("Order placement rolled back: {}", event.orderId());
        alertingService.notifyRollback(event.orderId());
    }
}

AFTER_COMMIT is the most important phase. Without it, a listener that fires during the transaction may act on data that's subsequently rolled back — sending a confirmation email for an order that never persisted. AFTER_COMMIT guarantees the listener fires only after a successful database commit. Note that @TransactionalEventListener only fires when the event is published inside an active transaction; if no transaction is active, the listener is skipped entirely unless fallbackExecution = true is set.

When ApplicationEvents are the right choice:

  • Same JVM, same application context
  • The listener is part of the same deployment unit as the publisher
  • Transactional event delivery is needed (fire after commit)
  • The event volume is moderate (not millions per second)
  • Delivery durability is not required (in-memory, lost on restart)

The limitation: ApplicationEvents are lost if the application restarts before listeners run. With @Async + @TransactionalEventListener, there's a window between commit and listener execution where a crash loses the event. For critical events (payment confirmations, order notifications), this is unacceptable.

Tier 2: Transactional outbox — durability without a broker

The transactional outbox pattern bridges the gap between ApplicationEvents (unreliable) and a full message broker (infrastructure overhead). Events are written to a database table in the same transaction as the domain change, then a separate process reads the outbox and delivers events:

@Entity
@Table(name = "outbox_events")
@Getter @Setter @Builder               // Lombok: enables the builder() call below
@NoArgsConstructor @AllArgsConstructor // JPA requires a no-arg constructor
public class OutboxEvent {
    @Id private String id;
    private String aggregateType;
    private String aggregateId;
    private String eventType;
    private String payload;       // JSON
    private Instant createdAt;
    private Instant processedAt;  // null until delivered
}

@Service
public class OrderService {

    @Transactional
    public Order placeOrder(PlaceOrderRequest request) {
        Order order = createOrder(request);
        orderRepository.save(order);

        // Atomically write event to outbox — same transaction as the order
        try {
            outboxRepository.save(OutboxEvent.builder()
                .id(UUID.randomUUID().toString())
                .aggregateType("Order")
                .aggregateId(order.getId())
                .eventType("OrderPlaced")
                .payload(objectMapper.writeValueAsString(new OrderPlacedPayload(order)))
                .createdAt(Instant.now())
                .build());
        } catch (JsonProcessingException e) {
            // a serialization failure must abort the transaction, not silently drop the event
            throw new IllegalStateException("Failed to serialize outbox payload", e);
        }

        return order;
    }
}

A scheduled job (or a CDC tool like Debezium) reads unprocessed outbox events and delivers them:

@Component
public class OutboxProcessor {

    @Scheduled(fixedDelay = 1000)
    @Transactional
    public void processOutbox() {
        List<OutboxEvent> pending = outboxRepository.findUnprocessed(Limit.of(100));
        pending.forEach(event -> {
            try {
                deliverEvent(event);
                event.setProcessedAt(Instant.now());
            } catch (Exception e) {
                log.error("Failed to deliver event {}", event.getId(), e);
                // Will retry on next cycle
            }
        });
        outboxRepository.saveAll(pending);
    }
}

The event is guaranteed to be delivered at least once (barring permanent delivery failure) because the outbox record persists until delivery is confirmed. This is the pattern used before adding a message broker — it provides durability with only a database dependency.
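The retry-until-confirmed behavior can be sketched without any Spring machinery. In this illustrative simulation (all names invented), a pending row survives failed polling cycles and is removed only once delivery succeeds:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class OutboxRetryDemo {

    // Simulates the polling loop: a pending outbox row is retried every cycle
    // and only marked processed after delivery actually succeeds.
    public static int cyclesUntilDelivered(int failingCycles) {
        Deque<String> pending = new ArrayDeque<>();
        pending.add("evt-1"); // row written in the same transaction as the domain change
        int cycle = 0;
        while (!pending.isEmpty()) {
            cycle++;
            boolean delivered = cycle > failingCycles; // transient failures, then success
            if (delivered) {
                pending.poll(); // processedAt set only after confirmed delivery
            }
            // on failure the row simply stays pending for the next cycle
        }
        return cycle;
    }

    public static void main(String[] args) {
        System.out.println("delivered on cycle " + cyclesUntilDelivered(2));
    }
}
```

The same property implies duplicates: if a crash happens after delivery but before processedAt is persisted, the next cycle delivers the event again, which is why consumers must be idempotent.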

Tier 3: Message broker — cross-service, durable, scalable

A message broker (RabbitMQ, Apache Kafka, AWS SQS/SNS, Google Pub/Sub) is the right choice when:

  • Events must cross service boundaries
  • Multiple independent consumer services receive the same event
  • Event volume exceeds what the outbox pattern handles efficiently
  • Consumer autoscaling is required
  • Events must be replayed (Kafka's log retention enables this)

RabbitMQ with Spring AMQP:

// Publisher
@Service
public class OrderEventPublisher {

    private final RabbitTemplate rabbitTemplate;

    public OrderEventPublisher(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    public void publish(OrderPlacedEvent event) {
        rabbitTemplate.convertAndSend(
            "orders.exchange",  // exchange
            "orders.placed",    // routing key
            event
        );
    }
}

// Consumer — in a separate service
@Component
public class InventoryUpdateConsumer {

    @RabbitListener(queues = "inventory.order-placed")
    public void handleOrderPlaced(OrderPlacedEvent event) {
        inventoryService.reserve(event.orderId(), event.items());
    }
}

Kafka with Spring Kafka:

// Producer
@Service
public class OrderKafkaPublisher {

    private final KafkaTemplate<String, OrderPlacedEvent> kafkaTemplate;

    public OrderKafkaPublisher(KafkaTemplate<String, OrderPlacedEvent> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void publish(OrderPlacedEvent event) {
        // Key = orderId — events for the same order go to the same partition,
        // preserving ordering within an order's lifecycle
        kafkaTemplate.send("orders.placed", event.orderId(), event);
    }
}

// Consumer
@Component
public class ShippingConsumer {

    // Requires a listener container configured with AckMode.MANUAL
    @KafkaListener(topics = "orders.placed", groupId = "shipping-service")
    public void handleOrderPlaced(
            ConsumerRecord<String, OrderPlacedEvent> record,
            Acknowledgment ack) {
        try {
            shippingService.createShipment(record.value());
            ack.acknowledge(); // manual acknowledgment — commit only on success
        } catch (RetryableException e) {
            // don't acknowledge — will be retried
            throw e;
        } catch (NonRetryableException e) {
            log.error("Non-retryable error processing {}", record.key(), e);
            ack.acknowledge(); // acknowledge to skip — send to dead letter topic separately
            deadLetterPublisher.send("orders.placed.dlq", record.value());
        }
    }
}

Kafka's partition key ensures ordering within a partition. All events for orderId=123 go to the same partition and are processed in order by the same consumer instance. Without a partition key, events for the same order may be processed by different consumers in parallel — potentially out of order.
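The routing rule is easy to demonstrate. This sketch uses String.hashCode() as a stand-in; Kafka's default partitioner actually hashes the serialized key bytes with murmur2, but the principle is the same: equal keys always map to the same partition.

```java
public class PartitionDemo {

    // Illustration only: Kafka's default partitioner computes
    // murmur2(keyBytes) mod numPartitions; hashCode() shows the idea.
    public static int partitionFor(String key, int numPartitions) {
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        int partitions = 6;
        // Repeated sends with the same key land on the same partition,
        // so one consumer instance sees order-123's events in order.
        System.out.println("order-123 -> partition " + partitionFor("order-123", partitions));
        System.out.println("order-123 -> partition " + partitionFor("order-123", partitions));
        System.out.println("order-456 -> partition " + partitionFor("order-456", partitions));
    }
}
```

A corollary worth remembering: changing the partition count changes the key-to-partition mapping, so ordering guarantees only hold while the topic's partition count is stable.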

Consumer idempotency — the requirement you can't skip

Any message-driven consumer must be idempotent — processing the same message multiple times produces the same result as processing it once. At-least-once delivery (the default for most brokers) means duplicate messages are possible: network retries, consumer restarts, and broker acknowledgment failures all produce duplicates.

@RabbitListener(queues = "inventory.order-placed")
public void handleOrderPlaced(OrderPlacedEvent event) {
    // Idempotent check — skip if already processed
    if (inventoryReservationRepository.existsByOrderId(event.orderId())) {
        log.debug("Duplicate event for order {}, skipping", event.orderId());
        return;
    }

    inventoryService.reserve(event.orderId(), event.items());
    // The reserve operation should also be idempotent internally
}

The idempotency check and the business operation should be in the same transaction when possible — otherwise a crash between the check and the operation creates a window where the event appears unprocessed and is redelivered.

For operations that are inherently idempotent (setting a status, updating a timestamp), no check is needed. For operations that produce side effects (charging a payment, sending an email, adjusting inventory), idempotency must be explicit.
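The duplicate-skip logic above can be exercised in isolation. In this sketch (invented names, with a HashSet standing in for the reservations table), a redelivered event leaves the state unchanged:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class IdempotentConsumerDemo {

    // Stand-ins for inventory_reservations: processed orderIds and reserved quantities.
    private final Set<String> processed = new HashSet<>();
    private final Map<String, Integer> reservedUnits = new HashMap<>();

    public void handleOrderPlaced(String orderId, int units) {
        if (!processed.add(orderId)) {
            return; // duplicate delivery: already reserved for this order, skip
        }
        reservedUnits.merge(orderId, units, Integer::sum);
    }

    public int reservedFor(String orderId) {
        return reservedUnits.getOrDefault(orderId, 0);
    }

    public static void main(String[] args) {
        IdempotentConsumerDemo consumer = new IdempotentConsumerDemo();
        consumer.handleOrderPlaced("order-123", 5);
        consumer.handleOrderPlaced("order-123", 5); // redelivered duplicate
        System.out.println("reserved: " + consumer.reservedFor("order-123"));
    }
}
```

In a real consumer the check-and-insert would be a database write with a unique constraint on orderId, executed in the same transaction as the reservation, so the in-memory race this sketch ignores cannot occur.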

The tier decision

Use ApplicationEvent when the event is internal to one deployment unit, delivery doesn't need to survive application restarts, and the consumer is a different concern within the same service (audit logging, cache invalidation, statistics collection).

Use the outbox pattern when durability is required, a message broker is not yet justified, and events must survive application restarts. The outbox adds one database table and a polling job.

Use a message broker when multiple independent services consume the same event, consumer autoscaling is needed, or Kafka's log retention enables event replay that the use case requires.

The progression is: ApplicationEvent → outbox → broker. Don't jump to a broker for same-service decoupling where ApplicationEvent with @TransactionalEventListener is sufficient. Don't use ApplicationEvent for cross-service communication where a broker is required. The right tier depends on the durability and delivery guarantees the use case actually needs.
