Event-Driven vs Request-Driven Architecture — Which One to Pick and When
by Eric Hanson, Backend Developer at Clean Systems Consulting
The architecture that solves the wrong problem
Your product team wants the notification service, the analytics service, and the loyalty points service to all react when an order ships. In a request-driven model, your shipping service calls each of these downstream services in sequence after updating the shipment status. Adding a fourth consumer means modifying the shipping service. Someone proposes event-driven: the shipping service publishes a ShipmentDispatched event; any service that cares subscribes. The shipping service does not know or care who is listening.
This is a legitimate case for event-driven architecture. Now let me show you what it looks like when the same pattern is applied to the wrong problem.
Request-driven: the default worth defending
Request-driven architecture — a service makes an HTTP or gRPC call to another service and waits for a response — is the default for a reason. The execution model is comprehensible. A call stack has a start, an end, and a clear owner. When something goes wrong, you follow the call chain. Logs, traces, and metrics attach naturally to the request lifecycle.
This model works well for: operations where the initiating service needs the result, workflows with a clear orchestrator, systems where consistency matters more than independence, and teams that cannot yet afford the operational overhead of an event streaming platform.
# Request-driven payment flow — clear, traceable, consistent
User → API Gateway → Order Service → Payment Service   → (returns result)
                                   → Inventory Service → (returns result)
                                   ↓
                             Render response
Every step has a defined success or failure. If the payment fails, the order is not created. Consistency is maintained transactionally. The trade-off is coupling: if the Payment Service is slow, the Order Service is slow. If the Inventory Service is down, order creation fails.
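To make the coupling concrete, here is a minimal sketch of that flow in Spring-style Java. The client classes (PaymentClient, InventoryClient) and their methods are illustrative names, not a prescribed API:

// A minimal sketch of the synchronous flow: every dependency blocks the caller
@Service
public class OrderService {
    private final PaymentClient paymentClient;     // hypothetical HTTP/gRPC client
    private final InventoryClient inventoryClient; // hypothetical HTTP/gRPC client
    private final OrderRepository orderRepository;

    public OrderService(PaymentClient paymentClient, InventoryClient inventoryClient,
                        OrderRepository orderRepository) {
        this.paymentClient = paymentClient;
        this.inventoryClient = inventoryClient;
        this.orderRepository = orderRepository;
    }

    public Order createOrder(OrderRequest request) {
        // Blocks until payment settles; a failure here aborts the whole order
        PaymentResult payment = paymentClient.charge(request.getPaymentDetails());
        if (!payment.isSuccessful()) {
            throw new PaymentDeclinedException(payment.getReason());
        }
        // Blocks again; if the inventory service is down, order creation fails
        inventoryClient.reserve(request.getItems());
        return orderRepository.save(Order.from(request, payment));
    }
}

The upside is visible in the shape of the code: one method, read top to bottom, is the entire workflow.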
Event-driven: what it actually solves
Event-driven architecture (EDA) using an event streaming platform like Apache Kafka or Amazon Kinesis solves two distinct problems:
Temporal decoupling: the producer does not need to know whether consumers are currently running. Events are durable. Consumers catch up when they come back online. This is qualitatively different from a traditional message queue — Kafka retains the event log for as long as its retention policy allows (hours, days, or forever), meaning new consumers can replay historical events, not just future ones.
Fan-out without coupling: adding a new consumer does not require changing the producer. The shipping service publishes ShipmentDispatched once. Ten services can subscribe without the shipping team knowing or caring.
// Kafka producer — publishes once, any number of consumers react
import java.time.Instant;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class ShipmentEventPublisher {

    private final KafkaTemplate<String, ShipmentDispatchedEvent> kafkaTemplate;

    public ShipmentEventPublisher(KafkaTemplate<String, ShipmentDispatchedEvent> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void publishShipmentDispatched(Shipment shipment) {
        var event = ShipmentDispatchedEvent.builder()
                .shipmentId(shipment.getId())
                .orderId(shipment.getOrderId())
                .trackingNumber(shipment.getTrackingNumber())
                .dispatchedAt(Instant.now())
                .build();
        // Partition key ensures order-level events are sequenced
        kafkaTemplate.send("shipments.dispatched", shipment.getOrderId().toString(), event);
    }
}
// Consumer A — notifications team owns this entirely (the listener lives in its
// own Spring bean, with notificationService constructor-injected)
@KafkaListener(topics = "shipments.dispatched", groupId = "notifications-service")
public void onShipmentDispatched(ShipmentDispatchedEvent event) {
    notificationService.sendShippingConfirmation(event.getOrderId());
}

// Consumer B — loyalty team owns this independently (its own bean, its own deployment)
@KafkaListener(topics = "shipments.dispatched", groupId = "loyalty-service")
public void onShipmentDispatched(ShipmentDispatchedEvent event) {
    loyaltyService.awardShippingPoints(event.getOrderId());
}
Each consumer group maintains its own offset. They process independently, at their own pace, and can be redeployed without affecting the producer or other consumers.
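Replay falls out of the same mechanism. As a hedged sketch (the backfill consumer and analyticsService below are hypothetical): a consumer group that has never committed an offset starts wherever auto.offset.reset points it, so setting it to earliest lets a brand-new service process the entire retained history on day one.

// New group, no committed offsets: "earliest" replays the retained log from the start
@KafkaListener(topics = "shipments.dispatched", groupId = "analytics-backfill",
        properties = "auto.offset.reset=earliest")
public void onShipmentDispatched(ShipmentDispatchedEvent event) {
    analyticsService.recordShipment(event); // hypothetical analytics sink
}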
The costs that advocates understate
Event schema evolution is harder than API versioning. REST APIs can version via the URL path. Event schemas in Kafka must be backward-compatible because consumers at different offsets may be reading old and new schema versions simultaneously. Apache Avro with a Schema Registry (Confluent Schema Registry) provides compatibility enforcement, but adds infrastructure and a learning curve. Getting schema evolution wrong silently breaks consumers.
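For concreteness, a sketch of the producer side of that integration, assuming Confluent's Avro serializer and an illustrative registry URL:

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import io.confluent.kafka.serializers.KafkaAvroSerializer;

// The serializer registers each new schema version with the registry,
// which rejects it if it violates the topic's compatibility mode
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class);
props.put("schema.registry.url", "http://localhost:8081"); // illustrative URL

Under BACKWARD compatibility, for example, adding a field with a default is accepted, while adding one without a default fails at registration time, before any consumer can break.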
Debugging is harder by design. A request-driven system has a synchronous call chain you can trace. An event-driven system has a timeline of events processed independently. Correlating a business failure — "the loyalty points never arrived for order 12345" — requires tracing an event through multiple consumer logs, matching by correlation ID, and understanding consumer lag. Distributed tracing with W3C TraceContext propagated through event headers helps, but it requires disciplined implementation.
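In practice that discipline looks roughly like the sketch below: manual propagation through event headers, reusing the kafkaTemplate and event from the publisher above, and assuming Spring Kafka plus SLF4J's MDC (imports omitted for brevity, matching the other snippets). Most teams would delegate this to an OpenTelemetry agent; currentTraceparent() is a hypothetical helper standing in for whatever supplies the current trace context.

// Producer side: attach the W3C traceparent as an event header
var record = new ProducerRecord<>("shipments.dispatched",
        shipment.getOrderId().toString(), event);
record.headers().add("traceparent",
        currentTraceparent().getBytes(StandardCharsets.UTF_8)); // hypothetical helper
kafkaTemplate.send(record);

// Consumer side: read the header back and stamp it on every log line
@KafkaListener(topics = "shipments.dispatched", groupId = "loyalty-service")
public void onShipmentDispatched(ShipmentDispatchedEvent event,
        @Header(name = "traceparent", required = false) byte[] traceparent) {
    if (traceparent != null) {
        MDC.put("traceparent", new String(traceparent, StandardCharsets.UTF_8));
    }
    loyaltyService.awardShippingPoints(event.getOrderId());
}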
Ordering guarantees are weaker than they appear. Kafka guarantees ordering within a partition. If you partition by order ID, events for the same order are ordered. But events across partitions have no ordering guarantee. If your consumer logic depends on events from different partitions being processed in global sequence, you have an architectural mismatch.
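The mechanics make this concrete. Kafka's default partitioner, shown here in simplified form using real helpers from the Kafka client library, hashes the key and takes it modulo the partition count: one key always maps to one partition, and nothing relates different keys to each other.

import org.apache.kafka.common.utils.Utils;

// Simplified default partitioner: same key → same partition → ordered;
// different keys land anywhere, so cross-partition order is undefined
int partition(byte[] keyBytes, int numPartitions) {
    return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
}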
Eventual consistency changes your UI contract. In a request-driven model, the API confirms the action. In an event-driven model, the action triggers downstream processing that completes later. The UI must handle a state where the user's action was accepted but its downstream effects are pending. This requires explicit UX design — optimistic updates, polling, or real-time updates via WebSocket.
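On the API side, the contract change can be as small as a status code. A minimal sketch, assuming Spring MVC (the endpoint and shipmentService are illustrative):

// 202 Accepted: the action is recorded, its downstream effects arrive later
@PostMapping("/shipments/{id}/dispatch")
public ResponseEntity<Void> dispatch(@PathVariable String id) {
    shipmentService.dispatch(id); // synchronous record update + event publish
    // Notifications and loyalty points materialize asynchronously;
    // the client polls or subscribes to learn when they have
    return ResponseEntity.accepted().build();
}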
When to use each
Request-driven for: core transactional workflows, operations where the result is needed immediately, systems where strong consistency is required, teams without Kafka operational expertise.
Event-driven for: notification and analytics fan-out, audit logs and event sourcing, workflows where temporal decoupling between producer and consumers is a genuine requirement, systems where replay capability has business value.
The pattern I have seen work well at mid-size companies: request-driven for the primary transaction path, event-driven for everything that reacts to completed transactions. The shipping service updates the shipment record synchronously, then publishes the event. The core consistency is preserved transactionally; the downstream reactions are decoupled. Two patterns, used where each fits.
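A hedged sketch of that seam, assuming Spring's transaction-bound events. The class names are illustrative, and a transactional outbox is the stricter production-grade variant of the same idea.

import org.springframework.context.ApplicationEventPublisher;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.event.TransactionPhase;
import org.springframework.transaction.event.TransactionalEventListener;

// The record update commits first...
@Service
public class ShipmentService {
    private final ShipmentRepository shipmentRepository;
    private final ApplicationEventPublisher events;

    public ShipmentService(ShipmentRepository shipmentRepository,
                           ApplicationEventPublisher events) {
        this.shipmentRepository = shipmentRepository;
        this.events = events;
    }

    @Transactional
    public void dispatch(String shipmentId) {
        Shipment shipment = shipmentRepository.markDispatched(shipmentId); // core consistency
        events.publishEvent(ShipmentDispatchedEvent.from(shipment));       // in-process, not yet on Kafka
    }
}

// ...and only after the commit does the event reach Kafka,
// so subscribers never observe a rolled-back state
@Component
public class ShipmentEventRelay {
    private final KafkaTemplate<String, ShipmentDispatchedEvent> kafkaTemplate;

    public ShipmentEventRelay(KafkaTemplate<String, ShipmentDispatchedEvent> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
    public void relay(ShipmentDispatchedEvent event) {
        kafkaTemplate.send("shipments.dispatched", event.getOrderId().toString(), event);
    }
}

The known gap: if the Kafka send fails after the commit, the event is lost, which is exactly the failure mode the transactional outbox pattern exists to close.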