Spring Boot and Message Queues — RabbitMQ, Kafka, and Choosing Between Them
by Eric Hanson, Backend Developer at Clean Systems Consulting
The fundamental difference
RabbitMQ is a message broker. Producers send messages; the broker routes them to queues based on exchange rules; consumers receive messages from queues. Once a consumer acknowledges a message, it's deleted from the queue. RabbitMQ pushes messages to consumers.
Kafka is a distributed log. Producers append records to topics (ordered, partitioned logs). Consumers read records by maintaining an offset — their position in the log. Records are retained for a configured period regardless of consumption. Kafka doesn't push to consumers; consumers pull at their own pace.
This difference has practical consequences:
RabbitMQ: each message is processed by one consumer (in the competing-consumers pattern). Once processed and acknowledged, it's gone; past messages cannot be replayed. Consumers don't choose specific messages; the broker delivers whatever is next in the queue.
Kafka: messages are retained and can be consumed multiple times by different consumer groups. A new service can be added and read the full history of events from the beginning of the topic. Consumers can rewind their offset and reprocess past messages.
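The retention model is easiest to see in a toy in-memory log (plain Java, no Kafka involved): records are never deleted on read, each "consumer group" keeps its own offset into the same list, and rewinding the offset replays history. The class and method names here are illustrative, not Kafka API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of a Kafka-style log: records are retained, consumer groups track offsets.
public class ToyLog {
    private final List<String> records = new ArrayList<>();        // the retained log
    private final Map<String, Integer> offsets = new HashMap<>();  // offset per consumer group

    public void append(String record) {
        records.add(record);
    }

    // Each group pulls from its own offset; reading does not delete anything.
    public List<String> poll(String group, int maxRecords) {
        int offset = offsets.getOrDefault(group, 0);
        int end = Math.min(offset + maxRecords, records.size());
        List<String> batch = new ArrayList<>(records.subList(offset, end));
        offsets.put(group, end);  // "commit" the new offset
        return batch;
    }

    // Rewind: a group can reset its offset and replay the full history.
    public void seekToBeginning(String group) {
        offsets.put(group, 0);
    }
}
```

Two groups polling the same log advance independently, which is exactly why a new service added later can still read every past event.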
Spring Boot with RabbitMQ
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-amqp</artifactId>
</dependency>
spring:
  rabbitmq:
    host: rabbitmq.internal
    port: 5672
    username: ${RABBITMQ_USERNAME}
    password: ${RABBITMQ_PASSWORD}
    virtual-host: /
    listener:
      simple:
        acknowledge-mode: manual  # manual ack — most reliable
        prefetch: 10              # at most 10 unacknowledged messages per consumer
        concurrency: 3            # 3 consumer threads
        max-concurrency: 10       # scale up to 10 under load
Declare exchanges, queues, and bindings:
@Configuration
public class RabbitMQConfig {

    public static final String ORDER_EXCHANGE = "orders.exchange";
    public static final String ORDER_QUEUE = "orders.processing";
    public static final String ORDER_DLQ = "orders.processing.dlq";
    public static final String ORDER_ROUTING_KEY = "orders.placed";

    @Bean
    public TopicExchange orderExchange() {
        return new TopicExchange(ORDER_EXCHANGE, true, false); // durable=true, autoDelete=false
    }

    @Bean
    public Queue orderQueue() {
        return QueueBuilder.durable(ORDER_QUEUE)
                .withArgument("x-dead-letter-exchange", "")           // default exchange
                .withArgument("x-dead-letter-routing-key", ORDER_DLQ)
                .withArgument("x-message-ttl", 3_600_000)             // 1 hour TTL
                .build();
    }

    @Bean
    public Queue deadLetterQueue() {
        return QueueBuilder.durable(ORDER_DLQ).build();
    }

    @Bean
    public Binding orderBinding(Queue orderQueue, TopicExchange orderExchange) {
        return BindingBuilder.bind(orderQueue)
                .to(orderExchange)
                .with(ORDER_ROUTING_KEY);
    }

    @Bean
    public MessageConverter messageConverter(ObjectMapper objectMapper) {
        return new Jackson2JsonMessageConverter(objectMapper);
    }

    @Bean
    public RabbitTemplate rabbitTemplate(ConnectionFactory connectionFactory,
                                         MessageConverter messageConverter) {
        RabbitTemplate template = new RabbitTemplate(connectionFactory);
        template.setMessageConverter(messageConverter);
        return template;
    }
}
x-dead-letter-exchange and x-dead-letter-routing-key configure the dead letter queue — messages that fail all retry attempts are routed here rather than being lost. The DLQ holds failed messages for investigation and manual replay.
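Manual replay can be as simple as draining the DLQ back onto the original exchange once the underlying bug is fixed. A sketch using RabbitTemplate (the DlqReplayService class and replayAll method are this article's invention; the constants come from the config above):

```java
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Service;

@Service
public class DlqReplayService {

    private final RabbitTemplate rabbitTemplate;

    public DlqReplayService(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    // Drain the DLQ and republish each message to the original exchange.
    // Intended to be triggered manually, e.g. from an admin endpoint.
    public int replayAll() {
        int replayed = 0;
        Object payload;
        // receiveAndConvert returns null once the queue is empty
        while ((payload = rabbitTemplate.receiveAndConvert(RabbitMQConfig.ORDER_DLQ)) != null) {
            rabbitTemplate.convertAndSend(
                    RabbitMQConfig.ORDER_EXCHANGE,
                    RabbitMQConfig.ORDER_ROUTING_KEY,
                    payload);
            replayed++;
        }
        return replayed;
    }
}
```

If the failure cause hasn't been fixed, replayed messages will simply cycle back to the DLQ, so gate this behind a deliberate operator action.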
Publisher:
@Service
public class OrderEventPublisher {

    private final RabbitTemplate rabbitTemplate;

    public void publishOrderPlaced(Order order) {
        OrderPlacedEvent event = new OrderPlacedEvent(order.getId(), order.getUserId(),
                order.getItems(), order.getTotal(), Instant.now());
        rabbitTemplate.convertAndSend(
                RabbitMQConfig.ORDER_EXCHANGE,
                RabbitMQConfig.ORDER_ROUTING_KEY,
                event,
                message -> {
                    message.getMessageProperties().setMessageId(UUID.randomUUID().toString());
                    message.getMessageProperties().setContentType(MessageProperties.CONTENT_TYPE_JSON);
                    return message;
                }
        );
    }
}
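For completeness, the event payload can be a plain Java record. The exact fields are an assumption of this article, chosen to match what the publisher constructs and what the consumers read (including items()):

```java
import java.math.BigDecimal;
import java.time.Instant;
import java.util.List;

// Hypothetical event payload; field names mirror the publisher/consumer code in this article.
public record OrderPlacedEvent(
        String orderId,
        String userId,
        List<String> items,   // item identifiers to reserve
        BigDecimal total,
        Instant placedAt) {
}
```

Because Jackson2JsonMessageConverter serializes it as JSON, the record needs no extra annotations as long as the consumer's classpath has the same class (or a compatible one) in a trusted package.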
Consumer with manual acknowledgment:
@Component
public class OrderProcessingConsumer {

    @RabbitListener(queues = RabbitMQConfig.ORDER_QUEUE)
    public void processOrder(OrderPlacedEvent event, Channel channel,
                             @Header(AmqpHeaders.DELIVERY_TAG) long deliveryTag) throws IOException {
        try {
            inventoryService.reserve(event.orderId(), event.items());
            shippingService.schedulePickup(event.orderId());
            channel.basicAck(deliveryTag, false);          // acknowledge success
        } catch (RetryableException ex) {
            log.warn("Retryable failure for order {}, requeueing", event.orderId(), ex);
            channel.basicNack(deliveryTag, false, true);   // nack and requeue
        } catch (NonRetryableException ex) {
            log.error("Non-retryable failure for order {}, sending to DLQ", event.orderId(), ex);
            channel.basicNack(deliveryTag, false, false);  // nack, don't requeue → DLQ
        }
    }
}
Manual acknowledgment gives precise control over message fate. basicAck removes the message. basicNack with requeue=true returns it to the queue for retry. basicNack with requeue=false routes it to the dead letter queue if configured, or discards it.
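One caveat: basicNack with requeue=true has no attempt counter on classic queues, so a poison message can loop forever. A common alternative is a retry cycle — nack without requeue into a TTL'd wait queue whose dead-letter target is the original queue. RabbitMQ records each dead-lettering hop in the standard x-death header, which a small helper can count to cap attempts (the RetryHeaders class is this article's sketch, not RabbitMQ or Spring API):

```java
import java.util.List;
import java.util.Map;

// Helper for a retry-cycle setup: each time a message is dead-lettered, RabbitMQ
// appends/updates an entry in the x-death header (a list of maps with "queue",
// "reason", "count", ...). Summing "count" for our queue gives the attempt number.
public final class RetryHeaders {

    @SuppressWarnings("unchecked")
    public static long deathCount(Map<String, Object> headers, String queue) {
        Object raw = headers.get("x-death");
        if (!(raw instanceof List)) {
            return 0;
        }
        long total = 0;
        for (Object entry : (List<Object>) raw) {
            if (entry instanceof Map<?, ?> death
                    && queue.equals(death.get("queue"))
                    && death.get("count") instanceof Number n) {
                total += n.longValue();
            }
        }
        return total;
    }
}
```

In the listener, a check like `if (RetryHeaders.deathCount(headers, ORDER_QUEUE) >= 5)` can then route the message to the final DLQ instead of another retry cycle.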
prefetch: 10 limits how many unacknowledged messages each consumer holds. Without prefetch limiting, a consumer could receive the entire queue into memory — the other consumers get nothing and the system loses load balancing.
Spring Boot with Kafka
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
spring:
  kafka:
    bootstrap-servers: kafka.internal:9092
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
      acks: all          # wait for all in-sync replicas
      retries: 3
      properties:
        enable.idempotence: true   # idempotent producer — no duplicates on retry
    consumer:
      group-id: order-processing-service
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
      auto-offset-reset: earliest  # read from the beginning for new consumer groups
      enable-auto-commit: false    # manual offset commit — most reliable
      properties:
        spring.json.trusted.packages: com.example.events
    listener:
      ack-mode: manual_immediate   # commit offsets manually
      concurrency: 3               # 3 consumer threads (should not exceed partition count)
acks: all and enable.idempotence: true ensure the producer neither loses messages nor writes duplicates when retrying after network failures. They have been the client defaults since Kafka 3.0, but setting them explicitly documents the intent.
Producer:
@Service
public class OrderKafkaPublisher {

    private final KafkaTemplate<String, OrderPlacedEvent> kafkaTemplate;

    public void publishOrderPlaced(Order order) {
        OrderPlacedEvent event = new OrderPlacedEvent(order.getId(), order.getUserId(),
                order.getItems(), order.getTotal(), Instant.now());
        // Key = orderId ensures events for the same order go to the same partition.
        // Spring Kafka 3.x returns CompletableFuture (earlier versions: ListenableFuture).
        CompletableFuture<SendResult<String, OrderPlacedEvent>> future =
                kafkaTemplate.send("orders.placed", order.getId(), event);
        future.whenComplete((result, ex) -> {
            if (ex == null) {
                log.debug("Published order event: offset {}",
                        result.getRecordMetadata().offset());
            } else {
                log.error("Failed to publish order event for {}", order.getId(), ex);
            }
        });
    }
}
The partition key (order.getId()) ensures all events for the same order land on the same partition, preserving ordering within an order's lifecycle. Without a key, the producer spreads records across partitions (sticky partitioning in recent clients, round-robin in older ones), and ordering across those records is lost.
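The determinism is easy to demonstrate with a toy partitioner. Kafka's real default uses murmur2 over the serialized key, modulo the partition count; the ToyPartitioner below substitutes String.hashCode purely to stay dependency-free, but the property shown — same key, same partition — is the same:

```java
// Toy partitioner: deterministic mapping from key to partition.
// Kafka's actual default is murmur2(serializedKey) % numPartitions;
// hashCode is used here only to keep the illustration self-contained.
public final class ToyPartitioner {
    public static int partitionFor(String key, int numPartitions) {
        return Math.floorMod(key.hashCode(), numPartitions);
    }
}
```

Note the corollary: changing the partition count remaps keys to different partitions, which is why per-key ordering guarantees only hold while the partition count stays fixed.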
Consumer with manual offset commit:
@Component
public class OrderProcessingConsumer {

    @KafkaListener(
            topics = "orders.placed",
            groupId = "order-processing-service",
            containerFactory = "kafkaListenerContainerFactory"
    )
    public void processOrder(ConsumerRecord<String, OrderPlacedEvent> record,
                             Acknowledgment ack) {
        OrderPlacedEvent event = record.value();
        log.debug("Processing order event: partition={}, offset={}",
                record.partition(), record.offset());
        try {
            inventoryService.reserve(event.orderId(), event.items());
            shippingService.schedulePickup(event.orderId());
            ack.acknowledge();  // commit offset after successful processing
        } catch (RetryableException ex) {
            log.warn("Retryable failure for order {}", event.orderId(), ex);
            // Don't acknowledge — rethrow so the container retries from this offset
            throw ex;
        } catch (NonRetryableException ex) {
            log.error("Non-retryable failure for order {}, skipping", event.orderId(), ex);
            // Publish to a dead letter topic manually, then acknowledge to skip the record
            deadLetterPublisher.publish("orders.placed.dlt", record);
            ack.acknowledge();
        }
    }
}
If the listener throws without acknowledging, the container's error handler seeks back to that offset and the record is redelivered on the next poll — effectively a retry. Without a retry limit, a poison message (one that always fails) blocks its partition indefinitely, so add a retry-topic or dead letter topic strategy for non-retryable failures.
Dead letter topic with Spring Kafka:
@Bean
public DefaultErrorHandler errorHandler(KafkaTemplate<String, Object> template) {
    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(
            template,
            (record, ex) -> new TopicPartition(record.topic() + ".dlt", record.partition()));
    FixedBackOff backOff = new FixedBackOff(1000L, 3L); // 3 retries, 1s apart
    return new DefaultErrorHandler(recoverer, backOff);
}
DeadLetterPublishingRecoverer publishes failed records to {topic}.dlt (dead letter topic) after exhausting retries. Spring Kafka handles the retry and DLT routing automatically — the consumer method stays clean.
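The error handler still has to be attached to the listener container factory. A sketch of explicit wiring (when you rely on Spring Boot's auto-configured factory instead, exposing the DefaultErrorHandler bean is typically enough, since Boot picks up a CommonErrorHandler bean automatically):

```java
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> kafkaListenerContainerFactory(
        ConsumerFactory<String, Object> consumerFactory,
        DefaultErrorHandler errorHandler) {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    factory.setCommonErrorHandler(errorHandler);  // retries + DLT routing from above
    factory.setConcurrency(3);                    // matches listener.concurrency
    return factory;
}
```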
Choosing between them
Choose RabbitMQ when:
- Work queues are the primary pattern — distribute tasks among workers, each task processed once
- Message routing logic is complex — topic exchanges, header-based routing, fanout to multiple queues
- Message ordering across consumers is not required
- Short message retention is acceptable — processed messages can be deleted
- Operational simplicity matters — RabbitMQ is easier to operate and monitor than Kafka
- Message TTL, priority queues, or delayed delivery are needed (RabbitMQ supports these natively)
Choose Kafka when:
- Event sourcing or event-driven architecture where the full history of events must be preserved
- Multiple independent consumer groups must process the same events differently
- Replay is required — replaying events to a new service or after a consumer bug is fixed
- Very high throughput — Kafka handles millions of messages per second at sustained rates
- Ordering within a partition (by key) is required — processing all events for a given entity in order
- Stream processing — Kafka Streams or Apache Flink processing in real time
The pattern that guides the decision: if you think of the messages as tasks to complete, use RabbitMQ. If you think of them as facts that happened, use Kafka.
An order to send an email is a task — one consumer sends it, it's done. An event that an order was placed is a fact — inventory reserves it, shipping schedules pickup, analytics records it, and a new compliance service added six months later can read all past order events from the beginning of the topic.
Both can be integrated in the same application for different use cases. A Spring Boot service can publish order events to Kafka (for event-driven processing) and send email tasks via RabbitMQ (for worker queue distribution). The choice is per messaging use case, not per application.