Spring Boot Caching in Practice — @Cacheable, Cache Warming, and When Caching Makes Things Worse
by Eric Hanson, Backend Developer at Clean Systems Consulting
The caching abstraction — what @Cacheable actually does
@Cacheable intercepts a method call, checks a cache for the result under the method's key, and returns the cached value if found — bypassing the method body. On a cache miss, the method executes and its return value is stored in the cache for subsequent calls.
@Service
public class ProductService {

    private final ProductRepository productRepository;

    public ProductService(ProductRepository productRepository) {
        this.productRepository = productRepository;
    }

    @Cacheable(value = "products", key = "#productId")
    public Product findProduct(Long productId) {
        return productRepository.findById(productId)
            .orElseThrow(() -> new ProductNotFoundException(productId));
    }
}
The first call with productId = 123 executes the method and caches the result under key products::123. Subsequent calls with the same productId return the cached Product without touching the database.
Spring Boot auto-configures a ConcurrentMapCacheManager (in-memory, no eviction) if no cache manager is configured and caching is enabled. For production, always configure an explicit cache manager with eviction policy and size limits.
Cache manager configuration
Caffeine for in-process caching:
@Configuration
@EnableCaching
public class CacheConfig {

    @Bean
    public CacheManager cacheManager() {
        CaffeineCacheManager manager = new CaffeineCacheManager();
        manager.setCaffeine(Caffeine.newBuilder()
            .maximumSize(10_000)
            .expireAfterWrite(Duration.ofMinutes(10))
            .recordStats()); // enables hit rate metrics
        return manager;
    }
}
recordStats() enables Caffeine's internal metrics — hit rate, miss rate, eviction count. These are critical for validating whether a cache is actually effective.
Per-cache configuration when different caches need different TTLs:
@Bean
public CacheManager cacheManager() {
    CaffeineCacheManager manager = new CaffeineCacheManager() {
        @Override
        protected Cache<Object, Object> createNativeCaffeineCache(String name) {
            return switch (name) {
                case "products" -> Caffeine.newBuilder()
                    .maximumSize(50_000).expireAfterWrite(Duration.ofHours(1)).build();
                case "user-sessions" -> Caffeine.newBuilder()
                    .maximumSize(10_000).expireAfterWrite(Duration.ofMinutes(30)).build();
                case "config" -> Caffeine.newBuilder()
                    .maximumSize(100).expireAfterWrite(Duration.ofDays(1)).build();
                default -> Caffeine.newBuilder()
                    .maximumSize(1_000).expireAfterWrite(Duration.ofMinutes(5)).build();
            };
        }
    };
    return manager;
}
Redis for distributed caching:
spring:
  cache:
    type: redis
    redis:
      time-to-live: 600000 # 10 minutes in milliseconds
      cache-null-values: false
  data:
    redis:
      host: redis.internal
      port: 6379
Redis cache is appropriate when multiple application instances must share the same cache — otherwise each instance has its own cache and cache invalidation requires hitting all instances. The tradeoff: network latency per cache lookup (typically 0.5–2ms for a local Redis) vs the database query cost it replaces.
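Whether that tradeoff pays off is simple arithmetic: every read pays the cache lookup, and misses additionally pay the backing query. A minimal sketch of the break-even check, with illustrative latency numbers rather than measurements:

```java
// Break-even check for a cache in front of a query.
// All latency figures below are illustrative assumptions, not measurements.
public class CacheBreakEven {

    // Expected per-read latency: every lookup pays the cache round trip,
    // and the miss fraction additionally pays the backing query.
    static double expectedLatencyMs(double hitRate, double lookupMs, double queryMs) {
        return lookupMs + (1 - hitRate) * queryMs;
    }

    public static void main(String[] args) {
        double lookupMs = 1.0;  // assumed Redis round trip
        double queryMs = 20.0;  // assumed database query cost

        // 90% hit rate against a 20ms query: about 3ms expected vs 20ms uncached.
        System.out.println(expectedLatencyMs(0.90, lookupMs, queryMs));

        // Against a 1ms query, the same 90% hit rate loses: about 1.1ms vs 1ms.
        System.out.println(expectedLatencyMs(0.90, lookupMs, 1.0));
    }
}
```

The second case is the "caching in front of fast operations" failure discussed later: when the query is already as cheap as the lookup, the cache can only add latency.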
Cache key design
The default cache key uses all method parameters. For a method with multiple parameters where only some determine the cached value:
// Default key uses both parameters, so each (category, page) pair is a separate entry
@Cacheable("products")
public Page<Product> findProducts(String category, int page) { ... }
// Key: products::SimpleKey [category, page]
// 100 pages × 20 categories = 2,000 cache entries for paginated data: probably wrong

// Explicit key: cache only by category
@Cacheable(value = "products", key = "#category")
public List<Product> findByCategory(String category) { ... }
Paginated results are generally not worth caching by page — too many cache entries, limited reuse. Cache the full unpaged result (if feasible) or cache at the data source level.
Composite keys:
@Cacheable(value = "prices", key = "#productId + '_' + #currency")
public Money getPrice(Long productId, String currency) { ... }
String concatenation for composite keys is fragile: if a value can contain the separator, distinct inputs collapse to identical keys. With string-typed parts, ("1", "2_USD") and ("1_2", "USD") both concatenate to 1_2_USD. Use a separator that cannot appear in the values, or use SpEL's {#a, #b} list syntax, which keeps the parts distinct:
@Cacheable(value = "prices", key = "{ #productId, #currency }")
public Money getPrice(Long productId, String currency) { ... }
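The collision is easy to demonstrate with plain strings. A small sketch contrasting a concatenated key with a list-based key, which is what the SpEL list syntax effectively produces (class and method names invented for illustration):

```java
import java.util.List;

// Shows why string-concatenated composite keys can collide
// while list-based keys compare element-wise and cannot.
public class CompositeKeyDemo {

    static String concatKey(String a, String b) {
        return a + "_" + b; // separator can also appear inside the values
    }

    static List<String> listKey(String a, String b) {
        return List.of(a, b); // parts stay separate; equality is element-wise
    }

    public static void main(String[] args) {
        // Distinct inputs, identical concatenated key: a silent collision.
        System.out.println(concatKey("1", "2_USD").equals(concatKey("1_2", "USD"))); // true

        // The list key distinguishes the same inputs.
        System.out.println(listKey("1", "2_USD").equals(listKey("1_2", "USD"))); // false
    }
}
```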
@CacheEvict and @CachePut — keeping the cache consistent
@CacheEvict removes entries when data changes:
@CacheEvict(value = "products", key = "#product.id")
public Product updateProduct(Product product) {
    return productRepository.save(product);
}

// Evict all entries in the cache
@CacheEvict(value = "products", allEntries = true)
public void reloadProductCatalog() {
    // bulk update
}
@CachePut updates the cache without preventing method execution — the method always runs and its result replaces the cache entry:
@CachePut(value = "products", key = "#result.id")
public Product updateProduct(Product product) {
    return productRepository.save(product);
}
The practical difference: @CacheEvict removes the entry — the next read will miss and reload from the database. @CachePut keeps the entry current — no miss on the next read. @CachePut is appropriate when the update and the cached result are the same method — the update produces exactly what should be in the cache. @CacheEvict is safer when you can't guarantee the method's return value matches what other cache users expect.
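The contrast is easy to see with a plain map standing in for the cache store. A toy sketch of the two update styles, not Spring's actual machinery; the map and method names are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of @CacheEvict vs @CachePut semantics.
// A HashMap stands in for the cache store; persistence is elided.
public class EvictVsPut {
    static Map<Long, String> cache = new HashMap<>();

    // @CacheEvict-style update: drop the entry so the next read misses and reloads.
    static void updateWithEvict(Long id, String value) {
        saveToDatabase(id, value);
        cache.remove(id);
    }

    // @CachePut-style update: the method result replaces the entry; no miss later.
    static void updateWithPut(Long id, String value) {
        saveToDatabase(id, value);
        cache.put(id, value);
    }

    static void saveToDatabase(Long id, String value) { /* elided */ }

    public static void main(String[] args) {
        cache.put(1L, "old");

        updateWithEvict(1L, "new");
        System.out.println(cache.containsKey(1L)); // false: next read reloads

        updateWithPut(1L, "new");
        System.out.println(cache.get(1L)); // "new": next read hits
    }
}
```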
Cache warming — pre-loading before traffic arrives
Cold caches — empty caches at startup — cause a burst of cache misses when the first requests arrive. For high-traffic applications, this cold start can overwhelm the database.
Warm the cache after application startup using ApplicationReadyEvent:
@Component
public class CacheWarmer {

    private static final Logger log = LoggerFactory.getLogger(CacheWarmer.class);

    private final ProductService productService;
    private final ProductRepository productRepository;

    public CacheWarmer(ProductService productService, ProductRepository productRepository) {
        this.productService = productService;
        this.productRepository = productRepository;
    }

    @EventListener(ApplicationReadyEvent.class)
    public void warmCache() {
        log.info("Warming product cache...");
        // Load frequently accessed products through the @Cacheable method
        productRepository.findTopSellers(1000)
            .forEach(p -> productService.findProduct(p.getId()));
        log.info("Cache warming complete");
    }
}
Calling productService.findProduct() — the @Cacheable method — populates the cache. This is preferable to populating the cache directly because it uses the same code path as production reads.
For large caches, warm in batches with a brief pause to avoid overwhelming the database during startup:
@EventListener(ApplicationReadyEvent.class)
public void warmCache() {
    List<Long> topProductIds = productRepository.findTopSellerIds(10_000);
    // Lists.partition and Uninterruptibles come from Guava
    Lists.partition(topProductIds, 100).forEach(batch -> {
        batch.forEach(id -> productService.findProduct(id));
        Uninterruptibles.sleepUninterruptibly(50, TimeUnit.MILLISECONDS);
    });
}
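If Guava isn't already on the classpath, the same batching works with plain JDK subList slices. A sketch with an invented partition helper; the service call is elided since it needs a running Spring context:

```java
import java.util.ArrayList;
import java.util.List;

// Plain-JDK batch warming: process ids in fixed-size slices
// with a short pause between slices to spread database load.
public class BatchWarmer {

    // Split a list into consecutive slices of at most `size` elements.
    static <T> List<List<T>> partition(List<T> items, int size) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += size) {
            batches.add(items.subList(i, Math.min(i + size, items.size())));
        }
        return batches;
    }

    public static void main(String[] args) throws InterruptedException {
        List<Long> ids = List.of(1L, 2L, 3L, 4L, 5L);
        for (List<Long> batch : partition(ids, 2)) {
            batch.forEach(id -> { /* productService.findProduct(id) would go here */ });
            Thread.sleep(50); // brief pause between batches
        }
        System.out.println(partition(ids, 2).size()); // 3 batches: [1,2], [3,4], [5]
    }
}
```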
The thundering herd — when cache misses pile up
When a cache entry expires and multiple threads simultaneously request it, they all miss and simultaneously execute the underlying method — all hit the database at once. For a popular cache entry backing an expensive query, this can spike database load.
Caffeine handles this with refreshAfterWrite combined with expireAfterWrite:
Caffeine.newBuilder()
    .maximumSize(10_000)
    .refreshAfterWrite(Duration.ofMinutes(5)) // refresh in background before expiry
    .expireAfterWrite(Duration.ofMinutes(10)) // hard expiry if refresh fails
    .build(key -> productRepository.findById(key).orElseThrow());
refreshAfterWrite triggers an asynchronous background refresh when an entry is accessed after the refresh window; callers get the existing (possibly stale) value while the refresh runs, so no herd of threads ever reloads the same key. expireAfterWrite is the hard expiry: if refreshes keep failing, the entry is eventually evicted rather than served forever. Note that refreshAfterWrite only takes effect on caches built with a loader, as above; when going through Spring's CaffeineCacheManager, that means also supplying one via setCacheLoader.
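For in-process caches that aren't built around a loader, the same protection can be had by funneling concurrent misses through a single computation. A minimal sketch using ConcurrentHashMap.computeIfAbsent, whose contract guarantees the mapping function runs at most once per absent key; the class and loader are invented for illustration:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Single-flight loading: concurrent misses for the same key block on
// one computation instead of all hitting the database at once.
public class SingleFlightCache {
    static final ConcurrentHashMap<Long, String> cache = new ConcurrentHashMap<>();
    static final AtomicInteger dbCalls = new AtomicInteger();

    static String findProduct(Long id) {
        // computeIfAbsent runs the loader at most once per absent key;
        // other threads asking for the same key wait for its result.
        return cache.computeIfAbsent(id, k -> {
            dbCalls.incrementAndGet(); // stands in for the database query
            return "product-" + k;
        });
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> findProduct(42L));
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(dbCalls.get()); // 1: eight concurrent reads, one load
    }
}
```

One caveat: the loader runs while holding the map bin's lock, so a slow database query can briefly block unrelated writes to the same bin. That is usually acceptable for a cache loader, but keep the computation short.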
For distributed caches (Redis), the pattern is a distributed lock on the missing key:
public Product findProduct(Long productId) {
    String cacheKey = "product:" + productId;
    Product cached = redisCache.get(cacheKey);
    if (cached != null) return cached;

    // Only one thread reloads; the others fail to acquire and fall through
    String lockKey = "lock:" + cacheKey;
    boolean acquired = false;
    try {
        acquired = redisLock.acquire(lockKey, Duration.ofSeconds(5));
        if (acquired) {
            // Re-check after acquiring the lock: another thread may have loaded
            cached = redisCache.get(cacheKey);
            if (cached != null) return cached;
            Product product = productRepository.findById(productId).orElseThrow();
            redisCache.set(cacheKey, product, Duration.ofMinutes(10));
            return product;
        }
    } finally {
        if (acquired) redisLock.release(lockKey); // never release a lock we don't hold
    }
    // Couldn't acquire the lock: fall back to the database rather than block
    return productRepository.findById(productId).orElseThrow();
}
When caching makes things worse
Caching mutable data with long TTLs. A product price cached for one hour means price changes take up to an hour to appear. For data where staleness causes business problems — pricing, inventory, user permissions — either shorten the TTL or use @CacheEvict on writes. A cache that serves stale data is not a performance improvement — it's a correctness problem with a performance side effect.
Caching in front of fast operations. Caching a method that takes 1ms to execute adds cache lookup overhead (often 0.5–2ms for Redis) without meaningful benefit. Profile before caching. If the method is already fast, caching slows it down.
Caching large objects in heap memory. A Caffeine cache holding 10,000 Order objects with 50 line items each may hold hundreds of megabytes of heap. This increases GC pressure and may cause GC pauses that offset the performance gain. Monitor heap usage after adding caching and check Caffeine's eviction stats — if entries are evicted before they're reused, the cache is too small to be effective.
Cache key collisions. A poorly designed cache key that maps different inputs to the same key returns wrong data — a correctness bug, not a performance bug. Test cache key uniqueness explicitly.
Missing @CacheEvict on writes. A cache that's populated on read but never evicted on write serves indefinitely stale data. Every @Cacheable should have a corresponding @CacheEvict on the methods that modify the cached data.
Validating cache effectiveness
Caffeine's stats, enabled with .recordStats():
// Via Micrometer — auto-registered when using Spring Boot + Actuator
// cache.gets{name="products", result="hit"} / cache.gets{name="products"} = hit rate
Or directly from the native cache:
@Autowired CacheManager cacheManager;

public CacheStats stats(String cacheName) {
    CaffeineCache cache = (CaffeineCache) cacheManager.getCache(cacheName);
    return cache.getNativeCache().stats();
}
A cache with a hit rate below 50% is rarely helping: either the data doesn't repeat enough to benefit from caching, the TTL is too short, or the cache is too small. Conversely, a high hit rate with unchanged p99 latency means the remaining misses are still expensive enough to dominate tail latency, or the cached method was never the bottleneck. Either finding warrants investigation before concluding that caching is working.
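The hit-rate check itself is just a ratio over the counters that recordStats() exposes. A tiny sketch with made-up counts, not real measurements:

```java
// Hit rate from the raw hit/miss counters a stats-enabled cache exposes.
// All counts below are made up for illustration.
public class HitRateCheck {

    static double hitRate(long hits, long misses) {
        long total = hits + misses;
        return total == 0 ? 1.0 : (double) hits / total; // empty cache: vacuously 100%
    }

    public static void main(String[] args) {
        // 9,000 hits out of 12,000 gets = 75%: the cache is pulling its weight.
        System.out.println(hitRate(9_000, 3_000)); // 0.75

        // 4,000 hits out of 12,000 gets is about 33%: below the 50% line, so
        // investigate TTL, cache size, or whether the data repeats at all.
        System.out.println(hitRate(4_000, 8_000));
    }
}
```

Caffeine's own CacheStats.hitRate() computes the same ratio; the point of spelling it out is that the threshold argument above is about these two counters and nothing else.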