Memoization in Ruby — Patterns I Use Every Day
by Eric Hanson, Backend Developer at Clean Systems Consulting
The ||= trap everyone hits once
The standard Ruby memoization pattern is three characters:
def config
  @config ||= load_config_from_disk
end
||= assigns only if the left side is nil or false. That's the trap. If load_config_from_disk can legitimately return nil or false, this recalculates on every call. The assignment never sticks.
This comes up more than you'd expect: feature flag checks that return false, database lookups that return nil for missing records, parsers that return nil on empty input. If the method can return a falsy value, ||= is the wrong tool.
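To see the trap in action, here is a minimal sketch; FeatureFlags and expensive_lookup are hypothetical stand-ins for a real flag store:

```ruby
class FeatureFlags
  attr_reader :lookups

  def initialize
    @lookups = 0
  end

  # Broken: the flag is legitimately false, so ||= re-runs the lookup every call.
  def beta_enabled?
    @beta_enabled ||= expensive_lookup
  end

  private

  def expensive_lookup
    @lookups += 1
    false # a legitimately falsy result
  end
end

flags = FeatureFlags.new
flags.beta_enabled?
flags.beta_enabled?
flags.lookups # => 2, not 1: the "memoized" value never stuck
```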
The fix is an explicit defined? check:
def config
  return @config if defined?(@config)
  @config = load_config_from_disk
end
defined?(@config) returns nil if the variable has never been assigned, and a non-nil string ("instance-variable") if it has — even if its value is nil or false. This is the correct general-purpose memoization pattern. ||= is a shortcut that's only safe when you know the computed value is always truthy.
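Here's a sketch of the same pattern caching a legitimately nil result; UserLookup and fetch_theme are invented names:

```ruby
class UserLookup
  attr_reader :queries

  def initialize
    @queries = 0
  end

  # Safe: defined? notices the assignment even when the stored value is nil.
  def preferred_theme
    return @preferred_theme if defined?(@preferred_theme)
    @preferred_theme = fetch_theme
  end

  private

  def fetch_theme
    @queries += 1
    nil # e.g. the user has no theme set
  end
end

lookup = UserLookup.new
lookup.preferred_theme
lookup.preferred_theme
lookup.queries # => 1 (the nil result is cached)
```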
Memoizing with arguments
||= doesn't compose with arguments at all. The moment a method takes a parameter, you need a hash:
def user_permissions(user_id)
  @user_permissions ||= {}
  @user_permissions[user_id] ||= fetch_permissions(user_id)
end
The outer ||= initializes the cache hash once. The inner ||= is fine here if fetch_permissions always returns a truthy value (an array, even if empty). If it can return nil, apply the same defined? logic — but keyed:
def user_permissions(user_id)
  @user_permissions ||= {}
  unless @user_permissions.key?(user_id)
    @user_permissions[user_id] = fetch_permissions(user_id)
  end
  @user_permissions[user_id]
end
Hash#key? is the argument-memoization equivalent of defined? — it distinguishes between "not yet computed" and "computed and nil."
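The distinction in isolation:

```ruby
cache = { 42 => nil } # user 42 was looked up; the result was nil

cache[42]      # => nil (indistinguishable from "never looked up")
cache.key?(42) # => true (we know the lookup already happened)
cache.key?(99) # => false (this one still needs computing)
```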
For multi-argument methods, use an array key:
def exchange_rate(from, to)
  @exchange_rates ||= {}
  @exchange_rates[[from, to]] ||= fetch_rate(from, to)
end
Arrays hash by value in Ruby, so [:usd, :eur] is a stable, consistent key. This is fine for small argument spaces. For large or unbounded ones, consider whether you actually want a process-level cache at all versus something with eviction (more on that shortly).
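Why the array key works, in a console-style sketch (the 0.92 rate is a made-up value):

```ruby
a = [:usd, :eur]
b = [:usd, :eur]

a.equal?(b)      # => false (different objects)
a.hash == b.hash # => true  (arrays hash by value)
a.eql?(b)        # => true  (so both map to the same hash slot)

rates = { [:usd, :eur] => 0.92 }
rates[[:usd, :eur]] # => 0.92 (a freshly built array still finds the entry)
```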
Thread safety
Instance-level memoization in a single-threaded Rails request is safe because each request has its own object. Class-level or module-level memoization — caching on self in a class method — is not:
# Not thread-safe
def self.schema
  @schema ||= load_schema
end
Two threads can both evaluate @schema as nil, both call load_schema, and race to assign. In most cases the result is just redundant work. In cases where the initialization has side effects — opening a file handle, registering a callback — you get duplicates.
The standard fix is a Mutex:
SCHEMA_LOCK = Mutex.new

def self.schema
  SCHEMA_LOCK.synchronize { @schema ||= load_schema }
end
A subtlety: putting the entire ||= inside synchronize means every call acquires the lock, even after @schema is set. For hot paths, double-checked locking avoids that overhead:
def self.schema
  return @schema if @schema
  SCHEMA_LOCK.synchronize { @schema ||= load_schema }
end
The outer check is intentionally unsynchronized. In MRI Ruby this is safe because the GVL (Global VM Lock) makes simple reads atomic. On JRuby or TruffleRuby, you'd want a read-write lock instead. Know your runtime before relying on this pattern.
Class-level caches with expiry
Sometimes you want memoization that resets periodically — rate limit counters, cached API responses, configuration that can hot-reload. A lightweight approach without pulling in a full caching layer:
class RateLimitCache
  TTL = 60 # seconds

  def initialize
    @store = {}
    @expires_at = {}
  end

  def fetch(key, &block)
    if !@store.key?(key) || Time.now.to_i > @expires_at[key]
      @store[key] = block.call
      @expires_at[key] = Time.now.to_i + TTL
    end
    @store[key]
  end
end
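A quick check of the hit path; the class is repeated here so the snippet runs standalone, and :quota and the return string are placeholders:

```ruby
# RateLimitCache from above, repeated so this snippet is self-contained.
class RateLimitCache
  TTL = 60 # seconds

  def initialize
    @store = {}
    @expires_at = {}
  end

  def fetch(key, &block)
    if !@store.key?(key) || Time.now.to_i > @expires_at[key]
      @store[key] = block.call
      @expires_at[key] = Time.now.to_i + TTL
    end
    @store[key]
  end
end

cache = RateLimitCache.new
computations = 0

result1 = cache.fetch(:quota) { computations += 1; "remaining: 100" }
result2 = cache.fetch(:quota) { computations += 1; "remaining: 100" }

computations # => 1 (the second fetch hit the cache inside the TTL window)
```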
This is not a replacement for Redis or Memcached — it's process-local, has no eviction beyond TTL, and resets on restart. But it eliminates repeated work within a single process lifetime without external dependencies, which is the right call for configuration or computed constants that are cheap to regenerate.
For anything that needs to survive restarts, be shared across processes, or hold significant volume, use a proper cache store. Rails' ActiveSupport::Cache::MemoryStore or RedisCacheStore give you the same fetch-with-block interface with real eviction semantics.
Memoization in ActiveRecord models
Rails developers reach for memoization in model methods constantly, and it's usually fine — with one specific exception. Memoized values in model instances survive for the lifetime of the object. In background jobs or batch scripts where you load records once and mutate them in a loop, stale memoized values are a frequent source of bugs:
user = User.find(id)
user.expensive_computed_status # memoized
user.update!(role: "admin")
user.expensive_computed_status # returns stale memoized value
If the memoized method depends on attributes that change, you need to invalidate explicitly. The blunt approach:
def reset_memoization
  @expensive_computed_status = nil
  # or use remove_instance_variable for the defined? pattern
  remove_instance_variable(:@expensive_computed_status) if defined?(@expensive_computed_status)
end
The cleaner architectural approach: keep memoization on methods that depend only on immutable or constructor-set state. If a method's result can change during the object's lifetime, it shouldn't be memoized without an explicit invalidation strategy.
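That invalidation strategy in plain Ruby, outside Rails; Account, role, and compute_status are hypothetical:

```ruby
class Account
  attr_reader :role

  def initialize(role)
    @role = role
  end

  # The writer invalidates the memo because status depends on role.
  def role=(new_role)
    @role = new_role
    remove_instance_variable(:@status) if defined?(@status)
  end

  def status
    return @status if defined?(@status)
    @status = compute_status
  end

  private

  def compute_status
    role == "admin" ? "privileged" : "standard"
  end
end

account = Account.new("member")
account.status # => "standard"
account.role = "admin"
account.status # => "privileged" (recomputed after invalidation)
```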
The practical hierarchy
For methods that always return truthy, computed once per object lifetime: ||=.
For methods that can return nil or false: defined? guard.
For methods with arguments: hash cache keyed by argument, with Hash#key? when nil is a valid return.
For class-level shared state in a multi-threaded runtime: Mutex, with double-checked locking if the path is hot.
For cached values that need expiry: a simple TTL wrapper or ActiveSupport::Cache.
The one thing these patterns have in common: they're all just instance variables. Ruby's memoization story doesn't require a library. The complexity comes from the edge cases — falsy returns, concurrency, invalidation — not from the mechanism itself.