Design LRU Cache (LLD)
1. Requirements
- Functional
  - Fixed-capacity key-value store; on insert when full, evict the least recently used entry.
  - `Get(key)` returns the value and marks the key as most recently used; `Put(key, value)` upserts.
  - All operations should be O(1) average time.
- Non-Functional
  - Thread-safe usage under concurrent `Get`/`Put` if exposed via an API.
  - Memory bounded by capacity; no unbounded side structures.
- Assumptions / Out of Scope
  - Keys and values are in-memory references; no persistence or TTL (TTL would be a separate concern).
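To make the eviction rule concrete, here is a brief usage sketch. It assumes the `lrucache` skeleton from section 6 (`NewLRUCache`, `Get`, `Put`) and is illustrative rather than part of the original design.

```go
// Usage sketch only; assumes the lrucache skeleton from section 6.
cache := lrucache.NewLRUCache(2)

cache.Put("a", 1)
cache.Put("b", 2)
cache.Get("a")    // "a" becomes most recently used; "b" is now least recently used
cache.Put("c", 3) // over capacity: "b", the least recently used key, is evicted

_, ok := cache.Get("b") // ok == false, "b" was evicted
_, ok = cache.Get("a")  // ok == true, "a" survived because it was touched
```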
2. Core Entities
| Entity | Responsibility | Key Attributes |
|---|---|---|
| LRUCache | Public API and capacity enforcement | capacity, index, orderList |
| ListNode | Doubly linked list node for recency | key, value, previous, next |
| DoublyLinkedList | Maintains MRU at head, LRU at tail | headSentinel, tailSentinel |
| KeyIndex | Maps key to list node for O(1) splice | map structure |
3. Class Diagram
4. State / Sequence Diagram (where relevant)
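The `Get` and `Put` sequences can also be traced in code. This is a minimal sketch that fills in the Go skeleton from section 6; the `remove` and `pushFront` helpers are illustrative names rather than a published API, and the sentinel wiring in `NewLRUCache` is an assumption consistent with the entity table above.

```go
package lrucache

// NewLRUCache wires head and tail as sentinels so splices never touch nil.
func NewLRUCache(capacity int) *LRUCache {
	head, tail := &ListNode{}, &ListNode{}
	head.Next, tail.Previous = tail, head
	return &LRUCache{
		capacity:  capacity,
		keyToNode: make(map[string]*ListNode, capacity),
		head:      head,
		tail:      tail,
	}
}

// remove unlinks a node from the recency list in O(1).
func (cache *LRUCache) remove(node *ListNode) {
	node.Previous.Next = node.Next
	node.Next.Previous = node.Previous
}

// pushFront splices a node in right after the head sentinel (the MRU slot).
func (cache *LRUCache) pushFront(node *ListNode) {
	node.Previous = cache.head
	node.Next = cache.head.Next
	cache.head.Next.Previous = node
	cache.head.Next = node
}

// Get: index lookup, then move the node to the front to mark it most recently used.
func (cache *LRUCache) Get(key string) (any, bool) {
	node, ok := cache.keyToNode[key]
	if !ok {
		return nil, false
	}
	cache.remove(node)
	cache.pushFront(node)
	return node.Value, true
}

// Put: update in place if present; otherwise insert at the front and,
// if over capacity, evict the node just before the tail sentinel (the LRU entry).
func (cache *LRUCache) Put(key string, value any) {
	if node, ok := cache.keyToNode[key]; ok {
		node.Value = value
		cache.remove(node)
		cache.pushFront(node)
		return
	}
	node := &ListNode{Key: key, Value: value}
	cache.keyToNode[key] = node
	cache.pushFront(node)
	if len(cache.keyToNode) > cache.capacity {
		lru := cache.tail.Previous
		cache.remove(lru)
		delete(cache.keyToNode, lru.Key)
	}
}
```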
5. Design Patterns Applied
- No classic Gang-of-Four pattern is required; the structure is a hash map plus an intrusive doubly linked list. A Decorator can optionally wrap the cache to meter hits and misses, as sketched below.
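A minimal sketch of that metering decorator, assuming the Go skeleton from section 6; `MeteredCache` and its counters are illustrative names, not part of the original design.

```go
package lrucache

// MeteredCache decorates the cache with hit/miss counters (illustrative name).
type MeteredCache struct {
	inner  *LRUCache
	hits   int
	misses int
}

// Get delegates to the wrapped cache and records whether the lookup hit.
func (c *MeteredCache) Get(key string) (any, bool) {
	value, ok := c.inner.Get(key)
	if ok {
		c.hits++
	} else {
		c.misses++
	}
	return value, ok
}

// Put passes through unchanged; only reads are metered here.
func (c *MeteredCache) Put(key string, value any) {
	c.inner.Put(key, value)
}
```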
6. Implementation
Go
package lrucache

// ListNode is one entry in the recency list; nodes are spliced in O(1).
type ListNode struct {
	Key      string
	Value    any
	Previous *ListNode
	Next     *ListNode
}

// LRUCache pairs the key index with the recency list.
type LRUCache struct {
	capacity   int
	keyToNode  map[string]*ListNode
	head, tail *ListNode // sentinels: MRU sits just after head, LRU just before tail
}

func NewLRUCache(capacity int) *LRUCache { /* ... */ }

func (cache *LRUCache) Get(key string) (any, bool) { /* ... */ }

func (cache *LRUCache) Put(key string, value any) { /* ... */ }

JavaScript
class ListNode {
  constructor(key, value) {
    this.key = key;
    this.value = value;
    this.previous = null;
    this.next = null;
  }
}

class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.keyToNode = new Map();
    // sentinel head/tail and list helpers
  }

  get(key) { /* ... */ }

  put(key, value) { /* ... */ }
}

7. Concurrency / Thread Safety
- Collisions to guard against: parallel `Put` on the same key, and a `Get` racing an eviction.
- Granularity: a single `sync.RWMutex` on `LRUCache` in Go; take the writer lock for mutations, and use the reader lock on `Get` only if the recency-list splice is still performed atomically under a lock (a locked wrapper is sketched below).
- JavaScript: serialize async callers with a mutex library, or keep the cache single-threaded and document that constraint.
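A minimal locking sketch under those assumptions. `SafeLRUCache` is an illustrative wrapper name, and it uses a plain `sync.Mutex` rather than the `RWMutex` mentioned above because `Get` also splices the recency list and therefore needs exclusive access.

```go
package lrucache

import "sync"

// SafeLRUCache is an illustrative wrapper, not part of the core design.
// Get reorders the recency list, so it takes the exclusive lock as well.
type SafeLRUCache struct {
	mu    sync.Mutex
	inner *LRUCache
}

func NewSafeLRUCache(capacity int) *SafeLRUCache {
	return &SafeLRUCache{inner: NewLRUCache(capacity)}
}

func (c *SafeLRUCache) Get(key string) (any, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.inner.Get(key)
}

func (c *SafeLRUCache) Put(key string, value any) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.inner.Put(key, value)
}
```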
8. Extensibility & Followups
- Sharded LRU keyed by a hash of the key, for lower lock contention.
- Weighted capacity when values have variable byte size.
- Time-to-live eviction layered on top of recency (a separate index, or lazy expiry on `Get`; a sketch follows this list).
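A minimal sketch of the lazy-expiry option, assuming the Go skeleton from section 6; `TTLCache`, `entryWithDeadline`, and their fields are illustrative names, not part of the core design.

```go
package lrucache

import "time"

// entryWithDeadline wraps a stored value with its expiry time (illustrative name).
type entryWithDeadline struct {
	value    any
	deadline time.Time
}

// TTLCache layers a fixed TTL on top of the recency-based cache.
type TTLCache struct {
	inner *LRUCache
	ttl   time.Duration
}

func (c *TTLCache) Put(key string, value any) {
	c.inner.Put(key, entryWithDeadline{value: value, deadline: time.Now().Add(c.ttl)})
}

// Get expires entries lazily: a hit past its deadline is reported as a miss.
// (A fuller implementation would also remove the stale node from the inner cache.)
func (c *TTLCache) Get(key string) (any, bool) {
	raw, ok := c.inner.Get(key)
	if !ok {
		return nil, false
	}
	entry, ok := raw.(entryWithDeadline)
	if !ok || time.Now().After(entry.deadline) {
		return nil, false
	}
	return entry.value, true
}
```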