146. LRU Cache
At a Glance
- Topic: linked-list
- Pattern: Hash map + doubly linked list (recency ordering)
- Difficulty: Medium
- Companies: Amazon, Google, Meta, Microsoft, Uber
- Frequency: Very High
- LeetCode: 146
Problem (one-liner)
Implement Get(key) and Put(key, value), both in O(1) average time, evicting the least-recently-used entry when size exceeds capacity.
Recognition Cues
- Recency order + eviction policy
- O(1) requires a hash map for lookup plus a linked structure for cheap promotion/eviction
- Doubly linked list between sentinel head and tail is standard
Diagram
At-a-glance flow (replace with problem-specific Mermaid as you refine this note). camelCase node IDs; no spaces in IDs.
Approaches
- Brute force — array + linear scan for LRU — O(n) per op.
- Better — map + slice — removals still O(n).
- Optimal — O(1) average — map key -> node; DLL nodes near the head are MRU, near the tail are LRU.
Optimal Solution
Go
type lruNode struct {
    key, value int
    prev, next *lruNode
}

type LRUCache struct {
    capacity int
    data     map[int]*lruNode
    head     *lruNode // sentinel; head.next is the MRU node
    tail     *lruNode // sentinel; tail.prev is the LRU node
}

func Constructor(capacity int) LRUCache {
    head := &lruNode{}
    tail := &lruNode{}
    head.next = tail
    tail.prev = head
    return LRUCache{
        capacity: capacity,
        data:     make(map[int]*lruNode),
        head:     head,
        tail:     tail,
    }
}

// add splices node in right after the head sentinel (MRU position).
func (cache *LRUCache) add(node *lruNode) {
    node.prev = cache.head
    node.next = cache.head.next
    cache.head.next.prev = node
    cache.head.next = node
}

// remove unlinks node from the list in O(1).
func (cache *LRUCache) remove(node *lruNode) {
    node.prev.next = node.next
    node.next.prev = node.prev
}

func (cache *LRUCache) Get(key int) int {
    if node, exists := cache.data[key]; exists {
        cache.remove(node)
        cache.add(node) // promote to MRU
        return node.value
    }
    return -1
}

func (cache *LRUCache) Put(key int, value int) {
    if node, exists := cache.data[key]; exists {
        node.value = value
        cache.remove(node)
        cache.add(node)
        return
    }
    node := &lruNode{key: key, value: value}
    cache.data[key] = node
    cache.add(node)
    if len(cache.data) > cache.capacity {
        lru := cache.tail.prev
        cache.remove(lru)
        delete(cache.data, lru.key) // also drop the map entry
    }
}

JavaScript
class LRUNode {
    constructor(key = 0, value = 0) {
        this.key = key;
        this.value = value;
        this.prev = null;
        this.next = null;
    }
}

class LRUCache {
    constructor(capacity) {
        this.capacity = capacity;
        this.data = new Map();
        this.head = new LRUNode(); // sentinel; head.next is the MRU node
        this.tail = new LRUNode(); // sentinel; tail.prev is the LRU node
        this.head.next = this.tail;
        this.tail.prev = this.head;
    }

    // Splice node in right after the head sentinel (MRU position).
    add(node) {
        node.prev = this.head;
        node.next = this.head.next;
        this.head.next.prev = node;
        this.head.next = node;
    }

    // Unlink node from the list in O(1).
    remove(node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    get(key) {
        if (!this.data.has(key)) {
            return -1;
        }
        const node = this.data.get(key);
        this.remove(node);
        this.add(node); // promote to MRU
        return node.value;
    }

    put(key, value) {
        if (this.data.has(key)) {
            const node = this.data.get(key);
            node.value = value;
            this.remove(node);
            this.add(node);
            return;
        }
        const node = new LRUNode(key, value);
        this.data.set(key, node);
        this.add(node);
        if (this.data.size > this.capacity) {
            const lru = this.tail.prev;
            this.remove(lru);
            this.data.delete(lru.key); // also drop the map entry
        }
    }
}

Walkthrough
Capacity 2: put(1,1), put(2,2), get(1) promotes key 1, then put(3,3) evicts key 2 from the tail (LRU) end.
Invariant: head.next is the MRU node; tail.prev is the LRU eviction candidate.
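The same structure can also be sketched with Go's standard-library container/list, which supplies the doubly linked list (front = MRU, back = LRU) so only the map bookkeeping remains. This is a minimal alternative sketch, not the note's canonical solution; the names listLRU, entry, and newListLRU are illustrative. Its main replays the walkthrough above.

```go
package main

import (
	"container/list"
	"fmt"
)

// entry is the payload stored in each list element.
type entry struct{ key, value int }

// listLRU uses container/list: front of the list is MRU, back is LRU.
type listLRU struct {
	capacity int
	items    map[int]*list.Element
	order    *list.List
}

func newListLRU(capacity int) *listLRU {
	return &listLRU{
		capacity: capacity,
		items:    make(map[int]*list.Element),
		order:    list.New(),
	}
}

func (c *listLRU) Get(key int) int {
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el) // refresh recency
		return el.Value.(*entry).value
	}
	return -1
}

func (c *listLRU) Put(key, value int) {
	if el, ok := c.items[key]; ok {
		el.Value.(*entry).value = value
		c.order.MoveToFront(el)
		return
	}
	c.items[key] = c.order.PushFront(&entry{key, value})
	if len(c.items) > c.capacity {
		lru := c.order.Back()
		c.order.Remove(lru)
		delete(c.items, lru.Value.(*entry).key) // drop map entry too
	}
}

func main() {
	c := newListLRU(2)
	c.Put(1, 1)
	c.Put(2, 2)
	fmt.Println(c.Get(1)) // 1 — promotes key 1
	c.Put(3, 3)           // evicts key 2 (LRU)
	fmt.Println(c.Get(2)) // -1
	fmt.Println(c.Get(3)) // 3
}
```

Using container/list trades a little type-assertion noise (el.Value.(*entry)) for not hand-rolling the sentinel pointer surgery.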
Edge Cases
- Capacity 1 — every new key evicts the previous one.
- Update existing key — counts as a use; move to front.
- Repeated gets — refresh recency each time.
Pitfalls
- Singly linked list — cannot remove the LRU node in O(1) without a predecessor pointer.
- Forgetting to delete the map entry on eviction — memory leak in long runs.
Similar Problems
- 138. Copy List with Random Pointer — another hash-driven structure on linked nodes.
- 21. Merge Two Sorted Lists — lightweight relinking with sentinels.
- 2. Add Two Numbers — pointer surgery with a dummy “front”.
Variants
- LFU with O(1) — two maps + doubly linked buckets (classic Hard follow-on).
- TTL eviction — wrap nodes with expiry timestamps and lazy purge.
Mind-Map Tags
#design #hashmap #doubly-linked-list #eviction #recency