THN Interview Prep

Design LRU Cache (LLD)

1. Requirements

  • Functional

    • Fixed capacity key-value store; on insert when full, evict the least recently used entry.
    • Get(key) returns value and marks key as most recently used; Put(key, value) upserts.
    • All operations should be O(1) average time.
  • Non-Functional

    • Thread safety under concurrent Get/Put if the cache is exposed via a shared API.
    • Memory bounded by capacity; no unbounded side structures.
  • Assumptions / Out of Scope

    • Keys and values are in-memory references; no persistence or TTL (TTL would be a separate concern).

2. Core Entities

| Entity | Responsibility | Key Attributes |
| --- | --- | --- |
| LRUCache | Public API and capacity enforcement | capacity, index, orderList |
| ListNode | Doubly linked list node for recency | key, value, previous, next |
| DoublyLinkedList | Maintains MRU at head, LRU at tail | headSentinel, tailSentinel |
| KeyIndex | Maps key to list node for O(1) splice | map structure |
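How KeyIndex delivers the O(1) requirement: the map hands back a node pointer, and moving that node to the MRU position is a constant number of pointer rewires, never a traversal. A minimal sketch (the `node` and `touch` names here are illustrative, not from the design above):

```go
package main

import "fmt"

type node struct {
	key        string
	prev, next *node
}

// touch moves n to the MRU position (right after the head sentinel):
// two pointer rewires to unlink, two to relink, no scanning.
func touch(head, n *node) {
	n.prev.next, n.next.prev = n.next, n.prev // unlink in place
	n.prev, n.next = head, head.next          // relink at the front
	head.next.prev = n
	head.next = n
}

func main() {
	head, tail := &node{}, &node{}
	head.next, tail.prev = tail, head
	index := map[string]*node{}
	for _, k := range []string{"a", "b", "c"} { // append at the tail
		n := &node{key: k, prev: tail.prev, next: tail}
		tail.prev.next = n
		tail.prev = n
		index[k] = n
	}
	touch(head, index["c"]) // map lookup + splice, no list scan
	fmt.Println(head.next.key, tail.prev.key) // c b
}
```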

3. Class Diagram


4. State / Sequence Diagram (where relevant)


5. Design Patterns Applied

  • No classic Gang-of-Four pattern is required; the core is a hash map plus an intrusive doubly linked list. An optional Decorator can wrap the cache to meter hits and misses.
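A hedged sketch of that optional Decorator: `Cache`, `mapCache`, and `MeteredCache` below are illustrative names, and the map-backed inner cache is just a stand-in so the example is self-contained; in practice the wrapped component would be LRUCache.

```go
package main

import "fmt"

// Cache is the small interface both the real cache and the decorator share.
type Cache interface {
	Get(key string) (any, bool)
	Put(key string, value any)
}

// mapCache is a stand-in backing cache (no eviction), used only to
// demonstrate the decorator.
type mapCache map[string]any

func (m mapCache) Get(key string) (any, bool) { v, ok := m[key]; return v, ok }
func (m mapCache) Put(key string, value any)  { m[key] = value }

// MeteredCache decorates a Cache, counting hits and misses without
// changing the wrapped cache's behavior.
type MeteredCache struct {
	inner        Cache
	Hits, Misses int
}

func (m *MeteredCache) Get(key string) (any, bool) {
	v, ok := m.inner.Get(key)
	if ok {
		m.Hits++
	} else {
		m.Misses++
	}
	return v, ok
}

func (m *MeteredCache) Put(key string, value any) { m.inner.Put(key, value) }

func main() {
	c := &MeteredCache{inner: mapCache{}}
	c.Put("a", 1)
	c.Get("a")
	c.Get("b")
	fmt.Println(c.Hits, c.Misses) // 1 1
}
```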

6. Implementation

Go

package lrucache

type ListNode struct {
    Key           string
    Value         any
    Previous      *ListNode
    Next          *ListNode
}

type LRUCache struct {
    capacity   int
    keyToNode  map[string]*ListNode
    head, tail *ListNode // sentinels: head.Next is MRU, tail.Previous is LRU
}

func NewLRUCache(capacity int) *LRUCache { /* ... */ }
func (cache *LRUCache) Get(key string) (any, bool) { /* ... */ }
func (cache *LRUCache) Put(key string, value any) { /* ... */ }
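One way the elided bodies could be filled in, as an unsynchronized sketch (assumes capacity >= 1; sentinel head/tail nodes remove the empty-list edge cases):

```go
package main

import "fmt"

type ListNode struct {
	Key            string
	Value          any
	Previous, Next *ListNode
}

type LRUCache struct {
	capacity   int
	keyToNode  map[string]*ListNode
	head, tail *ListNode // sentinels: head.Next is MRU, tail.Previous is LRU
}

// NewLRUCache assumes capacity >= 1; capacity 0 is not handled here.
func NewLRUCache(capacity int) *LRUCache {
	c := &LRUCache{
		capacity:  capacity,
		keyToNode: make(map[string]*ListNode, capacity),
		head:      &ListNode{},
		tail:      &ListNode{},
	}
	c.head.Next, c.tail.Previous = c.tail, c.head
	return c
}

// unlink splices a node out of the list in O(1).
func (c *LRUCache) unlink(n *ListNode) {
	n.Previous.Next, n.Next.Previous = n.Next, n.Previous
}

// pushFront inserts a node at the MRU position in O(1).
func (c *LRUCache) pushFront(n *ListNode) {
	n.Previous, n.Next = c.head, c.head.Next
	c.head.Next.Previous = n
	c.head.Next = n
}

func (c *LRUCache) Get(key string) (any, bool) {
	n, ok := c.keyToNode[key]
	if !ok {
		return nil, false
	}
	c.unlink(n)
	c.pushFront(n) // touched: now most recently used
	return n.Value, true
}

func (c *LRUCache) Put(key string, value any) {
	if n, ok := c.keyToNode[key]; ok {
		n.Value = value // upsert: update in place, refresh recency
		c.unlink(n)
		c.pushFront(n)
		return
	}
	if len(c.keyToNode) >= c.capacity {
		lru := c.tail.Previous // evict least recently used
		c.unlink(lru)
		delete(c.keyToNode, lru.Key)
	}
	n := &ListNode{Key: key, Value: value}
	c.keyToNode[key] = n
	c.pushFront(n)
}

func main() {
	c := NewLRUCache(2)
	c.Put("a", 1)
	c.Put("b", 2)
	c.Get("a")    // "a" becomes MRU
	c.Put("c", 3) // capacity full: evicts "b"
	_, ok := c.Get("b")
	fmt.Println(ok) // false
}
```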

JavaScript

class ListNode {
  constructor(key, value) {
    this.key = key;
    this.value = value;
    this.previous = null;
    this.next = null;
  }
}

class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.keyToNode = new Map();
    // sentinel head/tail and list helpers
  }
  get(key) { /* ... */ }
  put(key, value) { /* ... */ }
}

7. Concurrency / Thread Safety

  • Race conditions: parallel Put on the same key; a Get racing with an eviction triggered by another caller's Put.
  • Granularity: in Go, a single sync.Mutex on LRUCache is the simplest correct choice. Note that Get also mutates the recency list, so a sync.RWMutex read lock on Get is only safe if the recency update is deferred or protected separately.
  • JavaScript: the runtime is single-threaded, so synchronous Get/Put cannot interleave; serialize overlapping async callers with a mutex/queue library, or document that the cache must not be used across awaits mid-operation.
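A minimal sketch of the single-lock option: `SyncCache` is an illustrative wrapper name, and a plain map stands in for the real LRU structures so the example is self-contained. A full Mutex (not RWMutex) is used because a real LRU Get also splices the recency list.

```go
package main

import (
	"fmt"
	"sync"
)

// SyncCache serializes all access with one mutex; with a real LRUCache
// inside, Get would also be a mutation (recency splice), so even reads
// take the full lock.
type SyncCache struct {
	mu    sync.Mutex
	store map[string]any // stand-in for keyToNode + the linked list
}

func NewSyncCache() *SyncCache {
	return &SyncCache{store: make(map[string]any)}
}

func (c *SyncCache) Get(key string) (any, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	v, ok := c.store[key]
	return v, ok
}

func (c *SyncCache) Put(key string, value any) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.store[key] = value
}

func main() {
	c := NewSyncCache()
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int) { // concurrent writers on the same key
			defer wg.Done()
			c.Put("k", i)
			c.Get("k")
		}(i)
	}
	wg.Wait()
	_, ok := c.Get("k")
	fmt.Println(ok) // true
}
```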

8. Extensibility & Follow-ups

  • Sharded LRU by hash of key for lower lock contention.

  • Weighted capacity when values have variable byte size.

  • Time-to-live eviction layered on top of recency (separate index or lazy expiry on Get).

See also

  • 146 LRU Cache (problem write-up)
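The sharding follow-up above could start from a hash-based shard selector, so each key maps deterministically to one of N independently locked caches; `shardCount` and `shardFor` are illustrative names.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

const shardCount = 16

// shardFor hashes the key (FNV-1a) and maps it to a shard index, so
// contention spreads across shardCount independent locks/caches.
func shardFor(key string) int {
	h := fnv.New32a()
	h.Write([]byte(key))
	return int(h.Sum32() % shardCount)
}

func main() {
	for _, k := range []string{"user:1", "user:2", "session:9"} {
		fmt.Printf("%s -> shard %d\n", k, shardFor(k))
	}
}
```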
