THN Interview Prep

Design LFU Cache (LLD)

1. Requirements

  • Functional

    • Fixed capacity; evict the least frequently used key; tie-break on least recently used among equal frequency.
    • Get increments a key's use frequency; Put inserts a new entry or updates an existing value (an update also counts as a use).
    • Target O(1) average per operation using augmented structures.
  • Non-Functional

    • Correct eviction under concurrent access if multi-threaded.
    • Predictable memory: O(capacity) auxiliary structures.
  • Assumptions / Out of Scope

    • Frequency decay or sliding window is out of scope (would layer policy on top).
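To make the eviction rule concrete before building the O(1) design, here is a deliberately naive O(n) reference sketch in Go (names `NaiveLFU` and `entry` are illustrative, not part of the design below): the victim is found by a linear scan applying exactly the stated rule, lowest frequency first, with least recent use as the tie-break.

```go
package main

import "fmt"

type entry struct {
	value string
	freq  int // use count
	last  int // logical timestamp of most recent use
}

// NaiveLFU is an O(n)-eviction reference used only to pin down semantics.
type NaiveLFU struct {
	capacity int
	clock    int
	entries  map[string]*entry
}

func NewNaiveLFU(capacity int) *NaiveLFU {
	return &NaiveLFU{capacity: capacity, entries: make(map[string]*entry)}
}

func (c *NaiveLFU) Get(key string) (string, bool) {
	e, ok := c.entries[key]
	if !ok {
		return "", false
	}
	c.clock++
	e.freq++
	e.last = c.clock
	return e.value, true
}

func (c *NaiveLFU) Put(key, value string) {
	c.clock++
	if e, ok := c.entries[key]; ok {
		e.value = value
		e.freq++
		e.last = c.clock
		return
	}
	if len(c.entries) == c.capacity {
		var victimKey string
		var victim *entry
		for k, e := range c.entries {
			// Lowest frequency wins; ties broken by least recent use.
			if victim == nil || e.freq < victim.freq ||
				(e.freq == victim.freq && e.last < victim.last) {
				victimKey, victim = k, e
			}
		}
		delete(c.entries, victimKey)
	}
	c.entries[key] = &entry{value: value, freq: 1, last: c.clock}
}

func main() {
	c := NewNaiveLFU(2)
	c.Put("a", "1")
	c.Put("b", "2")
	c.Get("a")      // "a" is now more frequent than "b"
	c.Put("c", "3") // "b" is least frequently used, so it is evicted
	_, bAlive := c.Get("b")
	fmt.Println(bAlive) // false
}
```

The scan makes the policy obvious at the cost of O(n) eviction; sections 2 and 6 replace it with frequency buckets to reach O(1).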

2. Core Entities

| Entity | Responsibility | Key Attributes |
| --- | --- | --- |
| LFUCache | API and eviction orchestration | capacity, keyToNode, frequencyBuckets |
| FrequencyBucket | Linked list of keys sharing the same count | frequency, headNode, tailNode |
| Node | Entry in a frequency list | key, value, frequency, previous, next |
| MinTracker | Tracks the current minimum frequency for eviction | minFrequency |

3. Class Diagram


4. State / Sequence Diagram (where relevant)


5. Design Patterns Applied

  • Composite view: frequency levels act as buckets of nodes (a conceptual composite of lists). The Composite pattern applies if each bucket is treated as a uniform container of entries.

6. Implementation

Go

package lfucache

type Node struct {
    Key, Value   string
    Frequency    int
    Previous     *Node
    Next         *Node
}

type FrequencyBucket struct {
    Frequency          int
    HeadNode, TailNode *Node
}

type LFUCache struct {
    capacity          int
    keyToNode         map[string]*Node
    frequencyToBucket map[int]*FrequencyBucket
    minFrequency      int
}

func NewLFUCache(capacity int) *LFUCache              { /* ... */ return nil }
func (cache *LFUCache) Get(key string) (string, bool) { /* ... */ return "", false }
func (cache *LFUCache) Put(key string, value string)  { /* ... */ }
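One way to fill in the elided bodies, as a single-threaded sketch: `keyToNode` gives O(1) lookup, each `FrequencyBucket` is a doubly linked list with sentinel head/tail (most recently used at the head), and `minFrequency` names the bucket to evict from. The helpers `bucketFor` and `touch` are my additions, not part of the stub above.

```go
package main

import "fmt"

type Node struct {
	Key, Value string
	Frequency  int
	Previous   *Node
	Next       *Node
}

// FrequencyBucket: doubly linked list (sentinel head/tail) of nodes sharing
// one use count; the most recently used node sits at the head.
type FrequencyBucket struct {
	HeadNode, TailNode *Node
}

func NewFrequencyBucket() *FrequencyBucket {
	head, tail := &Node{}, &Node{}
	head.Next, tail.Previous = tail, head
	return &FrequencyBucket{HeadNode: head, TailNode: tail}
}

func (b *FrequencyBucket) PushFront(n *Node) {
	n.Previous, n.Next = b.HeadNode, b.HeadNode.Next
	b.HeadNode.Next.Previous = n
	b.HeadNode.Next = n
}

func (b *FrequencyBucket) Remove(n *Node) {
	n.Previous.Next = n.Next
	n.Next.Previous = n.Previous
}

func (b *FrequencyBucket) Empty() bool { return b.HeadNode.Next == b.TailNode }

type LFUCache struct {
	capacity          int
	keyToNode         map[string]*Node
	frequencyToBucket map[int]*FrequencyBucket
	minFrequency      int
}

func NewLFUCache(capacity int) *LFUCache {
	return &LFUCache{
		capacity:          capacity,
		keyToNode:         make(map[string]*Node),
		frequencyToBucket: make(map[int]*FrequencyBucket),
	}
}

func (c *LFUCache) bucketFor(frequency int) *FrequencyBucket {
	b, ok := c.frequencyToBucket[frequency]
	if !ok {
		b = NewFrequencyBucket()
		c.frequencyToBucket[frequency] = b
	}
	return b
}

// touch moves a node from its current bucket to the frequency+1 bucket.
func (c *LFUCache) touch(n *Node) {
	b := c.frequencyToBucket[n.Frequency]
	b.Remove(n)
	if b.Empty() && c.minFrequency == n.Frequency {
		c.minFrequency++ // the old minimum bucket drained
	}
	n.Frequency++
	c.bucketFor(n.Frequency).PushFront(n)
}

func (c *LFUCache) Get(key string) (string, bool) {
	n, ok := c.keyToNode[key]
	if !ok {
		return "", false
	}
	c.touch(n)
	return n.Value, true
}

func (c *LFUCache) Put(key, value string) {
	if c.capacity <= 0 {
		return
	}
	if n, ok := c.keyToNode[key]; ok {
		n.Value = value
		c.touch(n) // an update also counts as a use
		return
	}
	if len(c.keyToNode) == c.capacity {
		// Evict the LRU node (tail side) of the minimum-frequency bucket.
		b := c.frequencyToBucket[c.minFrequency]
		victim := b.TailNode.Previous
		b.Remove(victim)
		delete(c.keyToNode, victim.Key)
	}
	n := &Node{Key: key, Value: value, Frequency: 1}
	c.bucketFor(1).PushFront(n)
	c.keyToNode[key] = n
	c.minFrequency = 1 // a fresh entry always has the minimum count
}

func main() {
	cache := NewLFUCache(2)
	cache.Put("a", "1")
	cache.Put("b", "2")
	cache.Get("a")      // "a" now has frequency 2
	cache.Put("c", "3") // evicts "b" (lowest frequency)
	_, bAlive := cache.Get("b")
	value, _ := cache.Get("a")
	fmt.Println(bAlive, value) // false 1
}
```

Every operation touches one map entry and a constant number of list links, giving the O(1) average cost targeted in section 1.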

JavaScript

class LFUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.keyToNode = new Map();
    this.frequencyToBucket = new Map();
    this.minFrequency = 0;
  }
  get(key) { /* ... */ }
  put(key, value) { /* ... */ }
}

7. Concurrency / Thread Safety

  • Collisions: two threads incrementing the same key's frequency; one thread evicting while another inserts.
  • Granularity: one sync.Mutex around the entire struct for simplicity; optimize later with fine-grained locks per bucket.
  • JavaScript: the event loop is single-threaded, so each call is atomic; document this, or add an async mutex if operations ever span await points.
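The coarse-grained option can be sketched as a wrapper that serializes every call through one `sync.Mutex`. The inner `mapCache` here is a trivial stand-in (an assumption, not the real LFU structure); any single-threaded Get/Put implementation from section 6 would slot in the same way.

```go
package main

import (
	"fmt"
	"sync"
)

// mapCache is a stand-in for the single-threaded LFUCache of section 6.
type mapCache struct{ data map[string]string }

func (c *mapCache) Get(key string) (string, bool) { v, ok := c.data[key]; return v, ok }
func (c *mapCache) Put(key, value string)         { c.data[key] = value }

// SafeLFUCache serializes all operations through one mutex: simple and
// correct, at the cost of no parallelism between readers.
type SafeLFUCache struct {
	mu    sync.Mutex
	inner *mapCache
}

func NewSafeLFUCache() *SafeLFUCache {
	return &SafeLFUCache{inner: &mapCache{data: make(map[string]string)}}
}

func (c *SafeLFUCache) Get(key string) (string, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.inner.Get(key)
}

func (c *SafeLFUCache) Put(key, value string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.inner.Put(key, value)
}

func main() {
	cache := NewSafeLFUCache()
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			cache.Put("k", "v")
			cache.Get("k")
		}()
	}
	wg.Wait()
	v, _ := cache.Get("k")
	fmt.Println(v) // v
}
```

Note that a plain RWMutex does not help here: even Get mutates state (frequency bumps), so reads need the write lock too, which is one argument for per-bucket locking as the later optimization.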

8. Extensibility & Followups

See also

  • 460 LFU Cache (problem write-up)

  • Aging: periodically decay frequencies via a scheduled job; batching the pass keeps the O(1) hot path intact.

  • Variable-size values: measure capacity in bytes rather than entry count.

  • TinyLFU admission filter as an optional front door before the main LFU structure (HLD-style caching discussion).
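The aging idea above can be sketched as a batch pass that halves every count; run from a scheduled job, it never touches the Get/Put hot path. Operating directly on a `map[string]int` of counts is a simplification here, not the bucket structure from section 6.

```go
package main

import "fmt"

// decay halves every frequency count in one batch pass. The pass itself is
// O(n), but it runs off the hot path, so Get/Put stay O(1); integer halving
// drives long-cold keys toward 0 so they become eviction candidates again.
func decay(frequencies map[string]int) {
	for key, count := range frequencies {
		frequencies[key] = count / 2
	}
}

func main() {
	freqs := map[string]int{"hot": 8, "warm": 3, "cold": 1}
	decay(freqs)
	fmt.Println(freqs["hot"], freqs["warm"], freqs["cold"]) // 4 1 0
}
```

In the bucket-based design the same pass would also move each node to its halved-frequency bucket, which is why batching (rather than decaying on every access) matters.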
