THN Interview Prep

Caching & consistency

Core details

Layers you can stack in one answer: browser/CDN → app-local → shared remote (Redis/Memcached) → DB buffer pool. Each layer trades a bigger latency win against a higher staleness risk.

Cache stampede: many clients miss simultaneously—mitigate with jittered TTL, early probabilistic refresh, or request coalescing / single-flight fetch.
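Two of those mitigations can be sketched together: a jittered TTL so a hot key's readers don't all expire at once, and a single-flight guard so only one of the concurrent missers calls the origin. This is a minimal in-process sketch (the cache dict, `loader` callback, and helper names are illustrative, not a specific library's API; error handling on a failed load is omitted):

```python
import random
import threading
import time

_cache = {}     # key -> (value, expires_at)
_inflight = {}  # key -> Event that waiters block on while one fetch runs
_lock = threading.Lock()

def jittered_ttl(base_ttl, jitter=0.1):
    """Spread expirations +/-10% so a hot key's readers don't miss in unison."""
    return base_ttl * (1 + random.uniform(-jitter, jitter))

def get(key, loader, base_ttl=60):
    now = time.time()
    with _lock:
        hit = _cache.get(key)
        if hit and hit[1] > now:
            return hit[0]
        waiter = _inflight.get(key)
        if waiter is None:
            # We are the single flight; concurrent missers wait on this event.
            _inflight[key] = threading.Event()
    if waiter is not None:
        waiter.wait()                 # coalesce: ride on the in-flight fetch
        return _cache[key][0]
    value = loader(key)               # one origin call per miss burst
    _cache[key] = (value, now + jittered_ttl(base_ttl))
    with _lock:
        _inflight.pop(key).set()      # release the waiters
    return value
```

The point to make in an interview: single-flight bounds origin QPS per key at one in-flight fetch, while jitter keeps the herd from re-forming at the next expiry.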

Patterns: cache-aside vs read-through vs write-through—state who owns invalidation and what triggers it (event bus, version bump, explicit delete).
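Cache-aside is the one worth sketching, because it makes the ownership point explicit: application code fills the cache on a miss and deletes on a write. A minimal sketch, assuming a dict-like cache and a `store` object with `load`/`save` (all names here are illustrative stand-ins, not a specific library's API):

```python
class CacheAside:
    def __init__(self, cache, store):
        self.cache = cache   # dict-like, e.g. a thin Redis facade
        self.store = store   # authoritative database access

    def read(self, key):
        value = self.cache.get(key)
        if value is None:                # miss: the app owns the fill
            value = self.store.load(key)
            self.cache[key] = value
        return value

    def write(self, key, value):
        self.store.save(key, value)      # write the DB first...
        self.cache.pop(key, None)        # ...then explicitly invalidate
```

Contrast in one sentence: in read-through/write-through the cache layer itself owns `load`/`save`, so invalidation ownership moves out of application code.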

Understanding

Caches trade freshness for cost and latency. Choosing the wrong layer can leak sensitive personalized HTML from a shared CDN cache, or serve money-moving reads from stale snapshots with no disclosure in the UI. That is a product and compliance failure, not "just cache config."

Distributed invalidation is eventually consistent. When correctness demands more, either make staleness visible to users or route those reads along a stronger path (e.g. straight to the primary).
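One way to sidestep delete fan-out entirely is version-bump invalidation: the cache key embeds a per-entity version, and invalidation just increments the version, so stale entries become unreachable and age out via TTL. A minimal in-process sketch (the `VersionedCache` class and its names are illustrative assumptions; TTL expiry of the orphaned keys is not modeled):

```python
class VersionedCache:
    def __init__(self):
        self.versions = {}  # entity -> current version
        self.cache = {}     # versioned key -> value

    def _key(self, entity):
        return f"{entity}:v{self.versions.get(entity, 0)}"

    def get(self, entity):
        return self.cache.get(self._key(entity))

    def put(self, entity, value):
        self.cache[self._key(entity)] = value

    def invalidate(self, entity):
        # O(1) regardless of how many cached derivations the entity has;
        # old keys are simply never read again and expire via TTL.
        self.versions[entity] = self.versions.get(entity, 0) + 1
```

The trade-off to name: the version store itself must be read on every lookup, so it becomes the strongly consistent component.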

Senior understanding

Tension → staff response:

Thundering herd: track miss-rate spikes and origin QPS together in metrics, so the stampede correlation is visible.

Multi-tenant poisoning: namespace keys per tenant and attach ACL metadata; never rely on a global key search.

Debugging ghost state: correlation IDs plus cache-key versions in logs, sampled with cardinality discipline.
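The multi-tenant point is easy to demonstrate in a few lines: derive every cache key from the tenant id so one tenant's entries can never collide with (or be enumerated alongside) another's. A sketch under the assumption of string tenant ids; `tenant_key` and its parameters are hypothetical names, and the version field doubles as a schema-bump invalidator:

```python
import hashlib

def tenant_key(tenant_id: str, raw_key: str, version: int = 1) -> str:
    """Namespace a cache key per tenant and schema version.

    Hashing the tenant id keeps key length and character set bounded,
    which helps the cardinality discipline the logging row calls for.
    """
    t = hashlib.sha256(tenant_id.encode()).hexdigest()[:12]
    return f"v{version}:{t}:{raw_key}"
```

Pair this with per-namespace ACL checks at read time; the prefix alone prevents accidents, not malicious cross-tenant reads.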

Mention a feature flag that can disable the cache path during an incident—it signals operational muscle.
