Caching & consistency
Core details
Layers you can stack in one answer: browser/CDN → app-local → shared remote (Redis/Memcached) → DB buffer pool. Each layer trades a bigger latency win against a bigger staleness risk.
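The stacking above can be sketched as a layered read: check the fastest, most stale-prone layer first and promote values downward on a hit. This is a minimal sketch; the dicts stand in for a real in-process cache and a Redis client, and all names are illustrative.

```python
# Two-layer read path: app-local dict, then shared remote (stand-in for
# Redis), then the source of truth. A miss at one layer populates it on
# the way back so the next read is faster.
local = {}    # per-process: fastest lookup, most stale-prone
remote = {}   # shared across processes: slower, fresher

def layered_get(key, load_from_db):
    if key in local:
        return local[key]
    if key in remote:
        local[key] = remote[key]   # promote into the faster layer
        return local[key]
    value = load_from_db(key)      # full miss: hit the source of truth
    remote[key] = value
    local[key] = value
    return value
```

In a real stack each layer would also carry its own TTL, since the local layer must expire sooner than the shared one to bound staleness.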
Cache stampede: many clients miss simultaneously and hammer the origin. Mitigate with jittered TTLs, early probabilistic refresh, or request coalescing (single-flight fetch).
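Two of those mitigations can be combined in one read path: jitter spreads out expiry times so keys don't all miss at once, and single-flight lets exactly one caller fetch while concurrent callers wait for its result. A minimal sketch with illustrative constants:

```python
import random
import threading
import time

_cache = {}      # key -> (value, expires_at)
_inflight = {}   # key -> Event for a fetch already in progress
_lock = threading.Lock()

BASE_TTL = 60.0  # seconds; illustrative
JITTER = 0.2     # +/-20% so entries don't expire in lockstep

def get(key, fetch):
    """Cached read with jittered TTL and single-flight coalescing."""
    now = time.time()
    with _lock:
        hit = _cache.get(key)
        if hit and hit[1] > now:
            return hit[0]                  # fresh hit
        ev = _inflight.get(key)
        if ev is None:                     # we become the single flight
            ev = threading.Event()
            _inflight[key] = ev
            leader = True
        else:                              # someone else is fetching
            leader = False
    if leader:
        try:
            value = fetch(key)
            ttl = BASE_TTL * (1 + random.uniform(-JITTER, JITTER))
            with _lock:
                _cache[key] = (value, time.time() + ttl)
        finally:
            with _lock:
                del _inflight[key]
            ev.set()                       # wake all coalesced waiters
        return value
    ev.wait()
    hit = _cache.get(key)
    return hit[0] if hit else fetch(key)   # leader failed: fetch ourselves
```

The same shape exists off the shelf, e.g. Go's `golang.org/x/sync/singleflight`.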
Patterns: cache-aside (the app loads and populates the cache) vs read-through and write-through (the cache layer does it). Whichever you pick, state who owns the invalidation trigger: event bus, version bump, or explicit delete.
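Cache-aside plus version-bump invalidation can be sketched together: writers bump a per-entity version instead of deleting, so stale entries simply stop being addressed. The dicts below stand in for Redis and the database; all names are illustrative.

```python
cache = {}       # stand-in for Redis
versions = {}    # entity id -> version counter (could also live in Redis)
db = {"user:1": {"name": "Ada"}}

def cache_key(entity_id):
    # Version is part of the key: bumping it orphans old entries,
    # which age out by TTL rather than needing an explicit delete.
    return f"{entity_id}:v{versions.get(entity_id, 0)}"

def read(entity_id):
    key = cache_key(entity_id)
    if key in cache:
        return cache[key]        # hit
    value = db[entity_id]        # miss: the app loads from the DB...
    cache[key] = value           # ...and populates the cache itself
    return value

def write(entity_id, value):
    db[entity_id] = value
    versions[entity_id] = versions.get(entity_id, 0) + 1  # invalidate
```

The trade: version bumps avoid delete races but leave dead entries in memory until TTL, so this suits caches with eviction, not unbounded maps.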
Understanding
Caches trade freshness for cost and latency. Choosing the wrong layer can leak personalized HTML through a shared CDN, or serve money-moving reads from stale snapshots with no UI disclosure; that is a product and compliance failure, not "just cache config."
Distributed invalidation is eventually consistent. When correctness demands more, either disclose staleness honestly in the UI or route those reads through a stronger path (e.g., straight to the source of truth).
Senior understanding
| Tension | Staff response |
|---|---|
| Thundering herd | Track miss-rate spikes alongside origin QPS so herds are observed, not inferred |
| Multi-tenant poisoning | Namespace keys per tenant and attach ACL metadata; never allow global key search across tenants |
| Debugging ghost states | Log correlation IDs and cache-key versions, sampled to keep metric cardinality disciplined |
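The multi-tenant row can be made concrete: if the tenant ID is a mandatory part of the key schema rather than an optional parameter, one tenant's data cannot be served to another even when payloads look identical. A hypothetical key-builder sketch; the schema-version constant and field order are assumptions, not a standard:

```python
KEY_SCHEMA_VERSION = 3   # bump when the cached shape changes (hypothetical)

def tenant_key(tenant_id: str, resource: str, resource_id: str) -> str:
    """Build a tenant-scoped cache key; refuses to build one without a tenant."""
    if not tenant_id:
        raise ValueError("tenant_id is required for every cache key")
    return f"t:{tenant_id}:v{KEY_SCHEMA_VERSION}:{resource}:{resource_id}"
```

Logging this full key (sampled) alongside a correlation ID is what makes the "ghost state" row debuggable: you can see exactly which tenant and schema version a stale read came from.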
Mention a feature flag that bypasses the cache path during an incident; knowing when to turn the cache off is operational muscle.
Diagram
See also