THN Interview Prep

Application caching & consistency

At SDE3, “we added Redis” is insufficient. You need topology (process-local vs cluster), eviction math, failure amplification (herd, avalanche, penetration), and consistency (invalidation vs dual-write races)—especially on AWS (ElastiCache / Redis OSS) behind Node.js services in Docker.

Core details

The database (or event log) is authoritative; cache is derived unless product and compliance accept staleness.

Patterns (application-level)

| Pattern | Read path | Write path | Gotcha |
| --- | --- | --- | --- |
| Cache-aside (lazy) | App: cache miss → load DB → `SET` cache | App writes DB; delete or update cache async | Cold-start miss storms |
| Read-through | Cache library loads on miss | Same as your write policy | Cache module must share your key schema |
| Write-through | Reads from cache after warm | Write DB + cache together | Slower writes; still not atomic across nodes without care |
| Write-behind (write-back) | Fast reads from cache | ACK after cache; async flush to DB | Durability risk: loss on crash unless backed by a durable queue |
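The cache-aside row above is the pattern most interviews drill into; a minimal sketch of its read path, with a `Map` standing in for a Redis client and a seeded `db` map standing in for the real database (both names are illustrative):

```typescript
// Cache-aside read path: check cache, on miss load DB, then SET cache.
const cache = new Map<string, { value: string; expiresAt: number }>();
const db = new Map<string, string>([["user:1", "Ada"]]); // stand-in for the DB

const TTL_MS = 30_000;

function getUser(id: string): string | undefined {
  const key = `user:${id}`;
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value; // cache hit
  const value = db.get(key);                               // miss: load from DB
  if (value !== undefined) {
    cache.set(key, { value, expiresAt: Date.now() + TTL_MS }); // populate cache
  }
  return value;
}
```

The write path (not shown) would update the DB and then delete the key, which is where the dual-write race discussed later comes in.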

AWS mapping: with ElastiCache for Redis you usually implement cache-aside or read-through in your own app code; DynamoDB DAX is a read-through cache in front of DynamoDB with explicit consistency semantics (eventually consistent reads by default). The same dual-write cautions apply if you also roll your own invalidation on top.

Sequence — cache-aside (lazy loading)


Sequence — write-through (sketch)


Deployment topologies

| Tier | Pros | Cons |
| --- | --- | --- |
| Embedded / in-process (Node heap, `lru-cache`, etc.) | Sub-ms; no network | No sharing across Docker replicas; cold on every deploy |
| Distributed (Redis cluster, ElastiCache) | Shared state; TTL centralized | Network hop; failure = thundering herd to DB |
| CDN / edge | Great for static & public content | Purging, personalization traps |

Node.js note: each container has its own heap, so an in-process LRU acts as a second tier in front of Redis, useful for hot keys with short TTLs where some inconsistency is tolerable.
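A sketch of that two-tier read: a small per-process map (tier 1) consulted before the shared cache (tier 2, a `Map` standing in for Redis here). All names and the crude insertion-order eviction are illustrative:

```typescript
const local = new Map<string, string>();  // per-container heap, small and short-lived
const shared = new Map<string, string>(); // stand-in for ElastiCache/Redis
const LOCAL_MAX = 100;

shared.set("hot", "v1"); // pretend another replica warmed the shared tier

function get(key: string): string | undefined {
  if (local.has(key)) return local.get(key); // tier 1 hit: no network
  const v = shared.get(key);                 // tier 2: a network hop in reality
  if (v !== undefined) {
    if (local.size >= LOCAL_MAX) {
      const oldest = local.keys().next().value; // evict oldest insert (crude)
      if (oldest !== undefined) local.delete(oldest);
    }
    local.set(key, v); // promote to the local tier
  }
  return v;
}
```

The inconsistency cost is visible here: if another replica updates `shared`, this process keeps serving its local copy until it is evicted or expires.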


Eviction & memory

| Policy | Behavior | Where seen |
| --- | --- | --- |
| TTL | Key expires after wall-clock time | Everywhere; combine with jitter |
| LRU | Evict least recently used | Classic Redis `volatile-lru`; in-process caches |
| LFU | Evict least frequently used | Resists one-off scans |
| W-TinyLFU | Window + TinyLFU (approximate frequency) | Caffeine (JVM); high hit rate; appears in advanced local caches |

Redis / ElastiCache: understand `maxmemory-policy`. With `noeviction` and memory full, writes start failing with OOM errors rather than evicting; either way you need an operational playbook.
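To show the LRU row concretely, here is a toy LRU exploiting `Map`'s insertion order: re-inserting a key on read moves it to the back, so the front entry is always the least recently used. This is a teaching sketch, not what `lru-cache` or Redis actually does internally:

```typescript
class LruCache<V> {
  private map = new Map<string, V>();
  constructor(private capacity: number) {}

  get(key: string): V | undefined {
    const v = this.map.get(key);
    if (v !== undefined) {
      this.map.delete(key); // refresh recency: move key to the back
      this.map.set(key, v);
    }
    return v;
  }

  set(key: string, value: V): void {
    if (this.map.has(key)) {
      this.map.delete(key);
    } else if (this.map.size >= this.capacity) {
      const lru = this.map.keys().next().value; // front = least recently used
      if (lru !== undefined) this.map.delete(lru);
    }
    this.map.set(key, value);
  }

  has(key: string): boolean { return this.map.has(key); }
}
```

With capacity 2, writing `a`, `b`, reading `a`, then writing `c` evicts `b`, not `a`: the read refreshed `a`'s recency.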

Senior pitfalls (“gotchas”)

Cache penetration

Attack / bug: repeated misses for non-existent keys pass straight through the cache and hammer the DB.

Mitigations: cache negative results with short TTL; Bloom filter (or set membership) in front; rate limit suspicious keys.
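The first mitigation, negative caching, can be sketched like this: on a DB miss, cache a sentinel with a short TTL so repeated lookups for an absent key stop at the cache. The `Map` stand-ins and TTL values are illustrative:

```typescript
const NEG = Symbol("not-found"); // sentinel for "confirmed absent"
const cache = new Map<string, { value: string | typeof NEG; expiresAt: number }>();
const db = new Map<string, string>(); // empty: every lookup misses
let dbReads = 0;                       // instrumentation for the sketch

function lookup(key: string): string | undefined {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) {
    return hit.value === NEG ? undefined : hit.value; // negative hit: no DB trip
  }
  dbReads++;
  const value = db.get(key);
  cache.set(key, {
    value: value ?? NEG,
    // short TTL (5 s) for negatives so a later INSERT becomes visible quickly
    expiresAt: Date.now() + (value === undefined ? 5_000 : 60_000),
  });
  return value;
}
```

The short negative TTL is the key trade-off: long enough to absorb a scan of bogus IDs, short enough that newly created rows appear promptly.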

Cache breakdown (thundering herd)

Hot key TTL expires → many concurrent misses → DB spike.

Mitigations: single-flight / mutex per key (only one loader); probabilistic early expiration (refresh before TTL); jitter; prefetch on schedule.
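The single-flight mitigation can be sketched as promise deduplication: concurrent misses for the same key share one in-flight loader, so only one request reaches the DB. `loadFromDb` is a hypothetical async loader:

```typescript
const inFlight = new Map<string, Promise<string>>();
let dbLoads = 0; // instrumentation for the sketch

async function loadFromDb(key: string): Promise<string> {
  dbLoads++; // in reality: SELECT ... then SET cache
  return `value-of-${key}`;
}

function loadOnce(key: string): Promise<string> {
  const existing = inFlight.get(key);
  if (existing) return existing; // piggyback on the load already running
  const p = loadFromDb(key).finally(() => inFlight.delete(key));
  inFlight.set(key, p);
  return p;
}
```

Note this only dedupes within one Node process; across replicas you still need a distributed lock (e.g. `SET key NX PX`) or tolerance for a few concurrent loaders.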

Cache avalanche

Many keys share TTL (e.g. same absolute expiry after deploy) or cluster reboot → mass miss.

Mitigations: randomized TTL (TTL ± jitter); layered TTLs; circuit breaker to DB; graceful degradation (stale-while-revalidate semantics where safe).
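The randomized-TTL mitigation is a one-liner worth having ready; this sketch spreads expirations over ±20% of the base TTL so keys warmed together (e.g. after a deploy) do not all expire together:

```typescript
// Return baseMs ± spread (default ±20%), uniformly distributed.
function jitteredTtl(baseMs: number, spread = 0.2): number {
  const delta = baseMs * spread;
  return Math.round(baseMs - delta + Math.random() * 2 * delta);
}
```

For a 60 s base TTL this yields values in [48 000, 72 000] ms, turning one synchronized expiry cliff into a smear of misses the DB can absorb.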

Dual-write race (DB vs cache order)

Classic bug: write DB, then delete cache. A concurrent reader that loaded the old value from the DB can repopulate the cache after the delete, so readers keep seeing stale data after the "update".

Better patterns: cache-aside on write (invalidate, not blind update); version in cache key; outbox + async invalidator; short TTL for contested keys.
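The version-in-key pattern from the list above can be sketched like this: writers bump a per-entity version, readers embed it in the cache key, so stale entries become unreachable instead of being raced. The `Map`s stand in for Redis (in real Redis the bump would be an `INCR`); all names are illustrative:

```typescript
const versions = new Map<string, number>(); // e.g. INCR user:{id}:ver in Redis
const cache = new Map<string, string>();

function cacheKey(id: string): string {
  return `user:${id}:v${versions.get(id) ?? 0}`;
}

function write(id: string, _value: string): void {
  // 1) write the DB (omitted in this sketch)
  // 2) bump the version instead of deleting the old cache entry
  versions.set(id, (versions.get(id) ?? 0) + 1);
}

function readThrough(id: string, loadDb: () => string): string {
  const key = cacheKey(id);
  let v = cache.get(key);
  if (v === undefined) {
    v = loadDb();        // miss: old-version entries are simply never read again
    cache.set(key, v);
  }
  return v;
}
```

A late writer racing on the old key can no longer poison readers, because readers are already asking for the new version; the old entries just age out via TTL.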


Invalidation strategies

| Approach | When |
| --- | --- |
| TTL only | OK for low-risk data; define max staleness |
| Write-path `DELETE` key | Common; watch the race above |
| Pub/sub channel `invalidate:{id}` | Good for fan-out across Node processes |
| Version bump (`key:v42`) | Cheap consistency check for personalized reads |
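The pub/sub row can be sketched with an `EventEmitter` standing in for a Redis pub/sub channel (in production each process would `SUBSCRIBE` to the invalidation channel); `makeProcessCache` simulates one Node replica's local cache:

```typescript
import { EventEmitter } from "node:events";

const bus = new EventEmitter(); // stand-in for a Redis pub/sub channel

function makeProcessCache(): Map<string, string> {
  const local = new Map<string, string>();
  // each "process" subscribes and drops its local copy on invalidation
  bus.on("invalidate", (key: string) => local.delete(key));
  return local;
}

const procA = makeProcessCache();
const procB = makeProcessCache();
procA.set("user:1", "Ada");
procB.set("user:1", "Ada");

// writer publishes after updating the DB; every replica evicts its copy
bus.emit("invalidate", "user:1");
```

Unlike the in-process emitter, real pub/sub is fire-and-forget: a replica that was disconnected misses the message, which is why a TTL backstop still matters.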

Understanding

Wrong layer: caching personalized HTML at the CDN without surrogate keys; serving account balances from a long-TTL cache without disclosing staleness in the UI.

Distributed invalidation is eventually consistent. For money reads, go to the primary DB or a strongly consistent read path, and keep any cache TTL tight.

Senior understanding

| Tension | Staff move |
| --- | --- |
| Stale business decisions | Key versioning; shorter TTL for hot keys; feature flag to turn the cache off |
| Multi-tenant poisoning | Namespace keys per tenant; never global flush |
| ElastiCache failover | Clients retry; expect a short spike → circuit breaker + jitter |

Deep sibling: Caching & consistency.

Diagram (write invalidates cache-aside)

