THN Interview Prep

AWS data services & Node.js at scale

Staff interviews expect you to connect service shape (many Node.js processes in Docker / ECS / EKS) to data-plane limits: connections, partitions, and client behavior on failover.


RDS (Postgres / MySQL family)

| Concern | SDE3 answer |
| --- | --- |
| Multi-AZ | Sync replica for failover; not a read-scaling tier by itself |
| Read replicas | Lag: not linearizable by default; route fresh reads to the primary |
| Connection storm | `max_connections` is shared; each pod × pool size adds up |
| RDS Proxy | Multiplexes app connections → DB connections; helps Lambda/bursty clients and failed-connection cleanup; still respect transaction semantics |

Node.js: use a small bounded pool per process; prefer one shared pool per app instance, not per request.
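The connection-storm row is just multiplication; a minimal sketch of the budget math (all numbers hypothetical, not AWS defaults):

```typescript
// Back-of-envelope connection budget: total DB connections grow with
// every replica you add, so each process's pool must stay bounded.
function totalDbConnections(pods: number, poolSizePerPod: number): number {
  return pods * poolSizePerPod;
}

function fitsBudget(pods: number, poolSizePerPod: number, maxConnections: number): boolean {
  // Leave headroom for migrations, cron jobs, and admin sessions.
  const headroom = Math.floor(maxConnections * 0.2);
  return totalDbConnections(pods, poolSizePerPod) <= maxConnections - headroom;
}

// 30 pods × 10 connections = 300, which blows through a 200-connection limit.
console.log(totalDbConnections(30, 10)); // 300
console.log(fitsBudget(30, 10, 200));    // false
console.log(fitsBudget(30, 5, 200));     // true (150 ≤ 160)
```

This is the arithmetic behind "each pod × pool size adds up": autoscaling the app tier silently scales the DB connection count too, which is exactly the gap RDS Proxy papers over.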


DynamoDB (single-table & access-path thinking)

| Mechanism | Interview cue |
| --- | --- |
| Partition key + sort key | Design for the hot path; avoid hot partitions on celebrity keys |
| LSI | Same partition key, alternate sort key; throughput shared with the base table |
| GSI | Different partition key; async projection cost; watch for hot GSI partitions |
| Conditional writes | Compare-and-set / idempotency guards |
| TransactWriteItems | All-or-nothing within DynamoDB for the involved items; still not your cross-service saga |
| Streams + Lambda / consumers | CDC-shaped events for projections |
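The conditional-write cue can be sketched as an idempotency guard. Table and attribute names below are hypothetical; `ConditionExpression` and `attribute_not_exists` are real DynamoDB constructs:

```typescript
// Illustrative PutItem parameter shape for an idempotency guard:
// the write succeeds only if no item with this partition key exists yet.
function idempotentPut(table: string, pk: string, payload: string) {
  return {
    TableName: table,
    Item: { pk: { S: pk }, payload: { S: payload } },
    // A replayed request fails the condition instead of overwriting,
    // surfacing as ConditionalCheckFailedException to the caller.
    ConditionExpression: "attribute_not_exists(pk)",
  };
}
```

The same shape generalizes to compare-and-set: condition on a version attribute (`version = :expected`) and bump it in the write.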

Consistency: reads are eventually consistent by default; use a strongly consistent read when you must see the latest write for that key (at extra RCU cost). It is not a substitute for global cross-table transactions.
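A sketch of where that knob lives in the request, assuming hypothetical table and key names (`ConsistentRead` is a real GetItem parameter):

```typescript
// Parameter shape for a strongly consistent GetItem. Costs roughly
// double the RCU of an eventually consistent read of the same item.
interface GetItemParams {
  TableName: string;
  Key: Record<string, { S: string }>;
  ConsistentRead?: boolean;
}

function strongGet(table: string, pk: string): GetItemParams {
  return {
    TableName: table,
    Key: { pk: { S: pk } },
    ConsistentRead: true, // read-after-write on this key only; not cross-table
  };
}
```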

DAX: an in-memory read-through cache in front of DynamoDB; treat it like Redis: you still need a staleness and invalidation story.


ElastiCache (Redis OSS engines on AWS)

| Mode | Narrative |
| --- | --- |
| Primary + replica | Failover promotes a replica; clients must reconnect |
| Cluster mode | Sharding across nodes; MOVED/ASK-style routing handled in clients |
| Multi-AZ | Durability / HA posture; still a cache, not a source of truth |

Eviction: set `maxmemory-policy` explicitly (e.g. `allkeys-lru` for a pure cache); monitor evicted keys and CPU.
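The cache-side pattern to rehearse is cache-aside with TTL. A minimal sketch using an in-process `Map` as a stand-in for Redis (GET → miss → load from DB → SET with expiry); all names are illustrative:

```typescript
type Entry = { value: string; expiresAt: number };

// In-process stand-in for a Redis key space with TTLs.
class TtlCache {
  private store = new Map<string, Entry>();

  get(key: string, now = Date.now()): string | undefined {
    const e = this.store.get(key);
    if (!e || e.expiresAt <= now) {
      this.store.delete(key); // lazy expiry, like Redis TTL semantics
      return undefined;
    }
    return e.value;
  }

  set(key: string, value: string, ttlMs: number, now = Date.now()): void {
    this.store.set(key, { value, expiresAt: now + ttlMs });
  }
}

// Cache-aside: the DB stays the source of truth; the cache only
// shortens the hot path and bounds staleness via the TTL.
async function getUser(
  cache: TtlCache,
  id: string,
  loadFromDb: (id: string) => Promise<string>,
): Promise<string> {
  const hit = cache.get(`user:${id}`);
  if (hit !== undefined) return hit;
  const fresh = await loadFromDb(id);
  cache.set(`user:${id}`, fresh, 60_000); // 60s staleness budget
  return fresh;
}
```

The interview point: the TTL is your explicit staleness budget, and a failover that drops the cache must be survivable because nothing authoritative lives there.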


Memcached vs Redis (when interviewers compare)

| | Memcached | Redis |
| --- | --- | --- |
| Data structures | Blob key/value | Strings, hashes, lists, sets, sorted sets… |
| Replication / HA | Client-side sharding, historically | Built-in replication, cluster mode |
| Use case | Simple cache, TTL, multi-tenant simplicity | Cache + coordination (careful), pub/sub |
| Team tax | Lower feature surface | More foot-guns (blocking ops, big keys) |

For a pure session cache or HTML-fragment cache with no data structures, Memcached can win on simplicity; pick Redis when you need data structures or a built-in replication/cluster path (and prefer managed ElastiCache either way).


Diagram — typical full-stack AWS data plane

```
                     ┌──────────────────┐
   Browser / App ───►│  CloudFront/CDN  │
                     └────────┬─────────┘
                              │
                     ┌────────▼─────────┐
   Next.js / API     │  ALB → ECS/K8s   │
   (Node servers)    │  (Docker tasks)  │
                     └────────┬─────────┘
                              │
              ┌───────────────┼────────────────┐
              │               │                │
      ┌───────▼──────┐ ┌──────▼─────┐ ┌────────▼──────┐
      │ ElastiCache  │ │  DynamoDB  │ │  RDS + Proxy  │
      │ (cache)      │ │ (OLTP KV)  │ │  (relational) │
      └──────────────┘ └────────────┘ └───────────────┘
```

Interview prompts & patterns

Q: “Why RDS Proxy?”
A: It absorbs connection multiplication from many short-lived or bursty clients and cleans up broken connections faster after failover; you still need to size pools deliberately and avoid chatty transactions.

Q: “How do you avoid Dynamo hot partitions?”
A: Key design (shuffle/split writes), write sharding, caching hot items, burst capacity / on-demand; measure ConsumedThroughput skew.
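The write-sharding part of that answer can be sketched as a suffix scheme (shard count and key names are illustrative):

```typescript
// Write sharding for a hot logical key: spread writes across N
// physical partitions by appending a shard suffix, and fan reads
// out across every suffix before merging results.
const SHARD_COUNT = 8; // tune to the key's measured write rate

function shardedPk(logicalKey: string): string {
  const shard = Math.floor(Math.random() * SHARD_COUNT);
  return `${logicalKey}#${shard}`;
}

// Reads must query every shard and merge the results client-side.
function allShardPks(logicalKey: string): string[] {
  return Array.from({ length: SHARD_COUNT }, (_, i) => `${logicalKey}#${i}`);
}

console.log(allShardPks("celebrity-123").length); // 8
```

The trade is explicit: writes scale out by `SHARD_COUNT`, but every read of the logical key becomes a small fan-out, so reserve this for genuinely hot keys.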

