Meta — Interview Playbook (Senior IC orientation)

This guide is shaped by public interview patterns and community reports; your recruiter packet is the source of truth for format and level.

Production mindset (what it usually means in interviews)

Interviewers often probe whether you think beyond the whiteboard:

  • Failure and operations: retries, idempotency, partial failures, monitoring, rollbacks.
  • Data and consistency: what breaks if messages duplicate or arrive late; caching and invalidation discipline.
  • Performance at scale: hot keys, fan-out, bounding memory and latency—not buzzwords, but specific risks for this design.
  • Velocity with guardrails: feature flags, staged rollout, experiment safety when relevant.

You do not need to recite internal stack names; you need clear reasoning tied to user-visible or business-visible outcomes.
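To make the retries-and-idempotency bullet concrete, here is a minimal sketch (the function names and backoff constants are illustrative, not any specific internal API): retrying a flaky call with jittered exponential backoff, passing the same idempotency key on every attempt so the server can deduplicate a write that applied but timed out.

```python
import random
import time

def call_with_retries(op, idempotency_key, max_attempts=4):
    """Retry a flaky operation with jittered exponential backoff.

    Reusing the same idempotency_key on every attempt lets the server
    deduplicate, so retrying a timed-out-but-applied write stays safe.
    """
    for attempt in range(max_attempts):
        try:
            return op(idempotency_key)
        except TimeoutError:
            if attempt == max_attempts - 1:
                raise  # budget exhausted; surface the failure
            # Full jitter: sleep in [0, base * 2^attempt) to avoid
            # synchronized retry storms (thundering herd).
            time.sleep(random.uniform(0, 0.05 * 2 ** attempt))
```

Being able to explain why the key must be identical across attempts, and why jitter matters under fan-out, is exactly the "production mindset" signal interviewers look for.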

Move fast vs careful: how to talk about tradeoffs

Meta’s culture is often described as high velocity; in interviews, show that you know when to move fast and when to slow down:

  • Prototype or experiment: narrow scope, measurable guardrails, fast revert path.
  • Core correctness / privacy / money paths: extra review, testing, and blast-radius limits.

Articulate explicit tradeoffs: “We chose X for time-to-market with mitigation Y; Z is the follow-up if metrics show …”

Good signals: risk proportionality, measurement, and iteration, not recklessness or endless analysis.

Coding rounds — patterns reported often

Frequency varies by team and level; these appear repeatedly in public prep material and align with strong general practice:

  • Arrays / strings / intervals: realistic API and log shaping; many follow-ups (streaming, merging constraints).
  • Graphs (BFS/DFS, shortest-path thinking): social graphs, infrastructure dependencies; clarifying directed vs undirected matters.
  • Trees and recursion / iteration: clean invariant practice; follow-ups on memory or iterative approaches.
  • Heap / priority queue / scheduling: rate limits, top-K, merging sorted streams.
  • Hash maps / sets / deduplication: counting, frequency, sliding windows when constraints fit.
  • Dynamic programming (selected mediums + some hards): when constraints scream optimization and greedy fails; explain the recurrence clearly.
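As one worked instance of the hash-map-plus-sliding-window pattern above, a minimal sketch (a generic classic, not a claimed Meta question): longest substring without repeating characters in O(n).

```python
def longest_unique_substring(s: str) -> int:
    """Length of the longest substring with no repeated characters.

    Sliding window + hash map: O(n) time, O(k) space for alphabet size k.
    """
    last_seen = {}  # char -> most recent index
    best = 0
    left = 0        # left edge of the current window
    for right, ch in enumerate(s):
        # If ch repeats inside the current window, jump left past its
        # previous occurrence; the window always stays duplicate-free.
        if ch in last_seen and last_seen[ch] >= left:
            left = last_seen[ch] + 1
        last_seen[ch] = right
        best = max(best, right - left + 1)
    return best
```

Narrating the window invariant ("no duplicates between left and right") as you code is the kind of explicit reasoning these rounds reward.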

Treat follow-ups as normal: “what if the input does not fit in memory,” “what if we need online queries.” Practice stating memory models explicitly.
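For the "does not fit in memory" follow-up, one standard answer is a bounded heap over a stream: memory is O(k) regardless of input size. A minimal sketch (illustrative, assuming integer values):

```python
import heapq
from typing import Iterable, List

def top_k_stream(values: Iterable[int], k: int) -> List[int]:
    """Top-k largest values from a stream using O(k) memory.

    A size-k min-heap: the root is the smallest of the current top-k,
    so any incoming value larger than the root replaces it.
    """
    heap: List[int] = []
    for v in values:
        if len(heap) < k:
            heapq.heappush(heap, v)
        elif v > heap[0]:
            heapq.heapreplace(heap, v)  # pop root, push v in one step
    return sorted(heap, reverse=True)
```

Stating the memory model out loud ("I hold k items, the stream is never materialized") is exactly what the prompt above asks you to practice.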

System design (when in your loop)

Expect a bias toward large-scale distributed scenarios: many requests, many regions, dependencies between services. Still, start from requirements and user-facing behavior before internals.

Be ready to discuss:

  • Consistency expectations for the feature (read-your-writes vs eventual in different surfaces).
  • Backpressure and overload behavior.
  • Operational visibility: what you’d chart and alert on first.
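Backpressure is easy to describe abstractly; a tiny sketch makes the overload behavior concrete (a toy bounded queue, not any particular service framework):

```python
import queue

def submit_with_backpressure(q: "queue.Queue[str]", item: str) -> bool:
    """Try to enqueue; reject immediately when the queue is full.

    Rejecting at the edge keeps overload bounded, instead of letting
    latency grow without limit as the backlog piles up.
    """
    try:
        q.put(item, block=False)  # fail fast rather than wait
        return True
    except queue.Full:
        return False              # caller sheds load, e.g. returns 429

work = queue.Queue(maxsize=2)     # deliberately small bound for illustration
results = [submit_with_backpressure(work, f"req-{i}") for i in range(4)]
# Once the queue is full, further submissions are rejected rather than queued.
```

In a design discussion, the interesting follow-ups are which requests to shed first and what the rejection signal does to upstream retry behavior.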

Behavioral

Stress impact, collaboration across functions, and how you learn from production signals. Use real stories; avoid claiming metrics you cannot support (see Behavioral STAR guide).

Practical prep cadence

  • Mix timed coding with one deep follow-up per problem (variant or constraint change).
  • Weekly: one design with explicit failure and rollout section.
  • Before onsite: dry-run talking through one incident-style story and one large-scope delivery story with timestamps and roles clear.

Experiments and safety (how to discuss responsibly)

If behavioral prompts touch experimentation or launch:

  • Separate user harm risk from latency risk; name guardrails (caps, kill switches) without claiming tools you did not use.
  • Prefer outcome language: hypothesis, metric moved or did not, decision to ship or revert—avoid vanity metrics unless they were truly the bar.

Coding session shape that reads well

  • Restate constraints and give two approaches when natural; pick one with a one-line rationale.
  • Implement the happy path, then patch edge cases in a second pass if time is tight.
  • Trace one non-trivial example on the board or in comments before declaring done.

System design depth checklist

Beyond boxes and arrows, be ready to explain:

  • Write path vs read path hot spots and how you isolate failure between them.
  • Queue depth and consumer lag: what alerts fire before users notice pain.
  • Schema or contract evolution: additive changes, dual writes, or feature flags when migrations are risky.
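The dual-write-behind-a-flag pattern from the last bullet can be sketched in a few lines (store names, the flag key, and the dict-backed stores are all hypothetical):

```python
def save_user(record, legacy_store, new_store, flags):
    """Dual write during a risky migration.

    The legacy store stays the source of truth until cutover; the new
    store is written best-effort behind a feature flag so a bug there
    never fails the user-facing write.
    """
    legacy_store[record["id"]] = record          # authoritative write
    if flags.get("dual_write_users", False):
        try:
            new_store[record["id"]] = record     # shadow write
        except Exception:
            pass  # log and reconcile asynchronously; never block the user
```

The talking points this sets up: how you detect divergence between the stores, and what flips the flag from dual-write to cutover.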

Template walkthrough: System design interview framework (RESHADED)

Common follow-up angles on DSA problems

  • Multiple queries on the same structure: preprocessing vs per-query cost.
  • Dynamic updates after static optimal solution: segment trees, Fenwick, or heap maintenance—only when the problem steers there.
  • K-best or ranked outputs: heap / binary search on answer patterns.

Drill these as extensions after solving the base problem in this repo’s study flow (12-week study roadmap).
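As a concrete instance of the preprocessing-vs-per-query tradeoff above, a minimal prefix-sum sketch (generic, not tied to a specific problem):

```python
from itertools import accumulate
from typing import List

class RangeSum:
    """O(n) preprocessing, O(1) per range-sum query on a static array.

    Without preprocessing each query costs O(n); with many queries the
    one-time prefix pass wins. Dynamic updates would push toward a
    Fenwick tree or segment tree instead.
    """
    def __init__(self, nums: List[int]) -> None:
        self.prefix = [0] + list(accumulate(nums))  # prefix[i] = sum(nums[:i])

    def query(self, lo: int, hi: int) -> int:
        """Sum of nums[lo:hi] (half-open interval)."""
        return self.prefix[hi] - self.prefix[lo]
```

Naming the crossover explicitly ("q queries flip from O(qn) to O(n + q)") is the kind of cost accounting these follow-ups probe.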

Behavioral: impact without exaggeration

Meta-oriented loops often probe scale of impact. Answer with role, mechanism, and evidence type (experiment, incident trend, customer ticket reduction). If you cannot disclose numbers, describe directionally what improved and how you measured internally.

See Behavioral STAR guide for STAR framing.

Pitfalls specific to “move fast” narratives

  • Sounding reckless rather than bounded: always pair speed with mitigation or observability.
  • Reliability buzzwords (chaos engineering, SRE) without tying them to the concrete scenario the interviewer gave.

Before virtual onsite

  • Test IDE or whiteboard tool; have a timer habit for each segment.
  • Keep water nearby; structured breaks between rounds are often scheduled—use them to reset state, not to cram new facts.
