Main-thread scheduling & responsiveness
Core details
- Main-thread monopoly: the DOM, JavaScript (by default), HTML parsing (partially chunked), and most timers all share one thread; anything long blocks input until it completes.
- Long-task heuristic: interactions feel broken when uninterrupted JS exceeds its budget (the Long Tasks API draws the line at 50 ms); pair this with an INP/long-frame mindset regardless of acronym drift.
- Scheduling patterns: chunked work queues, `requestAnimationFrame`, and the `scheduler.postTask` ecosystem; yield cooperatively between heavy pure-compute slices without starving animation.
- Idle opportunism: `requestIdleCallback` for non-critical maintenance; honor expiring deadlines honestly.
- Profiler workflow: bottom-up JS profiling plus main-thread flame-chart categories; annotate user-interaction windows when reproducing responsiveness bugs.
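The scheduling and idle patterns above can be sketched as follows. This is a minimal illustration, assuming a browser-like environment: `yieldToMain`, `processInChunks`, `drainWhenIdle`, the 5 ms budget, and the queue name are illustrative choices, not standard APIs, and the `setTimeout` shims are fallbacks for environments without `scheduler.postTask` or `requestIdleCallback`.

```javascript
// Yield control back to the event loop so pending input and
// requestAnimationFrame callbacks can run between compute slices.
function yieldToMain() {
  if (typeof scheduler !== "undefined" && scheduler.postTask) {
    return scheduler.postTask(() => {}, { priority: "user-visible" });
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Chunked work queue: process items until a small time budget is spent,
// then yield so one long task becomes many short ones.
async function processInChunks(items, fn, budgetMs = 5) {
  const results = [];
  let sliceStart = performance.now();
  for (const item of items) {
    results.push(fn(item));
    if (performance.now() - sliceStart >= budgetMs) {
      await yieldToMain();
      sliceStart = performance.now();
    }
  }
  return results;
}

// Idle opportunism: drain non-critical maintenance while honoring the
// deadline instead of overrunning it (setTimeout shim where rIC is absent).
const rIC = typeof requestIdleCallback !== "undefined"
  ? requestIdleCallback
  : (cb) => setTimeout(() => cb({ timeRemaining: () => 50, didTimeout: false }), 0);

const maintenanceQueue = [];
function drainWhenIdle() {
  rIC((deadline) => {
    while (maintenanceQueue.length > 0 && deadline.timeRemaining() > 1) {
      maintenanceQueue.shift()();
    }
    if (maintenanceQueue.length > 0) drainWhenIdle(); // re-queue the remainder
  });
}
```

The budget parameter is the knob to tune experimentally: too small and yield overhead dominates, too large and input latency creeps back in.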
Understanding
Humans perceive interaction latency and motion smoothness, not averages. Burst CPU from unrelated feature flags can hijack responsiveness even when mean CPU looks fine. Scheduling is a bargain among latency (defer work), fairness (don't starve animation), and throughput (finish batch jobs). Yielding blindly can thrash caches; balance chunk sizes experimentally under a profiler.
Heavy layout/paint interplay often masquerades as "slow JS"; triage means distinguishing the orange scripting flames from the purple layout spans in the timeline.
Senior understanding
Articulate how to bridge telemetry: explain discrepancies between field RUM long-interaction data and what lab synthetic runs can reproduce. When discussing a regression bisect, mention `interactionId` timelines from the Event Timing API, not only a Lighthouse snapshot.
Escalation path: when micro-optimization fails, shift architecture (move work to workers, simplify the render graph, rewrite hot selectors) rather than patching loops ever deeper.
Operational guard: anchor performance budgets to interaction-duration percentiles; add synthetic CI traces when a reproducible harness exists, and call out flaky versus stable signals honestly.
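A budget anchored to interaction-duration percentiles can be checked with a small helper. This is a sketch: the function names are illustrative, and the p75/200 ms defaults mirror the commonly cited INP "good" threshold rather than anything mandated by a spec.

```javascript
// Nearest-rank percentile of a list of values (e.g. interaction durations in ms).
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.min(sorted.length - 1, Math.max(0, idx))];
}

// Gate a build or alert on the chosen percentile staying under budget.
function withinBudget(durationsMs, { p = 75, budgetMs = 200 } = {}) {
  if (durationsMs.length === 0) return true; // no interactions observed
  return percentile(durationsMs, p) <= budgetMs;
}
```

Because the check looks at a percentile rather than a mean, one outlier interaction does not fail the gate, but a systematic regression in the tail does; flaky lab signals should widen the budget or be reported separately rather than silently gate.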