THN Interview Prep

Frontend testing & observability

Frontend quality is not one test type. A reliable frontend combines static checks, focused component tests, realistic browser flows, accessibility coverage, performance signals, and production telemetry that catches what lab tests miss.

Core details

Layer              | Catches                                         | Does not catch well
Type/static checks | contracts, impossible states, import mistakes   | real browser behavior
Unit tests         | pure logic, reducers, formatters, validators    | integration and focus behavior
Component tests    | rendering states, events, accessibility names   | full routing/network/device issues
E2E tests          | critical user journeys in a browser             | every edge case; they are slower/flakier
Visual tests       | layout regressions and design drift             | hidden accessibility or data bugs
A11y tests         | missing names, invalid roles, contrast baseline | nuanced keyboard/screen reader flows
Performance tests  | bundle/trace regressions                        | field-only device/network variance
RUM/observability  | real user errors and latency                    | exact local reproduction by itself

Testing pyramid for UI: keep pure logic heavily unit-tested, cover component states with focused tests, and reserve E2E for high-value journeys such as auth, checkout, create/edit/delete, navigation, and permission boundaries.
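As a sketch of the component layer, a focused test can assert user-visible behavior through roles and labels. This assumes Jest or Vitest globals, the @testing-library/jest-dom matchers, and a hypothetical <SubscribeForm /> component:

// Focused component test sketch. <SubscribeForm /> is a hypothetical
// component under test; matchers assume @testing-library/jest-dom.
import { render, screen } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { SubscribeForm } from "./SubscribeForm";

test("empty submit shows an error and moves focus to the field", async () => {
  render(<SubscribeForm />);

  // Query by accessible role and name, not CSS classes or DOM paths.
  await userEvent.click(screen.getByRole("button", { name: /subscribe/i }));

  expect(screen.getByRole("alert")).toHaveTextContent(/email is required/i);
  expect(screen.getByLabelText(/email/i)).toHaveFocus();
});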

Observability basics: capture client errors, route transitions, failed resource loads, hydration warnings, web vitals, long interactions, API failure rates, and release version. Logs without release/user-agent/route context are hard to use.
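A minimal capture sketch, assuming the web-vitals package, an APP_RELEASE constant injected at build time, and a /telemetry collection endpoint (the latter two are assumptions, not fixed names):

// Minimal client capture sketch. onCLS/onINP/onLCP come from the
// "web-vitals" package; the endpoint and release constant are assumed.
import { onCLS, onINP, onLCP } from "web-vitals";

declare const APP_RELEASE: string; // assumed build-time constant

function sendEvent(payload: Record<string, unknown>): void {
  // Always attach the context that makes logs usable: release, route, UA.
  const body = JSON.stringify({
    ...payload,
    release: APP_RELEASE,
    route: location.pathname,
    userAgent: navigator.userAgent,
  });
  navigator.sendBeacon("/telemetry", body); // assumed collection endpoint
}

window.addEventListener("error", (e) =>
  sendEvent({ event: "error", severity: "error", message: e.message })
);
window.addEventListener("unhandledrejection", (e) =>
  sendEvent({ event: "error", severity: "error", message: String(e.reason) })
);

// Each vital reports as it becomes available during the page lifecycle.
onCLS((m) => sendEvent({ event: "web_vital", severity: "info", name: m.name, value: m.value }));
onINP((m) => sendEvent({ event: "web_vital", severity: "info", name: m.name, value: m.value }));
onLCP((m) => sendEvent({ event: "web_vital", severity: "info", name: m.name, value: m.value }));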

Error boundaries: design fallbacks by product boundary. A broken chart should not blank the whole dashboard; a broken authenticated shell may need a route-level recovery path.
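A per-widget boundary sketch in React; reportToTelemetry() is a hypothetical hook into the logging described above:

// Per-widget boundary: a broken chart renders its fallback instead of
// blanking the whole dashboard. reportToTelemetry() is hypothetical.
import React from "react";

declare function reportToTelemetry(error: Error): void; // assumed hook

type Props = { fallback: React.ReactNode; children: React.ReactNode };
type State = { hasError: boolean };

class WidgetBoundary extends React.Component<Props, State> {
  state: State = { hasError: false };

  static getDerivedStateFromError(): State {
    return { hasError: true };
  }

  componentDidCatch(error: Error): void {
    reportToTelemetry(error);
  }

  render(): React.ReactNode {
    return this.state.hasError ? this.props.fallback : this.props.children;
  }
}

// Usage: wrap each product boundary separately, e.g.
// <WidgetBoundary fallback={<p>Chart unavailable</p>}><RevenueChart /></WidgetBoundary>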

Understanding

Frontend failures are often stateful and environmental. Browser extensions, locale, timezone, network retries, mobile CPU, old cached assets, and hydration order can expose bugs that unit tests never see.

The goal is not maximum test count. It is fast feedback for cheap mistakes and targeted confidence for expensive workflows. Every flaky E2E test should justify its cost by protecting a journey that matters.

Observability closes the loop. A release can pass tests and still regress low-end devices, a specific locale, or a cache transition. Field data tells you where to reproduce and which users were harmed.

Practical examples

Test matrix by feature:

Feature               | Minimum coverage
Form                  | validation unit tests, component error/focus tests, one submit E2E
Dialog/combobox       | component keyboard tests, accessibility scan, screen-reader spot check
Route data            | loader/action tests where possible, E2E for auth and stale state
Heavy dashboard       | bundle budget, lab trace, RUM web vitals
Payment/security flow | E2E happy path, failure path, duplicate-submit/idempotency test
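For the form row, the validation unit tests can stay DOM-free; a sketch with a hypothetical validateEmail():

// Pure-logic validation test: cheap, fast, no DOM. validateEmail()
// is a hypothetical validator; the regex is deliberately simple.
export function validateEmail(input: string): string | null {
  if (input.trim() === "") return "Email is required";
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(input)) return "Enter a valid email";
  return null;
}

test("rejects empty and malformed input, accepts a valid address", () => {
  expect(validateEmail("")).toBe("Email is required");
  expect(validateEmail("not-an-email")).toBe("Enter a valid email");
  expect(validateEmail("a@b.co")).toBeNull();
});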

Telemetry payload shape:

type ClientEvent = {
  release: string;
  route: string;
  userAgent: string;
  event: "error" | "web_vital" | "hydration_warning" | "api_failure";
  severity: "info" | "warn" | "error";
};

Include enough context to group and bisect, but avoid sensitive payloads.
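One way to balance context against sensitivity is to scrub identifiers from the route before sending; the patterns below are illustrative, not exhaustive:

// Scrub identifiers so events still group by route shape without
// leaking user data. The replacement patterns are illustrative.
declare const APP_RELEASE: string; // assumed build-time constant

function scrubRoute(pathname: string): string {
  return pathname
    .replace(/\/[0-9a-f-]{36}/gi, "/:uuid") // UUID path segments
    .replace(/\/\d+(?=\/|$)/g, "/:id"); // numeric path segments
}

function baseEvent(
  event: ClientEvent["event"],
  severity: ClientEvent["severity"]
): ClientEvent {
  return {
    release: APP_RELEASE,
    route: scrubRoute(location.pathname), // "/orders/123" -> "/orders/:id"
    userAgent: navigator.userAgent,
    event,
    severity,
  };
}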

Senior understanding

Probe                    | Strong answer
“Why flaky E2E?”         | Identify timing, network, test data, animation, and selector instability
“A11y automated enough?” | No; combine automated scans with keyboard and AT walkthroughs
“Performance in CI?”     | Use stable budgets for bundles; traces only where reproducible
“Production error?”      | Group by release/route/browser, roll back via feature flag, assess user impact
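For the flaky-E2E probe, a Playwright sketch that removes the usual instability sources: role and label locators instead of brittle CSS selectors, and web-first assertions instead of fixed sleeps (the route, labels, and copy are assumptions):

// Playwright sketch: role/label locators survive markup refactors,
// and web-first assertions retry until they pass instead of sleeping.
// The /checkout route and the field labels are assumptions.
import { test, expect } from "@playwright/test";

test("checkout happy path", async ({ page }) => {
  await page.goto("/checkout");

  await page.getByLabel(/card number/i).fill("4242424242424242");
  await page.getByRole("button", { name: /pay now/i }).click();

  // Retries until visible; no page.waitForTimeout() anywhere.
  await expect(
    page.getByRole("heading", { name: /order confirmed/i })
  ).toBeVisible();
});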

Failure modes

  • Testing implementation details instead of user-visible behavior.
  • Using brittle selectors instead of roles, labels, and stable test IDs where appropriate.
  • Ignoring hydration warnings because the page “looks fine” (see the console-hook sketch after this list).
  • Capturing client logs without release version or route.
  • Running only desktop Chrome tests for mobile-heavy products.
  • Treating visual snapshots as accessibility coverage.
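For the hydration bullet, one way to stop warnings from scrolling past unseen is to forward them to telemetry; matching on React’s message text is a heuristic, and sendEvent() is the hypothetical helper from the capture sketch above:

// Forward React hydration warnings to telemetry instead of losing
// them in the console. Matching on message text is a heuristic;
// sendEvent() is the hypothetical helper sketched earlier.
declare function sendEvent(payload: Record<string, unknown>): void;

const originalError = console.error.bind(console);
console.error = (...args: unknown[]) => {
  const text = String(args[0] ?? "");
  if (/hydrat/i.test(text)) {
    sendEvent({ event: "hydration_warning", severity: "warn", message: text });
  }
  originalError(...args);
};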
