Beyond the Framework: Mastering Node.js Core for High-Scale Systems

By a Senior Backend Engineer

Interview Preparation Guide

After years of reviewing pull requests and debugging production incidents, I’ve noticed a pattern. Junior developers know frameworks; they can spin up an Express server or build a REST API in NestJS in minutes. But Senior developers know the runtime.

When a server crashes due to an "Out of Memory" error or latency spikes during high traffic, knowing how to use req.body won't save you. Understanding the Event Loop, Buffers, and Streams will.

If you want to level up, stop looking for new libraries and start looking at Node.js Core. Here is what you need to master.


1. The Event Loop: It’s Not Just "Magic"

Most devs know Node is single-threaded and non-blocking. But can you explain the difference between process.nextTick(), setImmediate(), and setTimeout()?

In high-throughput systems, this distinction determines latency.

  • Microtasks (process.nextTick): These run immediately after the current operation completes, before the Event Loop moves on to its next phase. Overusing this (e.g. recursive nextTick calls) starves I/O and blocks your server.
  • Macrotasks (setImmediate vs setTimeout):
      • setTimeout(fn, 0) runs in the Timers phase after at least the specified delay (never exactly zero).
      • setImmediate runs in the Check phase, once the current poll (I/O) phase completes.
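
A quick way to internalize the ordering is to log it from inside an I/O callback, where it is deterministic (from the top level of a script, the relative order of setTimeout(fn, 0) and setImmediate can vary from run to run):

const fs = require("fs");

fs.readFile(__filename, () => {
  setTimeout(() => console.log("timeout"), 0);
  setImmediate(() => console.log("immediate"));
  process.nextTick(() => console.log("nextTick"));
  Promise.resolve().then(() => console.log("promise"));
});
// Prints: nextTick, promise, immediate, timeout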

Senior Tip: Use setImmediate if you need to break up long-running synchronous CPU tasks (like parsing a massive JSON) to allow the event loop to handle other incoming requests in between.
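
As a rough sketch (the batch size and the processItem callback are placeholders, not a prescribed API), chunking the work with setImmediate looks like this:

// Process a large array in batches, yielding to the event loop between batches
// so pending I/O callbacks get a chance to run.
function processInBatches(items, processItem, batchSize = 1000) {
  return new Promise((resolve) => {
    let index = 0;

    function runBatch() {
      const end = Math.min(index + batchSize, items.length);
      for (; index < end; index++) {
        processItem(items[index]); // synchronous CPU work per item
      }
      if (index < items.length) {
        setImmediate(runBatch); // yield, then continue in the Check phase
      } else {
        resolve();
      }
    }

    runBatch();
  });
}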


2. Streams & Backpressure: Handling Data at Scale

I see this mistake constantly: reading a 500MB file into memory using fs.readFile.

// The Junior Mistake: DOOM (Death Out Of Memory)
fs.readFile("huge-log.txt", (err, data) => {
  res.send(data); // RAM spikes, GC goes crazy, server crashes.
});

Handling large data in Node.js is all about Streams. Streams process data chunk by chunk. However, the real secret sauce is Backpressure.

If the disk reads faster than the network can send, data piles up in RAM. Node’s internal buffer fills up. If you don't handle backpressure, you are just delaying the crash.
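
To make the mechanism concrete, here is a minimal sketch of handling backpressure by hand (assume res is a writable HTTP response); this is roughly what .pipe() and pipeline do for you internally:

const fs = require("fs");

function streamFile(res) {
  const source = fs.createReadStream("huge-log.txt");

  source.on("data", (chunk) => {
    const canWriteMore = res.write(chunk);
    if (!canWriteMore) {
      source.pause();                           // destination buffer is full, stop reading
      res.once("drain", () => source.resume()); // resume once it has flushed
    }
  });
  source.on("end", () => res.end());
  source.on("error", (err) => res.destroy(err));
}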

Senior Tip: Always use stream.pipeline instead of .pipe(). Standard piping does not forward errors or clean up the other streams automatically, leaving you with memory leaks if one stream in the chain fails.

const { pipeline } = require("stream");
const fs = require("fs");
const zlib = require("zlib");

// The Senior Solution
pipeline(
  fs.createReadStream("huge-log.txt"), // Source
  zlib.createGzip(), // Transform
  res, // Destination (e.g. the HTTP response)
  (err) => {
    if (err) console.error("Stream failed", err);
  }
);
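
On Node 15 and later there is also a promise-based pipeline in stream/promises, which fits naturally into async/await code:

const { pipeline } = require("stream/promises");
const fs = require("fs");
const zlib = require("zlib");

async function compressLog(res) {
  try {
    await pipeline(
      fs.createReadStream("huge-log.txt"), // Source
      zlib.createGzip(),                   // Transform
      res                                  // Destination
    );
  } catch (err) {
    console.error("Stream failed", err);
  }
}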

3. Buffers and Character Encoding

Strings are expensive. In V8, strings are immutable and complex. When you are dealing with binary data (file uploads, image processing, TCP streams), Buffers are your best friend.

A Buffer is a chunk of raw memory allocated outside the V8 heap.
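
A few lines are enough to see the difference between bytes and characters (the values here are purely illustrative):

const zeroed = Buffer.alloc(1024);       // zero-filled, the safe default
const unsafe = Buffer.allocUnsafe(1024); // faster, but may contain old memory; overwrite before use
const text = Buffer.from("héllo", "utf8");

console.log(text.length);      // 6 bytes, not 5 characters ("é" takes 2 bytes in UTF-8)
console.log(text[1], text[2]); // 195 169, the two bytes of "é"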

  • Performance: Manipulating bits in a Buffer is significantly faster than string manipulation.
  • Encoding: Be careful with utf-8 conversions. If you split a multi-byte character halfway through a chunk, you get garbage (� replacement characters) in the output.

Senior Tip: Always use the StringDecoder module when converting incoming buffer streams to text, as it correctly handles multi-byte characters split across chunks.
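
Here is a small sketch of the failure mode and the fix, using a multi-byte character ("€" is 3 bytes in UTF-8) split across two chunks:

const { StringDecoder } = require("string_decoder");
const decoder = new StringDecoder("utf8");

const part1 = Buffer.from([0xe2, 0x82]); // first two bytes of "€"
const part2 = Buffer.from([0xac]);       // last byte of "€"

console.log(part1.toString("utf8")); // garbage: the split character cannot be decoded
console.log(decoder.write(part1));   // "", the decoder buffers the partial character
console.log(decoder.write(part2));   // "€", emitted once the character is complete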


4. Worker Threads: Breaking the Single-Thread Barrier

"Node is bad for CPU-intensive tasks." This was true in 2016. It is not true today.

With Worker Threads (worker_threads module), you can spawn isolated threads that share memory via SharedArrayBuffer.

  • Use Cases: Image resizing, cryptography, PDF generation, AI model inference.
  • The Trap: Do not use Workers for I/O (DB queries, API calls). Node’s built-in async I/O is already more efficient than spinning up a thread for that.
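
A minimal single-file sketch (the fib(40) workload just stands in for real CPU-bound work like image resizing or hashing):

const { Worker, isMainThread, parentPort, workerData } = require("worker_threads");

if (isMainThread) {
  // Main thread: offload the heavy computation and stay free to serve requests.
  const worker = new Worker(__filename, { workerData: 40 });
  worker.on("message", (result) => console.log("Result:", result));
  worker.on("error", (err) => console.error("Worker failed:", err));
} else {
  // Worker thread: has its own event loop, so blocking here doesn't block the server.
  const fib = (n) => (n < 2 ? n : fib(n - 1) + fib(n - 2));
  parentPort.postMessage(fib(workerData));
}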

5. Observability with Async Hooks

How do you trace a request across multiple async callbacks and promises?

The async_hooks module provides an API to track the lifetime of asynchronous resources. This is how tools like New Relic or Datadog work under the hood. As a senior dev, you might use AsyncLocalStorage to carry a request ID (correlation ID) through every async call so your logs make sense.

const { AsyncLocalStorage } = require("async_hooks");
const asyncLocalStorage = new AsyncLocalStorage();

// Wrap your request
asyncLocalStorage.run({ requestId: "123-abc" }, () => {
  // Anywhere deep in your code (DB layer, Service layer)
  const store = asyncLocalStorage.getStore();
  console.log(`[${store.requestId}] Processing user data...`);
});
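
In a real server you would run the store once per incoming request. A hedged sketch (the x-request-id header and the handleRequest function are illustrative, not a fixed API):

const http = require("http");
const { randomUUID } = require("crypto");

http
  .createServer((req, res) => {
    const requestId = req.headers["x-request-id"] || randomUUID();
    asyncLocalStorage.run({ requestId }, () => {
      // Any code reached from here, however deeply nested or async,
      // can call asyncLocalStorage.getStore() to read the same requestId.
      handleRequest(req, res); // hypothetical: your routing / business logic
    });
  })
  .listen(3000);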

Final Thoughts

Mastering Node.js isn't about memorizing the docs. It's about understanding the cost of your code.

  • Every await suspends the surrounding function and schedules a microtask to resume it.
  • Every object created adds to Garbage Collection pressure.
  • Every unhandled error in a stream is a potential memory leak.

Stop writing code that "just works." Start writing code that stays up when the traffic hits 100k requests per second.

