THN Interview Prep

Hard Common Node.js Interview Questions

1. How would you architect a scalable Node.js application?

I would adopt a modular architecture, likely Hexagonal or Clean Architecture, to separate business logic from frameworks (Express). For scalability, I'd design stateless services to run in containers (Docker/K8s), allowing horizontal scaling. I'd implement caching (Redis), use a message queue for async processing, and ensure strict separation of concerns.

2. Why is Node.js a good fit for Microservices?

Node.js has a small footprint and fast startup time, making it ideal for containerization. Its non-blocking nature handles high throughput for I/O-bound services (common in microservices). The JSON-native environment also simplifies communication between services.

3. How do you handle inter-service communication?

For synchronous needs, REST or gRPC (Protobufs) are used. However, for loose coupling and resilience, I prefer asynchronous communication using Message Brokers like RabbitMQ or Kafka. This ensures that if a consumer service is down, the message isn't lost but queued.

4. Pros and Cons of Serverless with Node.js.

Pros: Zero server management, infinite auto-scaling, and cost-efficiency for sporadic workloads. Cons: 'Cold Start' latency can affect performance. Vendor lock-in is a risk. Debugging and monitoring distributed serverless functions is significantly harder than a monolith.

5. How do you implement Observability in production?

Logging is not enough. We need the three pillars: Structured Logs (JSON format for querying), Metrics (Prometheus for time-series data like CPU/Memory), and Distributed Tracing (OpenTelemetry) to visualize a request's journey across microservices.

6. Best practices for CPU-intensive tasks?

Node.js is bad at CPU tasks on the main thread. Best practice is to offload them entirely: either to a separate Worker Thread, or preferably, to a dedicated microservice written in a language suited for computation (like Go, Rust, or Python) to avoid blocking the API gateway.

7. How do you optimize a high-traffic API?

  1. Database Indexing (crucial).
  2. Caching (Redis) at the edge or app level to reduce DB hits.
  3. Horizontal Scaling with Load Balancers.
  4. Code optimization (removing synchronous calls, optimizing loops).
  5. Using streams for large data payloads.

8. Why use TypeScript in Node.js projects?

In large teams/codebases, dynamic typing becomes a liability. TypeScript provides static analysis, ensuring type safety at build time. It serves as self-documentation, enables better IDE refactoring tools, and drastically reduces runtime 'undefined is not a function' errors.

9. Describe a robust CI/CD pipeline for Node.js.

Push code -> Trigger Pipeline -> Lint & Unit Test -> Build (Transpile TS) -> Integration Test -> Build Docker Image -> Scan for Vulnerabilities -> Push to Registry -> Deploy to Staging -> E2E Tests -> Blue/Green Deployment to Production.

10. Deep Dive: exec() vs spawn() regarding memory.

exec() buffers the entire output of the child process in memory before passing it to the callback. If the output exceeds the maxBuffer limit, the child is killed and an error is returned. spawn() exposes the output as streams. For any process where the output size is unknown or large, spawn() is the safe choice to prevent memory blow-ups.
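
A minimal sketch of the streaming approach (the command and file name are only illustrative):

```js
const { spawn } = require('child_process');

// spawn() exposes stdout as a stream, so memory stays flat regardless of output size
const child = spawn('cat', ['large-file.log']);

child.stdout.on('data', (chunk) => process.stdout.write(chunk));
child.on('close', (code) => console.log(`child exited with code ${code}`));
```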

11. How do you prevent Event Loop starvation?

Avoid CPU-intensive work on the main thread. Don't use synchronous I/O. Be careful with process.nextTick recursive calls. We monitor 'Event Loop Lag' as a key metric; if lag increases, the server is overloaded or blocked.

12. How do you secure HTTP headers?

Using Helmet middleware is standard. The most complex but important header is CSP (Content Security Policy) to mitigate XSS. We also enforce HSTS (Strict Transport Security) to force HTTPS and disable X-Powered-By to hide the tech stack.
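
A minimal sketch assuming Express with the helmet package installed (the CSP directives shown are just an example policy):

```js
const express = require('express');
const helmet = require('helmet');

const app = express();
app.use(helmet({
  contentSecurityPolicy: { directives: { defaultSrc: ["'self'"] } }, // example policy only
}));
app.disable('x-powered-by'); // Express-level switch; Helmet also removes this header
```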

13. Role of OpenTelemetry?

It is a vendor-neutral standard for collecting telemetry data. It allows us to instrument our application once and send data to any backend (Jaeger, Prometheus, Datadog). It's critical for correlating logs and traces in a distributed system.

14. How do you handle Unhandled Exceptions/Rejections?

You can listen for uncaughtException, but you must exit the process (process.exit(1)). The application state is likely corrupted. Rely on a process manager (K8s/PM2) to restart the app cleanly. Never try to 'resume' after an uncaught exception.
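
A sketch of the crash-only pattern, assuming an http server instance and a Pino-style structured logger are in scope:

```js
process.on('uncaughtException', (err) => {
  logger.fatal({ err }, 'uncaught exception, shutting down'); // assumed Pino-style logger
  server.close(() => process.exit(1));                        // stop accepting traffic, then exit
  setTimeout(() => process.exit(1), 10_000).unref();          // force exit if close hangs
});

process.on('unhandledRejection', (reason) => {
  throw reason; // re-throw so the uncaughtException handler owns the shutdown path
});
```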

15. How does the Node.js Process Model affect Scalability?

It scales horizontally extremely well (adding more nodes) but poorly vertically (adding more CPU to one thread). Therefore, the architecture must be designed to be stateless, allowing us to spin up 100 small instances rather than 1 giant instance.

16. What is require.resolve used for?

It performs the module resolution algorithm (finding the file path) without loading/executing the module. It's used in tooling (like bundlers or test runners) to check if a dependency exists or to get its location on the disk without the side effect of running code.

17. High Availability and Load Balancing strategies.

We run multiple instances of the app behind a Load Balancer (Nginx/HAProxy/ALB). We use Health Checks to ensure the LB only sends traffic to healthy nodes. We avoid 'Sticky Sessions' to ensure any node can handle any request, which is vital for failover.

18. Explain the Reactor Pattern.

It is the design pattern driving Node's Event Loop. An event demultiplexer observes resources and notifies a dispatcher when an event (I/O) occurs. The handler is then executed. This inversion of control allows a single thread to manage thousands of concurrent connections.

19. Significance of the zlib module?

It handles compression (Gzip/Brotli). Enabling response compression (usually via Nginx or middleware) drastically reduces JSON/HTML payload sizes, improving client load times and reducing bandwidth costs.

20. Net vs Dgram modules.

net is for TCP (reliable, connection-based) used for HTTP/DB protocols. dgram is for UDP (unreliable, connectionless, fast) used for video streaming or gaming where packet loss is acceptable but latency is not.

21. How to optimize Docker for Node.js?

Use Multi-Stage Builds to discard build tools (like Python/Make for node-gyp) in the final image. Use lightweight base images (Alpine). Install only production dependencies (npm ci --omit=dev, formerly --production). Run as a non-root user for security.

22. How to implement distributed Rate Limiting?

In-memory rate limiting works for one instance but fails with clustering. We use a centralized store like Redis (Token Bucket algorithm) to track request counts per IP across all instances of the application.
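
A sketch of a fixed-window counter in Redis (assuming the ioredis client); a production setup would more likely use a token-bucket or sliding-window Lua script:

```js
const Redis = require('ioredis');
const redis = new Redis(); // shared by all app instances

async function isAllowed(ip, limit = 100, windowSeconds = 60) {
  const window = Math.floor(Date.now() / (windowSeconds * 1000));
  const key = `rate:${ip}:${window}`;
  const count = await redis.incr(key);                    // atomic across instances
  if (count === 1) await redis.expire(key, windowSeconds);
  return count <= limit;
}
```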

23. Compare Bun/Deno with Node.js.

Node is mature with a massive ecosystem. Deno focuses on security (permissions) and TypeScript. Bun focuses on raw speed. For enterprise, Node is still the choice due to stability, but Bun is promising for tooling/scripts due to its performance.

24. Managing Database Connections efficiently.

Always use Connection Pooling. Establishing a TCP connection (handshake + auth) is expensive. A pool keeps connections open and reuses them. We must tune the pool size based on the CPU limits of the database server.

25. When to use Message Queues (RabbitMQ/Kafka)?

Use them to decouple services and handle backpressure. If Service A produces data faster than Service B can consume, a queue buffers the load. It also ensures data durability; if B crashes, the message stays in the queue.

26. Performance trade-offs of Async/Await?

Async/Await runs sequentially by default. a = await x(); b = await y(); takes (x+y) time. Promises allow Promise.all([x(), y()]) taking max(x,y) time. Developers often accidentally force sequential execution with await, hurting performance.
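
A sketch with two hypothetical independent calls (getUser, getPosts) showing the difference:

```js
// getUser() and getPosts() are hypothetical, independent async calls
async function sequential() {
  const user = await getUser();   // getPosts doesn't start until getUser resolves
  const posts = await getPosts();
  return { user, posts };         // total time ≈ x + y
}

async function parallel() {
  const [user, posts] = await Promise.all([getUser(), getPosts()]); // both start immediately
  return { user, posts };         // total time ≈ max(x, y)
}
```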

27. Caching strategies in Node.js.

We use the 'Cache-Aside' pattern. Check Redis -> Return if found. If not, query DB -> Store in Redis with TTL -> Return. TTL (Time To Live) is critical to prevent serving stale data indefinitely.
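
A cache-aside sketch, assuming an ioredis client and a hypothetical findUserInDb query helper:

```js
const Redis = require('ioredis');
const redis = new Redis();

async function getUser(id) {
  const cached = await redis.get(`user:${id}`);
  if (cached) return JSON.parse(cached);           // cache hit

  const user = await findUserInDb(id);             // cache miss: go to the database (hypothetical helper)
  await redis.set(`user:${id}`, JSON.stringify(user), 'EX', 60); // 60-second TTL
  return user;
}
```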

28. What is esbuild?

A next-gen bundler written in Go. It is orders of magnitude faster than Webpack. While not a runtime, it is increasingly used in the Node ecosystem (e.g., in Vite or for bundling serverless functions) to speed up deployment builds.

29. Event Delegation in Node.js?

While usually a DOM concept, in Node.js streams or sockets, we often attach a single listener to a central Emitter to manage routing logic based on the event type/payload, rather than attaching listeners to every individual object, saving memory.

30. How to write V8-optimized JavaScript?

Maintain 'Hidden Classes' (Object shapes). Always initialize objects with properties in the same order. Avoid adding/deleting properties dynamically, as this forces V8 to bail out of optimization (de-opt) and revert to slower execution paths.
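
A small illustration of keeping object shapes stable:

```js
class Point {
  constructor(x, y) {
    this.x = x; // always initialize the same properties...
    this.y = y; // ...in the same order, so every Point shares one hidden class
  }
}

// Anti-pattern: p.z = 1 or delete p.x later changes the shape and can de-optimize
// hot functions that were specialized for the original hidden class.
```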

31. JWT Best Practices.

Sign with a strong algorithm: RS256 (asymmetric key pair) when other services need to verify tokens, or HS256 with a long random secret. Keep payloads small. Set short expirations (e.g., 15 min) and use Refresh Tokens. Never store sensitive PII in the token payload, as it is only Base64url-encoded, not encrypted.

32. Handling I18n/L10n in backend.

We detect locale via Accept-Language headers. We use libraries like i18next to load JSON translation files. Key challenge is date/currency formatting, which should be handled by standardizing on ISO formats internally and formatting only at the presentation layer.

33. Server-Side Rendering (SSR) trade-offs.

SSR (e.g., Next.js) improves SEO and First Contentful Paint. However, it shifts rendering load from the client to the server, making the Node.js layer CPU-bound. This requires significantly more server resources and caching strategies compared to serving a static SPA.

34. Monolith vs Microservices: Decision Criteria.

Start with a Monolith. Only split when specific domains need to scale independently, or when team sizes grow too large to work on a single repo. Microservices introduce 'Distributed Complexity'—network latency, data consistency issues—which shouldn't be incurred prematurely.

35. DDoS protection for Node.js servers.

Node.js should not be the frontline defense. Use a reverse proxy (Nginx) or CDN (Cloudflare) to absorb volumetric attacks. At the app level, implement Rate Limiting and Payload size limits to prevent resource exhaustion.

36. Utility of util.promisify.

It bridges the gap between legacy callback-style APIs and modern Async/Await. It wraps a callback-based function and returns a Promise, allowing us to modernize older codebases without rewriting the underlying libraries.
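
A minimal example (the file name is illustrative):

```js
const { promisify } = require('util');
const fs = require('fs');

const readFileAsync = promisify(fs.readFile); // callback API -> Promise API

async function loadConfig() {
  const contents = await readFileAsync('config.json', 'utf8');
  return JSON.parse(contents);
}
```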

37. Why is Telemetry vital?

In distributed systems, failure is inevitable. Telemetry allows us to answer 'Why is it failing?'. Without it, debugging production issues is guesswork. It drives capacity planning and performance optimization based on real data.

38. Promise.all() vs Promise.race() use cases.

all is for aggregation (get User AND get Posts). race is useful for timeouts: race the fetch request against a 5-second timer Promise that rejects. Whichever finishes first wins.
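
A sketch of the timeout pattern (the URL is illustrative); note that the losing request keeps running unless you also abort it with an AbortController:

```js
function withTimeout(promise, ms) {
  const timeout = new Promise((_, reject) =>
    setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms)
  );
  return Promise.race([promise, timeout]);
}

// const users = await withTimeout(fetch('https://api.example.com/users'), 5000);
```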

39. Deep Dive: setImmediate vs setTimeout(fn, 0).

setImmediate is designed to run in the 'Check' phase, specifically after I/O events. setTimeout runs in the 'Timer' phase. setTimeout(fn, 0) forces the timer phase but has a minimum delay (1ms+). setImmediate is generally cleaner for I/O related recurrence.

40. How to stay current in the Node ecosystem?

Follow the Node.js Release Working Group. Watch TC39 proposals (JS language updates). Follow discussions on GitHub for key ecosystem tools (Vite, Prisma, Fastify). Understanding the 'Why' behind changes is more important than just the syntax.

41. What are the different core modules used in Node.js?

Essential built-in modules include: http/https (creating servers), fs (file system operations), path (file path manipulations), os (OS information), events (event emitters), stream (handling streaming data), and crypto (encryption/hashing).

42. How is clustering used to enhance the performance of Node.js?

Since Node.js is single-threaded, it runs on only one CPU core by default. The Cluster module allows you to fork the main process into multiple worker processes (typically one per CPU core). These workers share the same server port but run independently, allowing the application to utilize the full processing power of a multi-core machine.
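
A minimal clustering sketch (one worker per core, all sharing port 3000):

```js
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isPrimary) {
  for (let i = 0; i < os.cpus().length; i++) cluster.fork();
  cluster.on('exit', () => cluster.fork());           // replace crashed workers
} else {
  http.createServer((req, res) => res.end(`handled by ${process.pid}`))
      .listen(3000);
}
```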

43. Is cryptography supported in Node.js?

Yes, via the built-in crypto module. It provides wrappers for OpenSSL functionality, allowing you to generate hashes (SHA), perform encryption/decryption (AES), sign/verify data, and manage keys. It is essential for securing passwords (pbkdf2 or scrypt) and handling SSL/TLS protocols.

44. Why should Express 'app' and 'server' creation be kept separate?

Separating the app definition (const app = express()) from the server listener (app.listen()) is a best practice for Testing. It allows you to import the app into integration testing tools (like Supertest) without starting the network listener, preventing 'port in use' errors and speeding up test suites.
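
A sketch of the split (file names are illustrative):

```js
// app.js – define and export the app without listening
const express = require('express');
const app = express();
app.get('/health', (req, res) => res.json({ ok: true }));
module.exports = app;

// server.js – only the entry point binds a port
// const app = require('./app');
// app.listen(3000);

// app.test.js – Supertest drives the app directly, no open port needed
// const request = require('supertest');
// await request(app).get('/health').expect(200);
```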

45. What is libuv in Node.js?

Libuv is a multi-platform C++ library that provides Node.js with its asynchronous I/O capabilities. It implements the Event Loop and manages the Thread Pool (for blocking operations like file I/O and DNS). It abstracts the underlying OS differences (IOCP on Windows, epoll on Linux, kqueue on Mac) to provide a unified API.

46. Explain the EventEmitter class in Node.js.

The EventEmitter is a core class that facilitates communication between objects in Node.js. An instance can register listeners using .on('eventName', callback) and trigger them using .emit('eventName', data). It is synchronous by default (listeners run immediately when emitted) and is the parent class for many core modules like Stream and HTTP.

47. What is the usage of the Buffer class in Node.js?

The Buffer class handles raw binary data. JavaScript strings are designed for text (sequences of UTF-16 code units) and are not efficient for binary streams (images, compressed files, TCP streams). Buffers are fixed-length chunks of memory allocated outside the V8 garbage-collected heap, allowing efficient interaction with the OS and other lower-level system components.

48. Explain Punycode in Node.js.

Punycode is an encoding syntax used to convert Unicode characters (like emojis or foreign scripts) into a restricted ASCII character set (A-Z, 0-9). It was historically used in Node.js for converting Internationalized Domain Names (IDNs) so DNS servers could understand them. Note: The core Punycode module is deprecated in recent Node versions in favor of the WHATWG URL API.

49. Name some of the exit codes of Node.js.

  • 0: Successful termination.
  • 1: Uncaught Fatal Exception (generic error).
  • 5: Fatal Error (V8 engine error).
  • 9: Invalid Argument (an unknown option was specified).
  • >128: Signal Exit (128 + signal number; e.g., SIGKILL = 9 gives 137). Understanding these helps in debugging crashed containers or CI pipelines.

50. How does Node.js handle child threads?

Traditionally, Node.js uses child_process to spawn new processes (heavyweight). For actual threading, it provides the Worker Threads module (worker_threads). Unlike processes, Workers share memory (SharedArrayBuffer), making them efficient for CPU-intensive tasks (like image processing) without blocking the main event loop.

51. If Node.js is single-threaded, how does it handle concurrency effectively?

Node.js handles concurrency using the Event Loop and the Reactor Pattern. While the main JavaScript execution runs on a single thread, heavy lifting (I/O operations, cryptography, file system tasks) is offloaded to the C++ APIs (libuv) which utilize the OS kernel's asynchronous capabilities or a pool of worker threads. The main thread is never blocked; it registers a callback and moves on. When the background task finishes, the callback is queued for execution, allowing Node to handle thousands of concurrent connections efficiently.

52. Deep dive into the Node.js Event Loop: How does it function?

The Event Loop is the mechanism that allows Node.js to perform non-blocking I/O operations. It constantly monitors the Call Stack and the Callback Queue. If the Call Stack is empty, it dequeues the first event from the queue and pushes it to the stack. The loop has specific phases (Timers, Pending Callbacks, Poll, Check, Close Callbacks) that determine the priority of operations. For example, 'process.nextTick' runs immediately after the current operation, while 'setImmediate' runs in the Check phase.

53. What is the execution order of control flow statements in Node.js (Event Loop phases)?

The execution order generally follows:

  1. Synchronous code (Main Stack),
  2. process.nextTick() (Microtask),
  3. Promise callbacks (Microtask),
  4. Timers (setTimeout, setInterval),
  5. I/O Callbacks (network, file system),
  6. setImmediate (Check phase),
  7. Close handlers. Understanding this order is crucial for debugging race conditions and ensuring code runs when expected.
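
An illustrative snippet whose output follows the order above:

```js
setTimeout(() => console.log('timeout'), 0);
setImmediate(() => console.log('immediate'));
Promise.resolve().then(() => console.log('promise'));
process.nextTick(() => console.log('nextTick'));
console.log('sync');
// Typical output: sync, nextTick, promise, timeout, immediate
// (timeout vs immediate ordering is only guaranteed inside an I/O callback)
```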

54. What are Streams in Node.js and what are the different types?

Streams are objects that let you read data from a source or write data to a destination in continuous chunks, rather than loading the entire dataset into memory at once. This is crucial for handling large files or network data. There are four types: Readable (reading data), Writable (writing data), Duplex (both readable and writable, like sockets), and Transform (modifying data as it is written/read, like compression).

55. What is the technical difference between setImmediate() and process.nextTick()?

The naming is counter-intuitive. process.nextTick() fires immediately after the current operation completes, before the Event Loop continues to the next phase. It has the highest priority and can starve the event loop if misused. setImmediate() fires in the 'Check' phase of the Event Loop, technically after I/O events. So, nextTick runs before setImmediate.

56. What is the difference between spawn() and fork() in the Child Process module?

spawn() creates a new process and launches a command (like a shell command); it streams data (stdout/stderr) and is memory efficient for large data transfers. fork() is a special case of spawn designed specifically for Node.js processes. It establishes a separate communication channel (IPC) allowing the parent and child to exchange messages via send() and on('message'), making it ideal for splitting heavy computation tasks.
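
A sketch of fork() with the IPC channel (the file name and payload are illustrative):

```js
// parent.js
const { fork } = require('child_process');
const child = fork('./heavy-task.js');

child.send({ numbers: [1, 2, 3] });
child.on('message', (sum) => console.log('sum from child:', sum));

// heavy-task.js
// process.on('message', ({ numbers }) => {
//   process.send(numbers.reduce((a, b) => a + b, 0));
// });
```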

57. What is the Cluster module in Node.js?

The Cluster module allows Node.js to create child processes (workers) that run simultaneously and share the same server port. Since Node is single-threaded, a single instance only uses one CPU core. Clustering enables the application to fork itself X times (usually equal to the number of CPU cores) to maximize hardware utilization and handle more concurrent load.

58. Explain some common Cluster methods.

cluster.fork() creates a new worker process. cluster.isPrimary (formerly cluster.isMaster) is true in the primary process, and cluster.isWorker is true in worker processes. cluster.on('exit') is an event listener used to detect when a worker dies (so you can restart it to maintain uptime).

59. What is the precise difference between setImmediate() and setTimeout()?

setTimeout() schedules a script to be run after a minimum threshold (in ms). setImmediate() is designed to execute a script once the current poll phase completes. If both are called within a main module, the order is non-deterministic (bound by process performance). However, if called within an I/O cycle, setImmediate is guaranteed to run before setTimeout.

60. Puzzle: What is the security flaw in this API key check? if (apiKeyFromDb === apiKeyReceived) ...

This is vulnerable to a Timing Attack. Standard string comparison returns as soon as a character mismatch is found. An attacker can measure the time it takes for the server to respond to guess the key character by character. The solution is to use crypto.timingSafeEqual() which compares buffers in constant time regardless of the content.
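
A constant-time comparison sketch using the built-in crypto module:

```js
const crypto = require('crypto');

function safeCompare(a, b) {
  const bufA = Buffer.from(a);
  const bufB = Buffer.from(b);
  // timingSafeEqual throws if lengths differ, so check first (length is not the secret here)
  if (bufA.length !== bufB.length) return false;
  return crypto.timingSafeEqual(bufA, bufB);
}
```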

61. Puzzle: What is the output of this Promise chain? Promise.resolve(1).then(x => x + 1).then(x => { throw new Error() }).catch(() => 1).then(x => x + 1).then(console.log)

The output is 2. Explanation:

  1. Start with 1.
  2. Add 1 -> becomes 2.
  3. Error is thrown.
  4. Catch block handles error and returns 1 (resurrecting the chain).
  5. Next then adds 1 to the caught value (1+1) -> becomes 2.
  6. Logs 2.

62. What is an Event Emitter in Node.js?

It is a core class (events module) that facilitates communication between objects in Node.js. An object inheriting from EventEmitter can emit named events (emit('eventName', data)), and other parts of the application can listen and react to them (on('eventName', callback)). It is the basis for streams, HTTP servers, and many other Node components.

63. How can you enhance Node.js performance using Clustering?

Since Node is single-threaded, it ignores multi-core CPUs by default. The cluster module allows you to fork the main process into multiple worker processes (one per CPU core). These workers share the same TCP port, allowing the OS to load-balance incoming traffic across all available cores, instantly multiplying throughput.

64. What is the Thread Pool and which library handles it in Node.js?

The Thread Pool is a pool of C++ threads (default 4) used to execute blocking operations (Filesystem, Crypto, Compression, DNS). It is managed by the Libuv library. Unlike the main JS thread, these threads can run in parallel.

65. What is WASI and why is it being introduced to Node.js?

WASI (WebAssembly System Interface) is a standard that allows WebAssembly to run outside the browser in a secure, portable way. It allows Node.js to execute compiled binaries (from Rust, C++, etc.) at near-native speed safely, providing a high-performance alternative to native C++ addons (node-gyp).

66. How are Worker Threads different from Clusters?

Clusters create separate processes (new V8 instance, new memory) which is heavy but provides isolation. Worker Threads run in the same process and share memory (via SharedArrayBuffer). Workers are better for CPU-intensive data processing, while Clusters are better for scaling HTTP traffic handling.

67. How do you measure the duration/performance of async operations?

We use the perf_hooks module for high-precision timing. performance.now() gives a timestamp. Alternatively, console.time('label') and console.timeEnd('label') is a simple way to log duration. For deep profiling, we use the built-in --prof flag or Chrome DevTools.

68. Explain the Node.js 'Reactor Pattern' and how it enables non-blocking I/O.

The Reactor Pattern is the heart of Node.js's asynchronous architecture. It works by having an 'Event Demultiplexer' (via Libuv) that listens for I/O requests (like file reads or network calls) and delegates them to the OS kernel. Instead of blocking the main thread while waiting for a response, the main thread continues executing. When the OS finishes the task, it enqueues an event into the 'Event Queue'. The Event Loop then picks up this event and executes the associated callback handler. This allows a single-threaded process to handle thousands of concurrent connections efficiently, provided the callbacks are non-blocking.

69. How does the V8 engine optimize JavaScript execution in Node.js? Explain JIT compilation.

V8 uses Just-In-Time (JIT) compilation to execute JavaScript. It uses two main components: 'Ignition', an interpreter that compiles JavaScript to bytecode and executes it, and 'TurboFan', an optimizing compiler. V8 profiles code as it runs; if a function is executed frequently ('hot code'), TurboFan compiles it to optimized machine code by making assumptions about types. If those assumptions fail (e.g., a variable's type changes), it 'de-optimizes' back to bytecode. A senior developer writes consistent, type-stable code to maximize these optimizations.

70. How do you handle 'uncaught exceptions' and 'unhandled rejections' in a production Node.js service?

While you can listen for uncaughtException and unhandledRejection events to log errors, the Node.js process is in an undefined state after such an event. The only safe senior-level strategy is to:

  1. Catch the error and log it (using a structured logger like Pino/Winston),
  2. Gracefully close all server connections and database pools, and
  3. Exit the process (process.exit(1)). Rely on a process manager (like PM2, Kubernetes, or Docker) to automatically restart the service, ensuring it returns to a clean, known state.

71. Explain the concept of 'Clustering' in Node.js. How does it utilize multi-core systems?

Since Node.js is single-threaded, a single instance runs on only one CPU core. The cluster module allows you to fork the main process into multiple 'worker' processes (typically one per core). These workers share the same TCP connection/port but run their own Event Loops and memory spaces. The OS distributes incoming traffic across these workers. This effectively multiplies the application's throughput. However, state (like user sessions) cannot be shared in memory; it must be externalized to a store like Redis.

72. What are Worker Threads (worker_threads), and how do they differ from Clustering?

While Clustering creates entirely new processes (with separate memory), Worker Threads create new threads within the same process instance. Workers share memory (via SharedArrayBuffer) and are lighter weight than processes. Use Clustering for scaling I/O-bound web servers across cores. Use Worker Threads for parallelizing CPU-heavy tasks (like cryptography, compression, or complex JSON parsing) within a single application instance without blocking the main Event Loop.
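
A sketch of offloading a CPU-heavy task to a worker (the file name and the expensiveComputation helper are hypothetical):

```js
// main.js
const { Worker } = require('worker_threads');

function runHeavyTask(payload) {
  return new Promise((resolve, reject) => {
    const worker = new Worker('./heavy-worker.js', { workerData: payload });
    worker.once('message', resolve);
    worker.once('error', reject);
  });
}

// heavy-worker.js
// const { parentPort, workerData } = require('worker_threads');
// parentPort.postMessage(expensiveComputation(workerData)); // hypothetical CPU-bound function
```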

73. What is WASI (WebAssembly System Interface) and why is it relevant to the future of Node.js?

WASI is a standard that provides WebAssembly (Wasm) modules with secure, standardized access to system resources (like files and networking) outside the browser. For Node.js, this allows developers to run highly performant code written in Rust, C++, or Go seamlessly within a Node environment, with near-native speed and a sandboxed security model. It represents the future of portable, high-performance server-side modules.

74. How do you mitigate Memory Leaks in Node.js applications?

Memory leaks often occur due to global variables, uncleared intervals/timers, or closures holding onto references unnecessarily. Detection involves using the Chrome DevTools 'Memory' tab (via --inspect) to take heap snapshots and compare them. In code, avoid global state, ensure all event listeners are removed when no longer needed (using .off or once), and use WeakMaps for caching object references that should be garbage collected.

75. Explain 'Punycode' in Node.js and its use case.

Punycode is an encoding scheme used to convert Unicode characters (like emojis or international domain names) into a restricted ASCII character set supported by the Domain Name System (DNS). In Node.js, the punycode module (now deprecated in favor of WHATWG URL API) was used to handle these conversions. It ensures that a domain like münchen.de can be correctly resolved by DNS servers that only understand ASCII.

76. What is the purpose of the crypto module, and does Node.js support FIPS mode?

The crypto module provides cryptographic functionality including OpenSSL's hash, HMAC, cipher, decipher, sign, and verify functions. It is essential for hashing passwords (via pbkdf2 or scrypt) and encrypting data. Yes, Node.js can be compiled and run in FIPS mode (Federal Information Processing Standards), which forces the application to use only FIPS-compliant cryptographic algorithms, a requirement for many government and financial sector applications.

77. How does Libuv manage the thread pool, and what is the default pool size?

Libuv uses a thread pool to handle 'expensive' tasks that cannot be handled by the OS kernel asynchronously (specifically: file system operations, DNS lookups, and some crypto functions). The default pool size is 4 threads. If you have a high volume of these specific operations, they can become a bottleneck. You can increase this limit by setting the UV_THREADPOOL_SIZE environment variable (up to 1024) to improve throughput for these specific blocking tasks.

78. Explain the concept of 'Tracing' in Node.js.

Tracing allows you to collect detailed performance information about the execution of a Node.js application. Enabled via the trace_events module (or --trace-event-categories flag), it generates logs compatible with the Chrome DevTools trace viewer. This captures low-level details like V8 engine garbage collection events, async hooks, and file system sync/async calls. It is a powerful tool for diagnosing complex performance issues and understanding exactly where the Event Loop is spending its time.

79. What are async_hooks in Node.js?

async_hooks is a core module that provides an API to track the lifetime of asynchronous resources. It allows you to register callbacks (init, before, after, destroy) that trigger whenever an async resource (such as a Promise or a TCPWRAP handle) is created or executed. This is the underlying technology used by Application Performance Monitoring (APM) tools (like New Relic or Datadog) to trace transactions across asynchronous boundaries, and it is the machinery behind AsyncLocalStorage for maintaining per-request context.

80. How does Node.js handle DNS lookups, and why can this be a performance bottleneck?

Unlike most I/O in Node.js, dns.lookup() (which is used by default for http.request) is synchronous at the system library level (specifically getaddrinfo). To prevent blocking the main thread, Libuv offloads these lookups to its thread pool. Since the thread pool has a default size of 4, a high volume of DNS lookups can exhaust the pool, blocking other file system operations that also need the pool. A solution for high-load systems is to use a custom DNS resolver or increase UV_THREADPOOL_SIZE.

81. What is 'Piping' in Node.js Streams and how does it relate to 'Backpressure'?

Piping is a mechanism to connect the output of one stream (Readable) directly to the input of another (Writable), using readStream.pipe(writeStream). It manages the data flow automatically. Crucially, it handles 'Backpressure': if the Writable stream (e.g., a slow disk write) cannot keep up with the Readable stream (e.g., a fast network download), pipe() automatically pauses the reading until the writing buffer drains. This prevents memory overflow (RAM spikes) without requiring manual flow control logic.
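
A sketch using stream.pipeline, which handles backpressure and error propagation more robustly than chaining .pipe() manually (file names are illustrative):

```js
const { pipeline } = require('stream/promises');
const fs = require('fs');
const zlib = require('zlib');

async function compressLog() {
  await pipeline(
    fs.createReadStream('input.log'),
    zlib.createGzip(),                    // Transform stream
    fs.createWriteStream('input.log.gz')  // a slow writer automatically pauses the reader
  );
}
```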

82. Why is a Reverse Proxy (like Nginx) recommended in front of a Node.js server in production?

While Node.js is excellent at handling application logic, it is not optimized for serving static assets, handling SSL/TLS termination, or managing load balancing across multiple processes. Nginx sits in front of Node.js to:

  1. Serve static files (CSS/Images) much faster,
  2. Handle HTTPS encryption (offloading CPU work from Node),
  3. Compress responses (Gzip), and
  4. Route traffic to different Node.js instances (Load Balancing). This protects the Node server from direct internet traffic exposure.

83. How does MongoDB Connection Pooling work in a Node.js driver, and why is it important?

Opening a new database connection for every incoming HTTP request is expensive (requires TCP handshake + Auth) and slow. Connection Pooling maintains a cache of open, reusable connections. When a request comes in, the driver borrows a connection from the pool, performs the query, and returns the connection to the pool rather than closing it. This drastically increases throughput. In the MongoDB Node.js driver, the pool size is configurable (default is usually 100), and managing this size is key to scaling.

84. What is 'Event Loop Starvation' and how can you prevent it?

Starvation occurs when a CPU-intensive task (like calculating a Fibonacci sequence or image processing) occupies the main thread for too long. Since the Event Loop is single-threaded, it cannot process new I/O events, timers, or network requests during this time, making the server appear 'frozen'. Prevention strategies involve:

  1. Offloading heavy tasks to Worker Threads,
  2. Using setImmediate to break long loops into smaller chunks, or
  3. Moving the task to a dedicated microservice.

85. What is the V8 Heap and how do you monitor it?

The Heap is the memory segment where V8 stores objects and dynamic data (Strings, Closures). It is garbage collected. If the Heap fills up (the default old-space limit was historically around 1.5GB on 64-bit systems and is tunable via --max-old-space-size), the process crashes with 'Out of Memory'. You monitor it using process.memoryUsage(), which reports heapTotal and heapUsed. For deep analysis, you generate 'Heap Snapshots' using the Chrome DevTools Inspector to find memory leaks.

86. How do you handle 'Circular Dependencies' in Node.js modules, and why are they problematic?

Circular dependencies occur when Module A requires Module B, and Module B requires Module A. In Node.js (CommonJS), this results in one of the modules receiving an incomplete (partially loaded) copy of the other, often causing undefined function errors. To resolve this, a senior developer would: 1) Refactor the shared logic into a separate third module (Module C) that both A and B import, or 2) Use dependency injection to pass the required module at runtime rather than import time.

87. What is the vm module in Node.js, and when would you use it?

The vm (Virtual Machine) module allows you to compile and run code within separate V8 contexts. It provides a way to execute JavaScript code in a 'sandboxed' environment, separate from the main application scope. It is often used for running untrusted code, building plugin systems, or template engines. However, it is not a security mechanism by itself (it is not fully isolated like a Docker container), so running hostile code in vm can still be dangerous without further restrictions.

88. How does Node.js handle Keep-Alive connections, and why does it matter for performance?

HTTP Keep-Alive allows a single TCP connection to remain open for multiple HTTP requests/responses, reducing the overhead of the TCP 3-way handshake. Node.js's default HTTP agent (prior to v19) did not enable Keep-Alive by default, creating a new connection for every request. In high-throughput microservices, this exhausts ephemeral ports. A senior developer explicitly configures the http.Agent with keepAlive: true to reuse connections, significantly reducing latency and CPU usage.
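
A sketch of an explicit keep-alive agent for outbound calls (host, port, and path are illustrative):

```js
const http = require('http');

const agent = new http.Agent({ keepAlive: true, maxSockets: 50 });

http.get({ host: 'internal-service', port: 8080, path: '/health', agent }, (res) => {
  res.resume(); // drain the response so the socket is released back for reuse
});
```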

89. How do you manage database transactions (ACID) in Node.js with MongoDB or SQL?

In a non-blocking environment, ensuring atomicity is critical. In SQL (Sequelize/TypeORM), you use 'Managed Transactions' where a callback receives a transaction object (t), and queries pass this object; if the callback throws, the transaction rolls back automatically. In MongoDB (Mongoose), you use startSession() and session.withTransaction(). A senior developer knows that transactions require a Replica Set in MongoDB and understands the performance trade-off (locking) involved.

90. What is the cluster module's scheduling policy (Round Robin vs OS)?

The scheduling policy determines how incoming connections are distributed to worker processes. On Windows, Node.js defaults to letting the OS handle distribution, which often leads to uneven loads (some workers idle, others overloaded). On Linux/macOS, Node.js defaults to RR (Round Robin), where the master process accepts connections and distributes them evenly to workers. You can force a specific policy using cluster.schedulingPolicy.

91. Why would you use a Message Queue (RabbitMQ/Kafka) instead of direct HTTP calls between microservices?

Direct HTTP calls (REST) create tight coupling; if the Receiver service is down or slow, the Sender fails or hangs (cascading failure). Message Queues introduce asynchronous decoupling. The Sender pushes a message to the queue and continues immediately. The Receiver processes messages at its own pace. This ensures durability (messages persist if Receiver crashes), load smoothing (handling traffic spikes without crashing), and scalability.

92. How does TLS/SSL work in Node.js?

The tls module provides the implementation for Transport Layer Security. In production, Node.js is capable of handling SSL termination (using https.createServer with .key and .cert files). However, in practice, it is computationally expensive (handshakes involve heavy math). Therefore, it is best practice to offload SSL termination to a Reverse Proxy (Nginx/HAProxy) or Cloud Load Balancer, allowing the Node.js process to speak plain HTTP over the private network.

93. What is the Reflect API in JavaScript?

Reflect is a built-in object that provides methods for interceptable JavaScript operations. It mirrors the methods of the Proxy handler object. It is used to forward default operations in Proxies (e.g., Reflect.get(), Reflect.set()) or to perform meta-programming tasks like Reflect.has() (checking property existence) in a functional style. It standardizes operations that were previously disparate (like delete operator vs Object.defineProperty).

94. What is the net module in Node.js, and how does it differ from the http module?

The net module is a lower-level API used to create raw TCP (Transmission Control Protocol) servers and clients. It deals with streams of binary data. The http module is built on top of net and adds logic to parse HTTP headers, methods, and body encoding. A senior developer uses net for building custom protocols (like a chat server, database driver, or IoT communication) where the overhead of HTTP headers is unnecessary.

95. Explain the concept of 'Garbage Collection' (GC) in Node.js. How does 'Mark-and-Sweep' work?

V8 manages memory automatically. 'Mark-and-Sweep' is the primary algorithm for old-generation garbage collection. 1) Mark: The GC starts from 'Root' references (global variables, active closures) and traverses the object graph, marking every reachable object as 'alive'. 2) Sweep: It scans the memory heap and frees any memory block not marked (i.e., unreachable objects). This process pauses execution (Stop-The-World), so minimizing object churn is key to preventing latency spikes.

96. What are 'Exit Codes' in Node.js? Why are 128+ codes significant?

Exit codes communicate the process termination status to the OS. 0 is success, 1 is uncaught fatal exception. Codes above 128 indicate the process was killed by a signal. The formula is 128 + Signal Number. For example, SIGKILL (signal 9) results in exit code 137 (128+9). Seeing exit code 137 in Kubernetes or Docker logs immediately tells a senior developer the container was killed for using too much memory (OOMKilled).

97. How do you generate a 'Heap Snapshot' in a running production server without crashing it?

You can generate a snapshot using the v8 module: require('v8').writeHeapSnapshot(), or by sending a signal (like SIGUSR2) if configured. However, taking a snapshot pauses the main thread (blocking requests for seconds). A senior strategy is to remove the server from the Load Balancer rotation, trigger the snapshot, offload the file to S3, and then bring it back online (or kill/restart it), ensuring users are not affected by the pause.

98. What is the cluster module's default load balancing strategy?

On non-Windows platforms, the default is Round Robin (the primary process accepts connections and distributes them). On Windows, it defaults to the OS handling the distribution (which can be uneven). You can override this by setting cluster.schedulingPolicy = cluster.SCHED_RR or SCHED_NONE. Knowing this is vital when debugging why one worker is at 100% CPU while others are idle on a Windows server.

99. What are the arguments for async.queue (from the async library), and is this library still relevant?

async.queue(worker, concurrency) takes two arguments: a worker function (which processes tasks) and a concurrency integer (how many workers run in parallel). While the async library was essential in the callback era, modern Node.js developers often replace it with native solutions like p-limit (for concurrency control with Promises) or simple for...of loops with await. However, async.queue is still valid for complex task processing pipelines.

100. Behavioral: Why do you think you are the right fit for a Senior Node.js role?

Example Senior Answer: 'Beyond just knowing the syntax, I understand the ecosystem and trade-offs. I've scaled Node.js apps from single instances to clustered microservices. I know when not to use Node (e.g., heavy CPU computation) and how to debug memory leaks in production. I prioritize maintainability—writing clean, testable code and documenting APIs—so the team can move fast without breaking things. I can mentor juniors on async patterns and security best practices.'

101. Behavioral: Describe a challenging technical problem you solved in Node.js.

Strategy: Pick a specific, complex scenario. Example: 'We had an issue where our API latency spiked every hour. I used clinic.js and heap snapshots to diagnose a memory leak caused by a closure retaining database connections. I refactored the connection logic to use a proper pool and implemented a graceful shutdown strategy. This reduced latency by 90% and eliminated the hourly crashes.'

102. Describe the phases of the Node.js Event Loop and how it orchestrates asynchronous operations.

The Event Loop is the mechanism that allows Node.js to perform non-blocking I/O operations. It consists of several phases, handled by the libuv library. The primary phases are: 1) Timers: Executes callbacks scheduled by setTimeout() and setInterval(). 2) Pending Callbacks: Executes I/O callbacks deferred to the next loop iteration. 3) Idle, Prepare: Internal use only. 4) Poll: Retrieves new I/O events; executes I/O related callbacks. 5) Check: Executes setImmediate() callbacks. 6) Close Callbacks: Handles close events (e.g., socket.on('close', ...)). Understanding these phases is critical for debugging timing issues and understanding the execution order of macro-tasks versus micro-tasks (like Promises and process.nextTick).
