THN Interview Prep

Pub/Sub (Concurrency Angle)

Intent / problem it solves

Decouple publishers from subscribers via asynchronous channels so that producers never block on slow consumers beyond what the bounded-buffer policy allows. At runtime it is the concurrency-friendly cousin of the Observer pattern.

When to use / when NOT

Use for cross-thread dispatch, stream processing ingress, or protecting callers from downstream spikes with queues.

Avoid it when you need strong ordering across all events unless you design explicit partitions; watch out for back-pressure and poison messages.
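One way back-pressure can be handled without stalling publishers is a drop-on-full policy on the bounded buffer. This is a minimal sketch (the `tryPublish` helper is a hypothetical name, not from the original notes) using Go's non-blocking `select` with `default`:

```go
package main

import "fmt"

// tryPublish applies a drop-on-full policy: when the bounded buffer is
// saturated, the message is discarded instead of blocking the publisher.
// A real broker would also record a metric for each drop.
func tryPublish(ch chan string, msg string) bool {
	select {
	case ch <- msg:
		return true
	default:
		return false // buffer full: drop rather than block
	}
}

func main() {
	ch := make(chan string, 1)
	fmt.Println(tryPublish(ch, "a")) // buffered
	fmt.Println(tryPublish(ch, "b")) // dropped; publisher keeps going
}
```

Blocking sends are the alternative policy: they propagate back-pressure upstream instead of losing data, which is often the better default when the producer can tolerate waiting.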

Structure

Brokers route messages; consumers run concurrently per subscription; buffers absorb bursts until limits trigger drops or blocking.


Go example

package main

import (
	"fmt"
	"sync"
)

func main() {
	// Bounded buffer: publishers block only once two messages are in flight.
	messages := make(chan string, 2)
	var group sync.WaitGroup
	group.Add(1)
	// Subscriber goroutine drains the channel until it is closed.
	go func() {
		defer group.Done()
		for payload := range messages {
			fmt.Println("worker", payload)
		}
	}()
	messages <- "alpha"
	messages <- "beta"
	close(messages) // signals the subscriber loop to exit
	group.Wait()
}

JavaScript example

function createBus() {
  const handlers = new Set();
  return {
    publish(event) {
      for (const handler of handlers) {
        // queueMicrotask keeps delivery asynchronous, so a slow handler
        // cannot block the publisher's call stack.
        queueMicrotask(() => handler(event));
      }
    },
    subscribe(handler) {
      handlers.add(handler);
      return () => handlers.delete(handler); // returns an unsubscribe function
    },
  };
}

const bus = createBus();
bus.subscribe((event) => console.log('first', event.kind));
bus.subscribe((event) => console.log('second', event.kind));
bus.publish({ kind: 'tick' });

Interview phrase

“Pub/sub at the concurrency layer uses bounded queues and independent consumer loops so spikes do not stall publishers—pair with metrics on lag and DLQs.”
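The DLQ half of that phrase can be sketched with a second channel that quarantines messages a consumer cannot process, so one poison message never wedges the loop. The `handle` function and its "poison" payload here are hypothetical stand-ins for real processing logic:

```go
package main

import (
	"errors"
	"fmt"
)

// handle stands in for real processing; it fails on a "poison" payload.
func handle(msg string) error {
	if msg == "poison" {
		return errors.New("unprocessable")
	}
	return nil
}

func main() {
	messages := make(chan string, 3)
	deadLetters := make(chan string, 3) // DLQ: quarantines failed messages

	messages <- "ok"
	messages <- "poison"
	close(messages)

	for msg := range messages {
		if err := handle(msg); err != nil {
			deadLetters <- msg // route aside instead of retrying forever
			continue
		}
		fmt.Println("processed", msg)
	}
	close(deadLetters)
	for msg := range deadLetters {
		fmt.Println("dead-lettered", msg)
	}
}
```

In production the dead-letter channel would be a durable queue with alerting, so operators can inspect and replay quarantined messages.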

Connect this to LLD case studies on notification routers, domain events, or websocket fan-out. See the pub/sub building block and the message queue vs. stream comparison.
