THN Interview Prep

Worker Pool (Concurrency)

Intent / problem it solves

Bound parallelism and reuse a fixed set of workers that pull tasks from a queue. Prevents unbounded goroutine/thread growth and smooths load.

When to use / when NOT

Use for CPU or I/O work with a natural cap (image thumbnailing, HTTP fan-out to backends).

Avoid when tasks need strict per-tenant ordering without sharding; a per-key executor (see the sketch below) keeps order by routing every task for a key to the same worker. For CPU-bound work, size the pool to the number of cores.
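
A minimal sketch of a per-key executor, assuming illustrative names like shardIndex and runSharded (not from any library): hashing each key to a fixed worker queue preserves per-key order while still capping concurrency.

package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

// shardIndex maps a key to one of n worker queues so all tasks
// for the same key land on the same goroutine, in order.
func shardIndex(key string, n int) int {
	h := fnv.New32a()
	h.Write([]byte(key))
	return int(h.Sum32() % uint32(n))
}

func runSharded(workerCount int) {
	queues := make([]chan string, workerCount)
	var group sync.WaitGroup
	for i := range queues {
		queues[i] = make(chan string, 8)
		group.Add(1)
		go func(q <-chan string) {
			defer group.Done()
			for key := range q {
				fmt.Println("processed", key) // per-key order preserved
			}
		}(queues[i])
	}
	for _, key := range []string{"tenant-a", "tenant-b", "tenant-a"} {
		queues[shardIndex(key, workerCount)] <- key
	}
	for _, q := range queues {
		close(q)
	}
	group.Wait()
}

func main() { runSharded(2) }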

Structure

Producers enqueue jobs; pool of workers consumes; results may return via channels or callbacks.


Go example

package main

import (
	"fmt"
	"sync"
)

type Task struct {
	Label string
}

func runPool(taskCount int, workerCount int) {
	tasks := make(chan Task, taskCount)
	var group sync.WaitGroup
	// Start a fixed set of workers; each drains the shared channel.
	for i := 0; i < workerCount; i++ {
		group.Add(1)
		go func() {
			defer group.Done()
			for task := range tasks {
				fmt.Println("done", task.Label)
			}
		}()
	}
	// Producer: enqueue every task, then close so workers exit.
	for index := 0; index < taskCount; index++ {
		tasks <- Task{Label: fmt.Sprintf("task-%d", index)}
	}
	close(tasks)
	group.Wait()
}

func main() { runPool(4, 2) }
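
The structure note above mentions returning results via channels; here is a minimal variant sketch (runPoolWithResults is an illustrative name) that collects squared inputs over a buffered results channel instead of printing.

package main

import (
	"fmt"
	"sync"
)

func runPoolWithResults(inputs []int, workerCount int) []int {
	tasks := make(chan int, len(inputs))
	// Buffered to len(inputs) so workers never block on sending results.
	results := make(chan int, len(inputs))
	var group sync.WaitGroup
	for i := 0; i < workerCount; i++ {
		group.Add(1)
		go func() {
			defer group.Done()
			for n := range tasks {
				results <- n * n // the "work": square each input
			}
		}()
	}
	for _, n := range inputs {
		tasks <- n
	}
	close(tasks)
	group.Wait()
	close(results) // safe: all workers have exited

	var out []int
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	fmt.Println(runPoolWithResults([]int{1, 2, 3, 4}, 2)) // order not guaranteed
}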

JavaScript example

async function runPool(jobs, workerCount) {
  const queue = [...jobs];
  // Each "worker" is a promise that repeatedly pulls the next job.
  // shift() is safe here: JavaScript is single-threaded, and there is
  // no await between the length check and the shift.
  const workers = Array.from({ length: workerCount }, async () => {
    while (queue.length > 0) {
      const next = queue.shift();
      if (next) {
        await next();
      }
    }
  });
  await Promise.all(workers);
}

runPool(
  [async () => console.log('a'), async () => console.log('b')],
  2,
);

Interview phrase

“Worker pool caps concurrency: producers never spawn unbounded threads; I tune pool size to CPU, I/O latency, and back-pressure on the task queue.”
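
One way to realize that back-pressure, sketched with a hypothetical tryEnqueue helper: a bounded channel plus a non-blocking send lets producers shed or retry instead of blocking indefinitely.

package main

import "fmt"

// tryEnqueue applies back-pressure: when the bounded queue is full,
// the producer is told to shed or retry rather than block.
func tryEnqueue(queue chan<- string, job string) bool {
	select {
	case queue <- job:
		return true
	default: // queue full: signal back-pressure to the caller
		return false
	}
}

func main() {
	queue := make(chan string, 2) // capacity bounds in-flight work
	for _, job := range []string{"a", "b", "c"} {
		if !tryEnqueue(queue, job) {
			fmt.Println("rejected (queue full):", job)
		}
	}
}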

Pairs well with job runners, download managers, and batch processors in LLD case studies. See also rate limiting.
