THN Interview Prep

Fan-Out / Fan-In (Concurrency)

Intent / problem it solves

Fan-out launches independent work in parallel (mapping over shards, calling many services); fan-in merges the results safely (channels, Promise.all, errgroup). The pattern maximizes throughput while still producing a single response or summary.

When to use / when NOT

Use for partitionable work, partial aggregations, or hedged requests with merge.

Avoid it when global ordering matters and there is no partitioning key, or when duplicate side effects make parallel calls unsafe; ensure idempotency first.

Structure

A coordinator launches workers; each worker returns a partial result; an aggregator reduces the partials under synchronization (a channel, lock, or promise join).


Go example

package main

import (
	"fmt"
	"sync"
)

// fetchShard stands in for a real per-shard query.
func fetchShard(shardId int) int {
	return shardId * 10
}

func main() {
	shards := []int{1, 2, 3}
	var group sync.WaitGroup
	// Buffered to len(shards) so workers never block, even if the reader lags.
	results := make(chan int, len(shards))
	for _, shardId := range shards {
		group.Add(1)
		go func(identifier int) { // pass shardId as an argument to avoid loop-variable capture
			defer group.Done()
			results <- fetchShard(identifier)
		}(shardId)
	}
	// Close the channel once all workers finish so the range loop terminates.
	go func() {
		group.Wait()
		close(results)
	}()
	total := 0
	for value := range results {
		total += value
	}
	fmt.Println(total)
}

JavaScript example

async function fetchShard(shardId) {
  return shardId * 10;
}

async function fanIn(shardIds) {
  const partial = await Promise.all(shardIds.map((id) => fetchShard(id)));
  return partial.reduce((sum, value) => sum + value, 0);
}

fanIn([1, 2, 3]).then(console.log);

Interview phrase

“Fan-out parallelizes independent shard queries; fan-in aggregates with clear merge semantics and handles partial failures with retries or error lists.”

Map this to LLD case studies: search aggregates, multi-region reads, or batch enrichment. Tie it to sharding.
