
Time & Space Complexity — Reference

Self-contained Big-O reference for common algorithms and data structures. Average and worst are called out when they differ.

Sorting

| Algorithm | Time (typical) | Time (worst) | Space | Notes |
|---|---|---|---|---|
| Merge sort | O(n log n) | O(n log n) | O(n) | Stable; predictable; not in-place for arrays |
| Quicksort | O(n log n) expected | O(n^2) | O(log n) avg / O(n) worst (stack) | In-place; pivot choice matters (randomized, median-of-three) |
| Heapsort | O(n log n) | O(n log n) | O(1) | In-place; not stable in typical implementations |

n = number of elements.
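As a concrete illustration of the quicksort row (a minimal sketch, not from the original page): a randomized Lomuto partition avoids the O(n^2) sorted-input worst case in expectation, and recursing on the smaller half keeps the stack at O(log n).

```python
import random

def quicksort(a):
    """In-place quicksort with a randomized pivot: expected O(n log n)."""
    def sort(lo, hi):
        while lo < hi:
            p = partition(lo, hi)
            # Recurse on the smaller half; loop on the larger one so the
            # call stack stays O(log n) even in the worst case.
            if p - lo < hi - p:
                sort(lo, p - 1)
                lo = p + 1
            else:
                sort(p + 1, hi)
                hi = p - 1

    def partition(lo, hi):
        # Random pivot guards against adversarial (e.g. already sorted) input.
        i = random.randint(lo, hi)
        a[i], a[hi] = a[hi], a[i]
        pivot, store = a[hi], lo
        for j in range(lo, hi):          # Lomuto partition scheme
            if a[j] < pivot:
                a[j], a[store] = a[store], a[j]
                store += 1
        a[store], a[hi] = a[hi], a[store]
        return store

    sort(0, len(a) - 1)
    return a
```

Note the sketch is not stable, matching the table: stability is what merge sort buys for its O(n) extra space.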

Heap operations (binary min/max heap, n items)

| Operation | Time | Space |
|---|---|---|
| insert / push | O(log n) | O(1) |
| extractMin / pop | O(log n) | O(1) |
| peek | O(1) | O(1) |
| buildHeap (heapify all) | O(n) | O(1) |

buildHeap is linear because most nodes are near leaves (tight analysis via level sums).
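A minimal sketch of that bottom-up build (assumed here, not from the original page; Python's `heapq.heapify` does the same thing in C): sift down each internal node from the last one up to the root. Leaves are already one-element heaps, so roughly half the nodes cost nothing, and the level-sum analysis bounds the total at O(n).

```python
def sift_down(a, i, n):
    """Restore the min-heap property at index i, assuming both subtrees are heaps."""
    while True:
        left, right, smallest = 2 * i + 1, 2 * i + 2, i
        if left < n and a[left] < a[smallest]:
            smallest = left
        if right < n and a[right] < a[smallest]:
            smallest = right
        if smallest == i:
            return
        a[i], a[smallest] = a[smallest], a[i]
        i = smallest

def build_min_heap(a):
    """Bottom-up heap construction: O(n) total, O(1) extra space."""
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):  # internal nodes only; leaves are trivial heaps
        sift_down(a, i, n)
    return a
```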

Hash map (separate chaining or open addressing)

| Operation | Average | Worst (pathological / many collisions) |
|---|---|---|
| get | O(1) | O(n) |
| put | O(1) | O(n) |
| delete | O(1) | O(n) |

Worst case assumes bad hash, adversarial input, or resize/rehash pathologies; good universal hashing + load factor bound keeps expected time near O(1).

Space: typically O(n) for n keys; trie-like string keys are a different model (see below).
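To make the separate-chaining model concrete, here is a toy map (an illustrative sketch only; `ChainedHashMap` and its load-factor-1 resize threshold are assumptions, not a real library API). Each bucket is a list of (key, value) pairs; `get`/`put` scan one chain, so average cost tracks the load factor while a degenerate hash makes one chain hold all n entries.

```python
class ChainedHashMap:
    """Minimal separate-chaining hash map: average O(1), worst O(n) per op."""

    def __init__(self, capacity=8):
        self._buckets = [[] for _ in range(capacity)]
        self._size = 0

    def _bucket(self, key):
        return self._buckets[hash(key) % len(self._buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                  # update in place
                bucket[i] = (key, value)
                return
        bucket.append((key, value))
        self._size += 1
        if self._size > len(self._buckets):   # load factor > 1: rehash
            self._resize()

    def get(self, key, default=None):
        for k, v in self._bucket(key):    # scan one chain: O(chain length)
            if k == key:
                return v
        return default

    def _resize(self):
        old = [pair for bucket in self._buckets for pair in bucket]
        self._buckets = [[] for _ in range(2 * len(self._buckets))]
        self._size = 0
        for k, v in old:                  # reinsert everything: O(n) rehash
            self.put(k, v)
```

The O(n) rehash is the "resize pathology" from the table; amortized over inserts it stays O(1).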

Binary search tree (unbalanced)

| Operation | Average | Worst (skewed tree) |
|---|---|---|
| search | O(log n) | O(n) |
| insert | O(log n) | O(n) |
| delete | O(log n) | O(n) |

Balanced BSTs (AVL, red-black, B-tree): height O(log n) ⇒ all standard ops worst O(log n).
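A minimal unbalanced BST sketch (illustrative, not from the original page) showing why the worst case is O(n): every operation costs O(h), and inserting keys in sorted order produces a chain with height n − 1.

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Iterative BST insert: O(h), where h = height (O(n) on a skewed tree)."""
    if root is None:
        return Node(key)
    cur = root
    while True:
        if key < cur.key:
            if cur.left is None:
                cur.left = Node(key)
                return root
            cur = cur.left
        else:
            if cur.right is None:
                cur.right = Node(key)
                return root
            cur = cur.right

def search(root, key):
    """Walk one root-to-leaf path: O(h)."""
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root

def height(root):
    return -1 if root is None else 1 + max(height(root.left), height(root.right))
```

Inserting `range(n)` in order yields `height == n - 1`, which is exactly the skew that AVL/red-black rotations prevent.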

Trie (prefix tree)

  • Insert / lookup / delete for a string of length m: O(m). Alphabet size affects the branching factor at each node, not the O(m) bound on the walk.
  • Space: O(total characters across all keys) in the worst case; dense alphabets inflate per-node storage unless a compressed (radix) representation is used.
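A dict-of-dicts trie sketch (an assumed minimal implementation, including the `"$"` end-of-word marker) making the O(m) bound visible: every operation is one loop over the string's characters.

```python
class Trie:
    """Dict-based trie: insert/lookup for a string of length m is O(m)."""

    def __init__(self):
        self.root = {}

    def insert(self, word):
        node = self.root
        for ch in word:                    # one step per character: O(m)
            node = node.setdefault(ch, {})
        node["$"] = True                   # end-of-word marker

    def contains(self, word):
        node = self.root
        for ch in word:
            if ch not in node:
                return False
            node = node[ch]
        return "$" in node                 # full word, not just a prefix

    def has_prefix(self, prefix):
        node = self.root
        for ch in prefix:
            if ch not in node:
                return False
            node = node[ch]
        return True
```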

Union-find (disjoint set union, DSU)

With union by rank (or size) and path compression (or path halving):

| Operation | Amortized per op |
|---|---|
| find | α(n), inverse Ackermann; effectively constant |
| union | α(n), same bound |

Amortization notes (DSU)

  • Path compression flattens trees during find, paying extra work once to make future find cheaper.
  • Union by rank/size keeps trees shallow so compression stays effective.
  • A single operation can still cost O(log n) in analyses without compression; the amortized bound over a sequence of operations is what interviewers mean by "almost constant."
  • Either heuristic alone only guarantees O(log n) amortized; pairing union by rank/size with path compression is what yields α(n).

Space: O(n) for n elements.
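The two heuristics fit in a few lines; this sketch (illustrative, not from the original page) uses union by size with path halving, a common stand-in for full path compression with the same α(n) amortized bound:

```python
class DSU:
    """Disjoint set union: union by size + path halving, amortized ~α(n) per op."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            # Path halving: point x at its grandparent, flattening as we go.
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False                   # already in the same set
        if self.size[ra] < self.size[rb]:  # union by size: attach small to large
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        return True
```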

Graph algorithms

Let V = vertices, E = edges.

| Algorithm | Time | Space |
|---|---|---|
| Floyd–Warshall | O(V^3) | O(V^2) |
| Dijkstra (non-negative weights), binary heap | O((V + E) log V) | O(V) |
| Bellman–Ford | O(V · E) | O(V) |
| Kruskal (MST) | O(E log E) sort + α(V) DSU | O(E) or O(V) |
| Prim (MST), binary heap | O((V + E) log V) | O(V + E) |

Fibonacci heap improves Dijkstra’s theoretical time; binary heap is the usual interview baseline.
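That binary-heap baseline is short enough to write out; the sketch below (assumed, not from the original page) uses `heapq` with lazy deletion, i.e. stale queue entries are skipped on pop instead of decrease-key, giving the O((V + E) log V) bound from the table.

```python
import heapq

def dijkstra(adj, src):
    """Single-source shortest paths for non-negative edge weights.

    adj: dict mapping vertex -> list of (neighbor, weight) pairs.
    Returns dict of shortest distances from src to each reachable vertex.
    """
    dist = {src: 0}
    pq = [(0, src)]                          # (distance, vertex) min-heap
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                         # stale (lazily deleted) entry
        for v, w in adj.get(u, []):
            nd = d + w                       # requires w >= 0
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))  # may leave a stale copy behind
    return dist
```

Lazy deletion means the heap can hold O(E) entries, but each push/pop is still O(log V) up to constant factors.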

Recursion depth vs explicit heap memory

| Aspect | Recursion (call stack) | Iterative + explicit stack/queue |
|---|---|---|
| Depth limit | Language/runtime stack cap (often ~10^4 to 10^6 frames, order of magnitude); deep trees/graphs risk stack overflow | Heap-allocated structures; depth limited by available memory, not a small fixed stack |
| Space per frame/item | Locals + return addresses on the stack | Container growth on the heap (usually a far larger addressable space) |
| Tradeoff | Elegant for DFS/backtracking; watch depth | Slightly more code; safer for very deep paths |

Rule of thumb: depth d recursion costs O(d) stack space; iterative BFS/DFS with queue/stack costs O(width) or O(d) depending on structure. Prefer iterative or increase stack (where allowed) when recursion depth rivals input size.
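For instance, a recursive DFS on a path graph of 10^5 nodes overflows Python's default ~1000-frame limit, while this explicit-stack sketch (illustrative, not from the original page) handles it without touching the call stack:

```python
def dfs_iterative(adj, start):
    """Preorder DFS with a heap-allocated stack: no recursion-depth limit.

    adj: dict mapping vertex -> list of neighbors.
    """
    seen, order, stack = {start}, [], [start]
    while stack:
        u = stack.pop()
        order.append(u)
        # Push in reverse so the first-listed neighbor is popped (visited) first.
        for v in reversed(adj.get(u, [])):
            if v not in seen:              # mark on push: each node enqueued once
                seen.add(v)
                stack.append(v)
    return order
```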


See also: Data structure operations, Pattern recognition cheatsheet.
