Time & Space Complexity — Reference
Self-contained Big-O reference for common algorithms and data structures. Average and worst are called out when they differ.
Sorting
| Algorithm | Time (typical) | Time (worst) | Space | Notes |
|---|---|---|---|---|
| Merge sort | O(n log n) | O(n log n) | O(n) | Stable; predictable; not in-place for array |
| Quicksort | O(n log n) (expected) | O(n^2) | O(log n) stack (avg) / O(n) (worst) | In-place; pivot choice matters (randomized, median-of-three) |
| Heapsort | O(n log n) | O(n log n) | O(1) | In-place; not stable (typical impl) |
n = number of elements.
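To illustrate why pivot choice matters for quicksort, here is a minimal sketch with a randomized pivot (function name is illustrative; this version returns a new list rather than sorting in place, trading the O(log n) stack for O(n) working space):

```python
import random

def quicksort(a):
    """Randomized quicksort: expected O(n log n) comparisons.
    A random pivot makes the O(n^2) worst case unlikely for any fixed input."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    less = [x for x in a if x < pivot]       # strictly smaller than pivot
    equal = [x for x in a if x == pivot]     # pivot duplicates stay together
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```

An in-place partition (Lomuto or Hoare) gives the O(log n) expected stack space quoted in the table.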
Heap operations (binary min/max heap, n items)
| Operation | Time | Space |
|---|---|---|
| insert / push | O(log n) | O(1) |
| extractMin / pop | O(log n) | O(1) |
| peek | O(1) | O(1) |
| buildHeap (heapify all) | O(n) | O(1) |
buildHeap is linear because most nodes are near leaves (tight analysis via level sums).
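These operations map directly onto Python's `heapq` module (a binary min-heap over a plain list); a small sketch:

```python
import heapq

data = [9, 4, 7, 1, 3]
heapq.heapify(data)              # O(n) bottom-up buildHeap
smallest = heapq.heappop(data)   # O(log n) extractMin
heapq.heappush(data, 2)          # O(log n) insert
```

After the pop, `data[0]` is the new minimum (peek is just indexing the root in O(1)).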
Hash map (separate chaining or open addressing)
| Operation | Average | Worst (pathological / many collisions) |
|---|---|---|
| get | O(1) | O(n) |
| put | O(1) | O(n) |
| delete | O(1) | O(n) |
Worst case assumes bad hash, adversarial input, or resize/rehash pathologies; good universal hashing + load factor bound keeps expected time near O(1).
Space: typically O(n) for n keys; trie-like string keys are a different model (see below).
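A minimal separate-chaining sketch makes the average/worst split concrete (class and method names are illustrative; a fixed bucket count is assumed, so there is no resize logic here):

```python
class ChainedHashMap:
    """Toy separate-chaining map. Average O(1) per op with a good hash and
    bounded load factor; O(n) if all keys land in one bucket."""

    def __init__(self, buckets=8):
        self._buckets = [[] for _ in range(buckets)]

    def _bucket(self, key):
        # A bad hash sends every key here to the same list -> O(n) scans.
        return self._buckets[hash(key) % len(self._buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)   # overwrite existing key
                return
        bucket.append((key, value))

    def get(self, key, default=None):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default
```

A production map would also grow the bucket array when the load factor exceeds a threshold, keeping chains short.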
Binary search tree (unbalanced)
| Operation | Average | Worst (skewed tree) |
|---|---|---|
| search | O(log n) | O(n) |
| insert | O(log n) | O(n) |
| delete | O(log n) | O(n) |
Balanced BSTs (AVL, red-black, B-tree): height O(log n) ⇒ all standard ops worst O(log n).
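An unbalanced BST sketch shows where the O(n) worst case comes from: inserting sorted keys produces a linked-list-shaped tree (names are illustrative; no rebalancing is performed):

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Insert without rebalancing: O(h); h degrades to n on sorted input."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    """Iterative search: follows one root-to-leaf path, O(h)."""
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None
```

AVL or red-black rotations would cap h at O(log n) regardless of insertion order.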
Trie (prefix tree)
- Insert / lookup / delete for a string of length m: O(m). Alphabet size affects the branching factor, not the length exponent.
- Space: O(totalChars) in compact tries; can blow up with dense alphabets without compression.
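The O(m) bounds can be seen in a minimal dict-of-dicts trie (an uncompressed sketch; the `"$"` end-of-word marker is an arbitrary choice):

```python
def trie_insert(root, word):
    """Walk/create one node per character: O(m) for a word of length m."""
    node = root
    for ch in word:
        node = node.setdefault(ch, {})
    node["$"] = True   # marks that a word ends here

def trie_contains(root, word):
    """Lookup is the same O(m) walk; fails fast on a missing edge."""
    node = root
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return "$" in node
```

Dict children keep per-node space proportional to edges actually used, unlike a fixed array per alphabet symbol.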
Union-find (disjoint set union, DSU)
With union by rank (or size) and path compression (or path halving):
| Operation | Amortized per op |
|---|---|
| find | inverse Ackermann α(n), effectively constant |
| union | same, α(n) |
Amortization notes (DSU)
- Path compression flattens trees during find, paying extra work once to make future find calls cheaper.
- Union by rank/size keeps trees shallow so compression stays effective.
- Single operations can still be O(log n) in some analyses without compression; the amortized bound is what interviewers mean by "almost constant."
- Path compression alone (without union by rank/size) gives weaker bounds; pairing both heuristics is the standard way to reach α(n).
Space: O(n) for n elements.
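Both heuristics fit in a few lines; a sketch using union by size and path halving (the closure-based interface is just one way to package it):

```python
def make_dsu(n):
    """DSU over elements 0..n-1 with union by size + path halving:
    amortized α(n) per operation, O(n) space."""
    parent = list(range(n))
    size = [1] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving: skip a level
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra == rb:
            return False                    # already in the same set
        if size[ra] < size[rb]:
            ra, rb = rb, ra                 # attach smaller tree under larger
        parent[rb] = ra
        size[ra] += size[rb]
        return True

    return find, union
```

Full path compression (pointing every visited node at the root) gives the same amortized bound; halving is simply shorter to write iteratively.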
Graph algorithms
Let V = vertices, E = edges.
| Algorithm | Time | Space |
|---|---|---|
| Floyd–Warshall | O(V^3) | O(V^2) |
| Dijkstra (non-negative), binary heap | O((V + E) log V) | O(V) |
| Bellman–Ford | O(V * E) | O(V) |
| Kruskal (MST) | O(E log E) sort + α(V) DSU | O(E) or O(V) |
| Prim (MST), binary heap | O((V + E) log V) | O(V + E) |
Fibonacci heap improves Dijkstra’s theoretical time; binary heap is the usual interview baseline.
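The binary-heap baseline for Dijkstra is short with `heapq`, using the lazy-deletion trick of skipping stale queue entries (the adjacency-dict input shape is an assumption of this sketch):

```python
import heapq

def dijkstra(adj, src):
    """Shortest distances from src over non-negative edge weights.
    adj: {u: [(v, w), ...]}. O((V + E) log V) with a binary heap."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                        # stale entry; already improved
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v)) # push duplicate instead of decrease-key
    return dist
```

Pushing duplicates avoids decrease-key (which `heapq` lacks) at the cost of up to O(E) queue entries, which the stated bound already covers.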
Recursion depth vs explicit heap memory
| Aspect | Recursion (call stack) | Iterative + explicit stack/queue |
|---|---|---|
| Depth limit | Language/runtime stack cap (often on the order of 10^4–10^6 frames); deep trees/graphs risk stack overflow | Uses heap-allocated structures; depth limited by available memory, not a small fixed stack |
| Space per frame | locals + return addresses on stack | container growth on heap (often larger total addressable space) |
| Tradeoff | Elegant for DFS/backtracking; watch depth | Slightly more code; safer for very deep paths |
Rule of thumb: depth d recursion costs O(d) stack space; iterative BFS/DFS with queue/stack costs O(width) or O(d) depending on structure. Prefer iterative or increase stack (where allowed) when recursion depth rivals input size.
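The iterative side of the table can be sketched as DFS with an explicit stack, where depth is bounded by heap memory rather than the runtime's call-stack cap:

```python
def dfs_iterative(adj, start):
    """DFS using a heap-allocated list as the stack; no recursion, so a
    path of millions of nodes cannot overflow the call stack."""
    seen, stack, order = {start}, [start], []
    while stack:
        u = stack.pop()
        order.append(u)
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)       # mark on push to avoid duplicates in the stack
                stack.append(v)
    return order
```

Note the visit order differs from the recursive version (neighbors are popped last-in first-out), which is fine for reachability but matters if a specific traversal order is required.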
See also: Data structure operations, Pattern recognition cheatsheet.