
What is time and space complexity, exactly?
Hi there, and welcome! I'm so glad you're here. My name's Ava, and I'll be your guide through Algorithms and Data Structures. A big thanks to everyone who joined our LinkedIn Group. I'm thrilled we're starting this journey together!
Today, we're diving into time and space complexity, an essential topic for understanding how efficient your programs are. Let's break it down step by step.
What is n?
You've probably heard terms like "O(n)," "n log n," or "O(n²)" being tossed around. But what does this mysterious n mean?
When writing programs, n usually refers to the size of the input. It could be:
- The length of an array or string.
- The number of elements in a data structure.
- Any input size your program processes.
For example, if we have a function that multiplies all elements in an array, the size of the array is n:
# Length of the array is "n"
def multiply(arr):
    result = 1
    for a in arr:
        result *= a
    return result
Here, the time it takes to run the function depends on n, the size of the input array.
What is time complexity?
Time complexity measures how the runtime of your program changes as the size of the input (n) grows. Think of it as a way to estimate efficiency. Let's explore a few common types:
O(1): Constant Time Complexity
The best-case scenario! Here, the runtime is independent of the input size. Whether the input has 10 elements or 10 billion, it takes the same amount of time.
Examples of O(1) Operations:
- Accessing an element in an array/list by index
- Getting the length of a list in Python
- Pushing/Popping from the end of a list
- Dictionary (HashMap) lookups and inserts (on average)
- Set operations: adding, removing, checking membership
- Modulo (Remainder) and basic arithmetic operations
def check_cache(cache, target):
    # Checking membership in a hash table is O(1) on average
    if target in cache:
        # Retrieving an element from a hash table is O(1) on average
        return cache[target]
    return None
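Here's a small sketch (the variables items and seen are purely illustrative) showing a few more of the constant-time operations from the list above:
# All of these run in O(1), no matter how large the collection is
items = [3, 1, 4, 1, 5, 9]
seen = {3, 1, 4}

first = items[0]        # index access: O(1)
size = len(items)       # length: O(1)
items.append(2)         # push to the end: O(1) amortized
last = items.pop()      # pop from the end: O(1)
seen.add(6)             # set insert: O(1) on average
found = 5 in seen       # membership check: O(1) on average
remainder = 17 % 5      # modulo / arithmetic: O(1)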
O(log n): Logarithmic Time Complexity
Logarithms grow sloooow. Like, "watching-paint-dry" slow. But that's a good thing when it comes to algorithms!
Here's the deal:
- log(1024) = 10 (in algorithm analysis, log means the base-2 logarithm)
- Square the input? log(1000²) = 2 × log(1000) ≈ 20
- Raise it to the fifth power? log(1000⁵) = 5 × log(1000) ≈ 50
The key takeaway? Even if the input size is gigantic, the number of steps only creeps up. Logarithmic algorithms are like efficient librarians who don't waste time checking every book when looking for yours.
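If you want to see just how slowly that grows, here's a tiny sketch (purely illustrative) that prints the number of halving steps needed for increasingly large inputs:
import math

# Even a billion elements need only about 30 halving steps
for n in [1_000, 1_000_000, 1_000_000_000]:
    print(f"n = {n:>13,} -> about {math.ceil(math.log2(n))} steps")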
Example: Binary Search (The Game of Halves)
Let's say you're searching for 87 in a sorted list of 100 natural numbers. Instead of going one by one like a caveman, you take the "divide and conquer" approach:
- Check the middle (50). Too low!
- Check the middle of the upper half (75). Still too low.
- Middle of 76-100 is 88. Too high!
- Middle of 76-88 is 82. Too low.
- Middle of 83-88 is 85. Still low.
- Middle of 86-88 is 87. Boom! Found it in just 6 steps.
def binary_search(array, start, end, target):
    if start >= end:
        return -1
    middle = (start + end) // 2
    if array[middle] == target:
        return middle
    if array[middle] < target:
        return binary_search(array, middle + 1, end, target)
    return binary_search(array, start, middle, target)
Instead of checking 100 numbers, we checked only 6. That's the magic of O(log n): the bigger the list, the more time it saves you. Each step cuts the remaining work in half.
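If you'd like to try it yourself, here's one way to call the binary_search function above on that list of 100 numbers (the exact values are just an illustration):
numbers = list(range(1, 101))                        # sorted values 1..100
index = binary_search(numbers, 0, len(numbers), 87)
print(index, numbers[index])                         # prints: 86 87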
O(n): Linear Time Complexity
With linear complexity, the runtime grows in direct proportion to the input size. If the input size doubles, the runtime roughly doubles.
Example 1: Summing an Array
If you're adding up all elements in an array, your program needs to visit each element one by one. This results in O(n) time complexity:
def sum_array(arr):
    total = 0
    # Loop runs once for each element in arr
    for num in arr:
        total += num
    return total
Example 2: Searching in an Unsorted Array
Suppose you're searching for a specific value in an unsorted array. In the worst case, you'll have to check every element until you find it (or confirm it's not there).
def search(arr, target):
    # Loop runs n times if the target isn't found
    for num in arr:
        if num == target:
            return True
    return False
O(n log n): "An Log An", A Little Extra Work
You already know that logarithmic time is fast, like a ninja slicing a problem in half at every step. Now, O(n log n) just means we take that speed and multiply it by n. It's still much better than O(n²), but a bit more work than a simple O(n).
How Fast (or Slow) Is It?
For n = 10⁹, we get:
n log n = 10⁹ × log 10⁹
Since log(10⁹) = 3 × log(10³) ≈ 3 × 10 = 30, we get:
10⁹ × log 10⁹ ≈ 30 × 10⁹
That's 30 billion steps, not exactly "instant," but way better than O(n²), which would be 10¹⁸ steps (yikes!).
Where Do We See O(n log n)?
This complexity is common in efficient sorting algorithms, including:
- Merge Sort: Divide, conquer, and merge (see the sketch after this list).
- Heap Sort: Uses a priority queue (a fancy way of saying "organized chaos").
- Quick Sort: When it behaves (i.e., in its average case).
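To give you a feel for where the n log n comes from, here's a minimal merge sort sketch (not production code): the list is split in half about log n times, and each level of splitting does O(n) work merging the halves back together.
def merge_sort(arr):
    # Base case: a list of 0 or 1 elements is already sorted
    if len(arr) <= 1:
        return arr
    middle = len(arr) // 2
    left = merge_sort(arr[:middle])     # splitting happens about log n times...
    right = merge_sort(arr[middle:])
    # ...and merging the halves back is O(n) work at each level
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged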
Example: Python's Built-in Sort
Good news: You donât need to write your own O(n log n) sorting algorithm! Python's built-in sort function already uses Timsort, which is O(n log n) in most cases.
# Using Timsort under the hood!
sorted(array)
So next time you need to sort a list fast, just let the language do the heavy lifting, because O(n log n) is as good as it gets for general-purpose sorting!
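For instance, both sorted() and list.sort() use Timsort under the hood, and passing a key function doesn't change the O(n log n) behavior (the word list below is just an example):
words = ["banana", "fig", "cherry", "kiwi"]
print(sorted(words))             # alphabetical: ['banana', 'cherry', 'fig', 'kiwi']
print(sorted(words, key=len))    # by length: ['fig', 'kiwi', 'banana', 'cherry']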
O(n²): Quadratic Time Complexity
Quadratic complexity arises when your program performs nested operations, for example, when one loop runs inside another. For an input of size n, the number of operations grows as n²: double the input and the work roughly quadruples.
Example: Processing a Matrix
Imagine summing all the elements in a 2D matrix.
def sum_matrix(matrix):
    total = 0
    # Outer loop runs n times (rows)
    for i in range(len(matrix)):
        # Inner loop runs n times (columns)
        for j in range(len(matrix[0])):
            total += matrix[i][j]
    return total
Here, if the matrix size is n × n, the total number of iterations is n × n = n².
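Another classic O(n²) pattern (shown here just as a sketch) is comparing every pair of elements in a single list, for example to check whether it contains any duplicates:
def has_duplicates(arr):
    n = len(arr)
    # Nested loops compare every pair: roughly n * n / 2 checks, still O(n^2)
    for i in range(n):
        for j in range(i + 1, n):
            if arr[i] == arr[j]:
                return True
    return False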
O(2ⁿ): Exponential Time Complexity
Exponential time complexity means the runtime of your program doubles (or grows even faster) with every additional input element.
For n = 10 you would do 2¹⁰ = 1,024 iterations. For n = 100 it's 2¹⁰⁰ = 1,267,650,600,228,229,401,496,703,205,376!
Exponential algorithms are highly inefficient for large inputs, making them impractical except for very small datasets.
Example: The Fibonacci Sequence (Naive Recursion)
A classic example is calculating the nth Fibonacci number using a naive recursive function. The function calls itself twice for each input, resulting in exponential growth:
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)
Here's how you can imagine the call stack:

Here, the number of recursive calls grows exponentially, roughly as 2ⁿ, making it painfully slow for large n.
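If you're curious, here's a small variant (purely for illustration) that counts how many calls the naive version makes; the call count explodes as n grows:
def fibonacci_counted(n):
    # Returns (fib(n), number of recursive calls made)
    if n <= 1:
        return n, 1
    a, calls_a = fibonacci_counted(n - 1)
    b, calls_b = fibonacci_counted(n - 2)
    return a + b, calls_a + calls_b + 1

print(fibonacci_counted(10))   # (55, 177)
print(fibonacci_counted(20))   # (6765, 21891)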
Constants in Complexity: Why O(5n) = O(n)
When discussing time complexity, constants don't matter. O(n), O(5n), and O(500n) are all considered O(n) because we care about the growth rate, not the exact runtime.
That said, constants do matter in real life, but for now, we're focusing on the big picture.
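For example, a function that makes a few separate passes over its input is still O(n), because 3n is just n with a constant in front (the function below is only a sketch and assumes a non-empty list):
def stats(arr):
    # Three passes over the data: roughly 3n operations, but still O(n)
    total = sum(arr)
    smallest = min(arr)
    largest = max(arr)
    return total, smallest, largest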
What is space complexity?
Space complexity measures how much memory your program uses as the input grows. There are two main contributors to space complexity:
1. Variables and Data Structures
If you create an array, list, or matrix with n elements, that's O(n) space complexity.
def find_positive(arr):
    n = len(arr)
    # Allocating a new list of n booleans: O(n) extra space
    positive = [arr[i] > 0 for i in range(n)]
    return positive
2. The Recursive Call Stack
Recursion requires additional memory to keep track of function calls. Each time a function calls itself, the program uses space to store:
- Where to return after the call.
- What arguments were passed.
The deeper the recursion, the more memory it consumes. That's why, if you make too many recursive calls in Python, you'll get a "maximum recursion depth exceeded" error.
Illustration:

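As a small sketch (nothing beyond plain Python is assumed), here's a recursive sum: it allocates no new lists, yet the call stack grows to n frames, so the extra space is O(n), and a long enough list will trigger that recursion-depth error.
def recursive_sum(arr, i=0):
    # Each call adds a stack frame; for a list of length n the stack
    # reaches depth n, so the extra space used is O(n)
    if i == len(arr):
        return 0
    return arr[i] + recursive_sum(arr, i + 1)

print(recursive_sum([1, 2, 3, 4]))   # 10
# recursive_sum(list(range(100_000))) would exceed the recursion limit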
O(log n): Logarithmic Space Complexity
Could it be? Yes! But unlike time complexity, logarithmic space is rare, because we usually allocate memory linearly, O(n), rather than shrinking it like a logarithm.
However, recursive logarithmic algorithms are an exception! They don't store any arrays of their own but instead use memory only for the recursive call stack, which grows logarithmically.
Example: Binary Search & Logarithmic Space
Remember our binary search from before? It's not just O(log n) time, it's also O(log n) space, but only in its recursive form!
def binary_search(array, start, end, target):
    if start >= end:
        return -1
    middle = (start + end) // 2
    if array[middle] == target:
        return middle
    if array[middle] < target:
        return binary_search(array, middle + 1, end, target)
    return binary_search(array, start, middle, target)
Why O(log n) Space?
Each function call adds a frame to the call stack, storing variables like start, end, and middle.
Since each recursive call splits the problem in half, we only go log(n) levels deep before stopping.
Example: Searching in 100 Numbers
- First call: search(1, 100)
- Second call: search(51, 100)
- Third call: search(76, 100)
- Fourth call: search(76, 88)
- Fifth call: search(83, 88)
- Sixth call: search(86, 88)
- Found it!
Total recursive calls: about log(100) ≈ 6
Each call uses a bit of memory, but once a call finishes, it disappears from the stack. So at most, the stack holds O(log n) calls at a time.
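For contrast, and only as a sketch (it's not part of the lesson's required code), here's an iterative version of the same search: it reuses a handful of variables instead of building a call stack, so its extra space is O(1).
def binary_search_iterative(array, target):
    start, end = 0, len(array)
    # No recursion: only start, end, and middle are stored, so O(1) extra space
    while start < end:
        middle = (start + end) // 2
        if array[middle] == target:
            return middle
        if array[middle] < target:
            start = middle + 1
        else:
            end = middle
    return -1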
Want a handy reference?
You can download this Time and Space Complexity Poster to keep the concepts fresh:
Why Does Complexity Matter?
You might be thinking: "Computers are so fast these days, why should I care about complexity?"
Great question! And it's one we'll answer in the next lesson. I'll teach you how competitive programmers predict runtimes using simple tricks. It's a game-changer for solving problems efficiently.
Stay tuned, and I'll see you next week!