Time and space complexity poster

What is time and space complexity, exactly?

Hi there! Welcome—I'm so glad you're here. My name's Ava, and I’ll be your guide through Algorithms and Data Structures. A big thanks to everyone who joined our LinkedIn group. I’m thrilled we’re starting this journey together!

Today, we’re diving into time and space complexity—an essential topic for understanding how efficient your programs are. Let’s break it down step by step.

What is n?

You’ve probably heard terms like “O(n),” “n log n,” or “O(n²)” being tossed around. But what does this mysterious n mean?

When writing programs, n usually refers to the size of the input. It could be:

  • The length of an array or string.
  • The number of elements in a data structure.
  • Any input size your program processes.

For example, if we have a function that multiplies all elements in an array, the size of the array is n:

# Length of the array is "n"
def multiply(arr): 
    result = 1
    for a in arr:
        result *= a
    return result

Here, the time it takes to run the function depends on n, the size of the input array.
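
A quick usage sketch:

print(multiply([2, 3, 4]))  # 24, and the loop ran n = 3 times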

What is time complexity?

Time complexity measures how the runtime of your program changes as the size of the input (n) grows. Think of it as a way to estimate efficiency. Let’s explore a few common types:

O(1): Constant Time Complexity

The best-case scenario! Here, the runtime is independent of the input size. Whether the input has 10 elements or 10 billion, it takes the same amount of time.

Examples of O(1) Operations:

  • Accessing an element in an array/list by index
  • Getting the length of a list in Python
  • Pushing/Popping from the end of a list
  • Dictionary (HashMap) lookups and inserts (on average)
  • Set operations: adding, removing, checking membership
  • Modulo (remainder) and basic arithmetic operations

For example, looking up and retrieving a value from a dictionary:

def check_cache(cache, target):
    # Searching in a hashtable is amortized O(1)
    if target in cache:
        # Retrieving element from a hashtable is amortized O(1)
        return cache[target]
    return None
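
A quick usage sketch (the cache contents below are just an illustration):

cache = {"user_42": {"name": "Ava"}}
print(check_cache(cache, "user_42"))  # {'name': 'Ava'}, found in O(1) on average
print(check_cache(cache, "user_99"))  # None, also O(1) on average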

O(log n): Logarithmic time complexity

Logarithms grow sloooow. Like, "watching-paint-dry" slow. But that’s a good thing when it comes to algorithms!

Here’s the deal:

  • log(1024) ≈ 10
  • Square the input? log(1000²) = 2 × log(1000) ≈ 20
  • Raise it to the fifth power? log(1000⁵) = 5 × log(1000) ≈ 50

The key takeaway? Even if the input size is gigantic, the number of steps only creeps up. Logarithmic algorithms are like efficient librarians who don’t waste time checking every book when looking for yours.

Example: Binary Search – The Game of Halves

Let’s say you’re searching for 87 in a sorted list of 100 natural numbers. Instead of going one by one like a caveman, you take the "divide and conquer" approach:

  1. Check the middle (50). Too low!
  2. Check the middle of the upper half (75). Still too low.
  3. Middle of 76-100 is 88. Too high!
  4. Middle of 76-88 is 82. Too low.
  5. Middle of 83-88 is 85. Still low.
  6. Middle of 86-88 is 87. Boom! Found it in just 6 steps.

Here's a simple recursive implementation (the end index is exclusive):

def binary_search(array, start, end, target):
    if start >= end:
        return -1
    middle = (start + end) // 2
    if array[middle] == target:
        return middle
    if array[middle] < target:
        return binary_search(array, middle + 1, end, target)
    return binary_search(array, start, middle, target)
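
For instance, searching for 87 among the numbers 1 through 100 (a usage sketch):

numbers = list(range(1, 101))
index = binary_search(numbers, 0, len(numbers), 87)
print(index)  # 86, the zero-based position of the value 87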

Instead of checking 100 numbers, we checked only 6. That’s the magic of O(log n)—the bigger the list, the more it saves you time. Each step cuts the remaining work in half.

O(n): Linear Time Complexity

With linear complexity, the runtime grows in direct proportion to the input size. If the input size doubles, the runtime roughly doubles.

Example 1: Summing an Array

If you’re adding up all elements in an array, your program needs to visit each element one by one. This results in O(n) time complexity:

def sum_array(arr):
    total = 0
    # Loop runs once for each element in arr
    for num in arr:
        total += num
    return total

Example 2: Searching in an Unsorted Array

Suppose you’re searching for a specific value in an unsorted array. In the worst case, you’ll have to check every element until you find it (or confirm it’s not there).

def search(arr, target):
    # Loop runs n times if the target isn’t found
    for num in arr:
        if num == target:
            return True
    return False

O(n log n): "En Log En" – A Little Extra Work

You already know that logarithmic time is fast—like a ninja slicing a problem in half at every step. Now, O(n log n) just means we take that speed and multiply it by n. It’s still much better than O(n²), but a bit more work than a simple O(n).

How Fast (or Slow) Is It?

For n = 10⁹, we get:

n log n = 10⁹ × log(10⁹)

Since log(10⁹) = 3 × log(10³) ≈ 30, we get:

10⁹ × log(10⁹) ≈ 30 × 10⁹

That’s 30 billion steps—not exactly "instant," but way better than O(n²), which would be 10¹⁸ steps (yikes!).

Where Do We See O(n log n)?

This complexity is common in efficient sorting algorithms, including:

  1. Merge Sort – Divide, conquer, and merge.
  2. Heap Sort – Uses a priority queue (fancy way of saying "organized chaos").
  3. Quick Sort – When it behaves (i.e., in its average case).

Example: Python's Built-In Sort

Good news: You don’t need to write your own O(n log n) sorting algorithm! Python's built-in sort function already uses Timsort, which is O(n log n) in most cases.

# Using Timsort under the hood!
sorted(array)
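
A quick sketch of both built-in options (the list contents are just an example):

array = [5, 2, 9, 1, 7]
print(sorted(array))  # [1, 2, 5, 7, 9], returns a new sorted list
array.sort()          # sorts in place, also Timsort, O(n log n)
print(array)          # [1, 2, 5, 7, 9]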

So next time you need to sort a list fast, just let the language do the heavy lifting—because O(n log n) is as good as it gets for general-purpose sorting!

O(n²): Quadratic Time Complexity

Quadratic complexity arises when your program performs nested operations—for example, when one loop is inside another. If the input size is n, the number of operations grows on the order of n².

Example: Processing a Matrix

Imagine summing all the elements in a 2D matrix.

def sum_matrix(matrix):
    total = 0
    # Outer loop runs n times (rows)
    for i in range(len(matrix)):
        # Inner loop runs n times (columns)
        for j in range(len(matrix[0])):
            total += matrix[i][j]
    return total

Here, if the matrix size is n × n, the total number of iterations is n × n = n².
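
Another classic O(n²) pattern is comparing every pair of elements. Here's a small sketch (the function is just an illustration):

def has_duplicate(arr):
    n = len(arr)
    # Two nested loops over the same array: roughly n * n comparisons, O(n^2)
    for i in range(n):
        for j in range(i + 1, n):
            if arr[i] == arr[j]:
                return True
    return False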

O(2ⁿ): Exponential Time Complexity

Exponential time complexity means the runtime of your program doubles (or grows even faster) with every additional input element.

For n = 10, you would do 1,024 iterations. For n = 100, it's 1,267,650,600,228,229,401,496,703,205,376!

Exponential algorithms are highly inefficient for large inputs, making them impractical except for very small datasets.

Example: The Fibonacci Sequence (Naive Recursion)

A classic example is calculating the nth Fibonacci number using a naive recursive function. The function calls itself twice for each input, resulting in exponential growth:

def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

Here's how you can imagine the call stack:

Depth of stack example

Here, the number of recursive calls grows exponentially as 2ⁿ, making it painfully slow for large n.
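
If you want to see the blow-up for yourself, here's a small counting sketch (the counter and function name are mine, just for illustration):

call_count = 0

def fibonacci_counted(n):
    global call_count
    call_count += 1
    if n <= 1:
        return n
    return fibonacci_counted(n - 1) + fibonacci_counted(n - 2)

fibonacci_counted(20)
print(call_count)  # 21891 calls just for n = 20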

Constants in Complexity: Why O(5n) = O(n)

When discussing time complexity, constants don’t matter. O(n), O(5n), and O(500n) are all considered O(n) because we care about the growth rate, not the exact runtime.

That said, constants do matter in real life, but for now, we’re focusing on the big picture.
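
For example, this sketch walks the array five separate times. The runtime is about 5n steps, but the growth rate is still linear, so we call it O(n):

def count_stats(arr):
    # Five separate passes over the array: ~5n operations, still O(n)
    total = sum(arr)
    smallest = min(arr)
    largest = max(arr)
    positives = sum(1 for x in arr if x > 0)
    negatives = sum(1 for x in arr if x < 0)
    return total, smallest, largest, positives, negatives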

What is space complexity?

Space complexity measures how much memory your program uses as the input grows. There are two main contributors to space complexity:

  1. Variables and Data Structures

    If you create an array, list, or matrix with n elements, that’s O(n) space complexity.

def find_positive(arr):
    n = len(arr)
    # Allocating linear space: O(n)
    positive = [arr[i] > 0 for i in range(n)]
    return positive

  2. The Recursive Call Stack

    Recursion requires additional memory to keep track of function calls. Each time a function calls itself, the program uses space to store:

    • Where to return after the call.
    • What arguments were passed.

The deeper the recursion, the more memory it consumes. That's why, if you make too many recursive calls in Python, you'll get a RecursionError: maximum recursion depth exceeded.
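
Here's a tiny sketch of that limit (the function is just an illustration):

import sys

def count_down(n):
    # Each call adds a new frame to the call stack: O(n) space
    if n == 0:
        return 0
    return count_down(n - 1)

print(sys.getrecursionlimit())  # usually 1000 by default
# count_down(10**6)  # would raise RecursionError: maximum recursion depth exceeded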

Illustration:

Depth of stack example

O(log n) - Logarithmic space complexity

Could it be? Yes! But unlike time complexity, logarithmic space is rare, because we usually allocate memory linearly, O(n), rather than shrinking it like a logarithm.

However, recursive logarithmic algorithms are an exception! They don't allocate any extra arrays; they use memory only for the recursive call stack, which grows logarithmically.

Example: Binary Search & Logarithmic Space

Remember our binary search from before? It’s not just O(log n) time, it’s also O(log n) space—but only in its recursive form!

def binary_search(array, start, end, target):
    if start >= end:
        return -1
    middle = (start + end) // 2
    if array[middle] == target:
        return middle
    if array[middle] < target:
        return binary_search(array, middle + 1, end, target)
    return binary_search(array, start, middle, target)

Why O(log n) Space?

Each function call adds a frame to the call stack, storing variables like start, end, and middle. Since each recursive call splits the problem in half, we only go log(n) levels deep before stopping.

Example: Searching in 100 Numbers

  • First call: search(1, 100)
  • Second call: search(51, 100)
  • Third call: search(76, 100)
  • Fourth call: search(76, 88)
  • Fifth call: search(83, 88)
  • Sixth call: search(86, 88)
  • Found it! 🎉

Total recursive calls: ~log(100) ≈ 6

Each call uses a bit of memory, but once a call finishes, it disappears from the stack. So at most, the stack holds O(log n) calls at a time.
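
For contrast, here's a sketch of an iterative version. Because it keeps only a few variables instead of a growing call stack, its space complexity drops to O(1):

def binary_search_iterative(array, target):
    start, end = 0, len(array)
    # Only a handful of variables, no call stack growth: O(1) extra space
    while start < end:
        middle = (start + end) // 2
        if array[middle] == target:
            return middle
        if array[middle] < target:
            start = middle + 1
        else:
            end = middle
    return -1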

Want a handy reference?

You can download this Time and Space Complexity Poster to keep the concepts fresh:

Download original

Why Does Complexity Matter?

You might be thinking: “Computers are so fast these days—why should I care about complexity?”

Great question! And it’s one we’ll answer in the next lesson. I’ll teach you how competitive programmers predict runtimes using simple tricks. It’s a game-changer for solving problems efficiently.

Stay tuned, and I’ll see you next week!

If you want to refresh concepts quickly, watch a video: