Big O notation is a way to describe how the performance of an algorithm changes as the size of the input grows. It helps us measure both time complexity (how long an algorithm takes to run) and space complexity (how much memory it uses).
We generally focus on time complexity, as time is usually the limiting factor. For some algorithms, however, especially those working with very large data sets, space complexity becomes the limiting factor instead. Most of the algorithms we will look at do not run into that problem.
Think of Big O as a way to understand how “efficient” an algorithm is, especially as inputs get very large.
Because we are looking at inputs to an algorithm, notice that we won’t worry about the programming language, the hardware you are running on, or other such factors. Algorithms are completely agnostic to those things.
How Do We Measure Big O?
We measure Big O by focusing on the number of operations an algorithm performs relative to the size of its input (often called n). Here is how:
- Input Size (n): The number of elements an algorithm processes.
- Example: If sorting a list of 10 numbers, n = 10.
We don’t list the specific number of elements in the Big O size. This is because the number of elements is expected to change.
- Count Key Operations: Identify the dominant operation (e.g., comparisons, assignments).
- Example: For a loop running n times, there are roughly n operations. We will not worry about how many steps are within the loop body.
- Growth Rate: Determine how the number of operations increases as n grows. Ignore constants and focus on the general trend.
- Example: Doubling the input size might double the number of operations (linear growth) or quadruple them (quadratic growth).
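The counting step above can be sketched in code. This is a minimal illustration (the function name and setup are my own, not from the original): the dominant operation in a linear search is the comparison inside the loop, which runs at most once per element.

```python
def count_comparisons(items, target):
    """Linear search that also counts its dominant operation.

    The comparison inside the loop runs at most n times,
    so the growth rate is linear: O(n).
    """
    comparisons = 0
    for item in items:       # the loop body executes up to n times
        comparisons += 1
        if item == target:   # the dominant operation: one comparison
            break
    return comparisons
```

Searching a 10-element list for a missing value performs 10 comparisons; doubling the input to 20 elements doubles the work to 20 comparisons, which is exactly the linear trend described above.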
Why Is Big O Important?
- Scalability: It tells us how an algorithm will perform with very large inputs.
- Example: A social media platform needs to know if its search algorithm will handle millions of users efficiently.
- Comparison: It helps compare algorithms to choose the best one for the job.
- Example: Bubble Sort has worse performance than Merge Sort for large datasets.
- Bottlenecks: It reveals inefficiencies in your code.
Often, tests with small data sets will not reveal a potential bottleneck. I’ve run two algorithms of different Big O types with 10,000 input elements each. The two algorithms showed no difference in speed to the millisecond. However, when 10,000,000 elements were given to them, there was a huge difference in how long they took.
This is why it is so important to understand the difference between algorithm types.
What Doesn’t Affect Big O?
- Constants and Small Inputs: Big O ignores constant factors because they don’t affect growth rates for large inputs.
- Example: An algorithm taking 2n steps is still O(n). Likewise, you won’t see O(n + 5), as the 5 is a constant.
- Minor Optimizations: Micro-optimizations, like removing one line of code, don’t change the growth rate. We are not counting lines of code or a specific number of operations; we are considering how many times a block of code runs. Whether that block contains 5 operations or 20 does not matter to us.
- Specific Hardware or Programming Language: Big O is about the algorithm, not the system it runs on.
Remember, algorithms are platform agnostic. While a faster language or better hardware will improve performance, a better algorithm wins in the end, often negating any improvements in hardware.
People will often argue that a faster processor, more RAM, etc., will handle any problem. This is true to an extent. For example, moving from a 1GHz machine to a 3GHz machine may appear to have a large impact on performance. However, no matter how much money you throw at the problem, you cannot buy a 10GHz machine, at least not right now.
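To make the “constants don’t matter” point concrete, here is a small sketch (the function is my own example, not from the original). It makes two separate passes over the data, roughly 2n operations, yet it is still O(n), because the constant factor 2 is ignored.

```python
def min_and_max(values):
    """Find the smallest and largest value using two passes.

    Two loops over n elements is roughly 2n operations,
    but Big O drops the constant: this is still O(n).
    """
    lo = values[0]
    for v in values:      # first pass: n operations
        if v < lo:
            lo = v
    hi = values[0]
    for v in values:      # second pass: another n operations
        if v > hi:
            hi = v
    return lo, hi
```

Rewriting this as a single loop would halve the work, but it would not change the Big O classification, which is exactly why such micro-optimizations don’t affect the growth rate.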
How Do You Measure Big O Sizes?
There are a couple of ways to determine your Big O size for your algorithm.
First, you can measure it against a variety of inputs and plot the times. This gives you a visual representation you can match against common Big O curves. However, this is not always accurate and can cause its own issues, especially if you don’t have large data sets to test with, and/or you are dealing with a time-intensive operation.
Second, you can work through a mathematical process to find the Big O. This is possible, but it takes both time and skill. In many cases, getting something working is more important, and we only measure if we find we need to optimize. In my years of experience, I’ve never sat down with a team to discuss Big O. Instead, we looked at bottleneck graphs, resource limitations, and what was on our task list to complete.
Finally, and this I find to be the most practical, you can look at common Big O sizes (listed below) and see what type of algorithm you’re using. I know a lot of divide-and-conquer algorithms are either logarithmic or a variation like O(n log n), I know nested loops often create a quadratic algorithm, and so on.
Therefore, I can get a good estimate quickly if I know the types/styles of algorithms that I’m using. This is why we study this information; it is often good enough for a software engineer.
Common Big O Sizes
Here are some common time complexities, ranked from fastest to slowest:
- O(1) – Constant Time: The algorithm always takes the same amount of time, regardless of input size.
- Example: Accessing an element in an array by its index.
- O(log n) – Logarithmic Time: The algorithm’s time grows very slowly as input size increases.
- Example: Binary Search.
- O(n) – Linear Time: The time grows directly in proportion to the input size.
- Example: Iterating through a list.
- O(n log n): Common for efficient sorting algorithms.
- Example: Merge Sort, Quick Sort.
- O(n²) – Quadratic Time: Time grows as the square of the input size. This is common in nested loops.
- Example: Bubble Sort, Selection Sort.
- O(2ⁿ) – Exponential Time: Time doubles with each additional input element. This is very slow and often impractical.
- Example: Solving the Traveling Salesman Problem using brute force.
- O(n!) – Factorial Time: Grows even faster than exponential time. Often involves generating all permutations of a set.
- Example: Solving puzzles like the N-Queens problem.
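Two of the classes above can be illustrated side by side. This sketch (my own example code, in the spirit of the list) contrasts binary search, whose divide-and-conquer halving gives O(log n), with a naive duplicate check, whose nested loops give O(n²):

```python
def binary_search(sorted_items, target):
    """O(log n): each comparison halves the remaining search space."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid            # found: return its index
        elif sorted_items[mid] < target:
            lo = mid + 1          # discard the lower half
        else:
            hi = mid - 1          # discard the upper half
    return -1                     # not present

def has_duplicate(items):
    """O(n^2): the nested loops compare every pair of elements."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```

For a million elements, the binary search needs about 20 comparisons, while the nested-loop check needs on the order of half a trillion, which is exactly the gap the ranking above describes.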
Aim for lower complexity and better efficiency. Unfortunately, however, some algorithms simply cannot be rewritten to be more efficient. This is a challenge of Computer Science, especially on its theoretical side.
What is Big O Notation? was originally found on Access 2 Learn