🅾️Big O
Big O notation 📏 is used to describe the efficiency 🚀 of algorithms 🤖. It represents the upper bound 🔼 on the runtime ⏱️ or space complexity 📦 of an algorithm as the input size 📊 grows large.
Time complexity: ⏱️ It measures how the running time of an algorithm increases with the size of the input.
Space complexity: 🧠 It measures the amount of additional memory space required by an algorithm to solve a problem.
Asymptotic analysis: 📊 Big O notation simplifies the analysis by considering the dominant term with the highest growth rate and ignoring constant factors.
O(1) - Constant time: ⏲️ Algorithms with constant time complexity have a consistent running time regardless of the input size.
O(log n) - Logarithmic time: 🔍 Algorithms with logarithmic time complexity have running times that increase slowly as the input size grows.
O(n) - Linear time: 🚶 Algorithms with linear time complexity have running times directly proportional to the input size.
O(n log n) - Linearithmic time: 📈🔍 Algorithms with linearithmic time complexity have running times that grow only slightly faster than linear time. Efficient sorting algorithms such as merge sort run in O(n log n).
O(n^2) - Quadratic time: 🚀 Algorithms with quadratic time complexity have running times that increase with the square of the input size.
O(2^n) - Exponential time: 🚀🚀 Algorithms with exponential time complexity have running times that grow exponentially with the input size.
O(n!) - Factorial time: 🚀🚀🚀 Algorithms with factorial time complexity have running times that grow factorially with the input size.
Best case, worst case, and average case: ⬆️⬇️🔃 Big O notation is most often used to describe the worst-case scenario of an algorithm, but the best and average cases are also important to consider.
Algorithmic efficiency: 💪 Big O notation provides a way to compare the efficiency of different algorithms.
Time complexity analysis: ⏱️🔍 To determine the time complexity of an algorithm, analyze how the number of operations it performs scales with the input size.
Space complexity analysis: 🧠🔍 To determine the space complexity of an algorithm, consider the additional memory required and how it scales with the input size.
When analyzing the time complexity of an algorithm using Big O notation, we can apply certain rules to simplify and focus on the most significant aspects. Here are two important rules to keep in mind:
Dropping Constants 🚫
When analyzing the time complexity, we can drop the constant factors.
Example:
O(3n) can be simplified to O(n)
O(2n^2) can be simplified to O(n^2)
The reason for dropping constants is that as the input size grows larger, the impact of constant factors becomes less significant.
Focusing on the Most Significant Term 👀
In many cases, an algorithm's time complexity is determined by the most significant term.
Example:
If an algorithm has a time complexity of O(n^2 + n), we focus on the term with the highest power, which is n^2. Thus, the time complexity is O(n^2).
If an algorithm has a time complexity of O(5n^3 + 10n^2), we focus on the term with the highest power, which is n^3. Thus, the time complexity is O(n^3).
By focusing on the most significant term, we capture the dominant factor that influences the growth rate of the algorithm as the input size increases.
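Both rules can be seen in a small sketch. The method below (hypothetical, written for this illustration) performs n² operations in a nested loop plus n operations in a single loop, so its total work is n² + n; dropping the lower-order term gives O(n²):

```java
public class DominantTermExample {
    // Performs n*n operations in a nested loop, then n more in a single loop.
    // Total: n^2 + n, which simplifies to O(n^2).
    static int operations(int n) {
        int ops = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                ops++; // n^2 operations from the nested loop
            }
        }
        for (int i = 0; i < n; i++) {
            ops++; // plus n operations from the single loop
        }
        return ops;
    }

    public static void main(String[] args) {
        System.out.println(operations(100)); // prints 10100: dominated by the n^2 term
    }
}
```

For n = 100, the nested loop contributes 10,000 of the 10,100 operations, which is why the n² term alone determines the growth rate.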
O(1) - Constant Time ⏲️
The runtime or space requirements of the algorithm remain constant, regardless of the input size.
Example:
🚀 This loop will always execute exactly 10 times, regardless of the input size.
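A minimal Java sketch of such a loop (the method name and the fixed bound of 10 are illustrative):

```java
public class ConstantTimeExample {
    // Runs a fixed number of iterations no matter how large the input is.
    static int fixedIterations(int[] input) {
        int count = 0;
        for (int i = 0; i < 10; i++) { // always exactly 10 iterations: O(1)
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // Same work whether the input holds 5 elements or 1000.
        System.out.println(fixedIterations(new int[1000])); // prints 10
    }
}
```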
O(log n) - Logarithmic Time 🔍
The runtime or space requirements grow logarithmically as the input size increases.
Example:
🔍 This loop doubles the value of i in each iteration. It will execute approximately log n times.
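A Java sketch of a doubling loop (the method name is illustrative; it counts how many doublings it takes i to reach n, roughly log₂ n):

```java
public class LogTimeExample {
    // Counts how many times i can double from 1 before reaching n.
    static int doublingSteps(int n) {
        int steps = 0;
        for (long i = 1; i < n; i *= 2) { // i doubles each iteration: O(log n)
            steps++;
        }
        return steps;
    }

    public static void main(String[] args) {
        System.out.println(doublingSteps(16));   // prints 4  (1 -> 2 -> 4 -> 8 -> 16)
        System.out.println(doublingSteps(1024)); // prints 10 (log2 of 1024)
    }
}
```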
O(n) - Linear Time 🚶
The runtime or space requirements grow linearly with the input size.
Example:
🐌 This loop will execute n times, directly proportional to the input size.
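A Java sketch of a linear loop (the method name is illustrative; it simply counts one iteration per element):

```java
public class LinearTimeExample {
    // Visits each element exactly once: n iterations for an input of length n.
    static int countIterations(int[] nums) {
        int iterations = 0;
        for (int i = 0; i < nums.length; i++) { // n iterations: O(n)
            iterations++;
        }
        return iterations;
    }

    public static void main(String[] args) {
        System.out.println(countIterations(new int[]{1, 2, 3, 4, 5})); // prints 5
    }
}
```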
O(n^2) - Quadratic Time 🐢🐢
The runtime or space requirements grow quadratically with the input size.
Example:
🐢🐢 This loop nests another loop inside it, resulting in n iterations of the inner loop for each of the n iterations of the outer loop, or n^2 iterations in total.
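A Java sketch of such nested loops (the method name is illustrative; it counts the total number of inner-loop executions):

```java
public class QuadraticTimeExample {
    // The inner loop runs n times for each of the outer loop's n iterations.
    static int innerLoopExecutions(int n) {
        int count = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) { // n inner iterations per outer iteration
                count++;
            }
        }
        return count; // n * n total: O(n^2)
    }

    public static void main(String[] args) {
        System.out.println(innerLoopExecutions(4)); // prints 16
    }
}
```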
Let's consider a simple Java method that sums up all the numbers in an array:
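The method might look like this (a sketch; the class name is an assumption, while the sum variable is the one discussed below):

```java
public class SumExample {
    // Sums all elements of the array. Only one extra variable (sum) is
    // allocated, no matter how large nums is: O(1) space.
    static int sum(int[] nums) {
        int sum = 0;
        for (int num : nums) {
            sum += num;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[]{1, 2, 3, 4})); // prints 10
    }
}
```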
In this code, the space complexity is O(1) because the amount of memory used remains constant, regardless of the input size. The only memory required is for the sum variable, represented by a single 📦 box.
Now, let's consider a different scenario where we create a new array to store the squared values of the input array:
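A sketch of that method (the class and method names are assumptions; squared and nums are the arrays discussed below):

```java
public class SquaredExample {
    // Allocates a new array the same length as the input: O(n) extra space.
    static int[] square(int[] nums) {
        int[] squared = new int[nums.length]; // n extra 📦 boxes of memory
        for (int i = 0; i < nums.length; i++) {
            squared[i] = nums[i] * nums[i];
        }
        return squared;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(square(new int[]{1, 2, 3}))); // prints [1, 4, 9]
    }
}
```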
In this code, the space complexity is O(n) because the memory required grows linearly with the input size. The array squared requires the same amount of memory as the input array nums, represented by multiple 📦 boxes stacked next to each other.