Big O
Big O notation is used to describe the efficiency of algorithms. It represents an upper bound on the runtime or space complexity of an algorithm as the input size grows large.
Time complexity: It measures how the running time of an algorithm increases with the size of the input.
Space complexity: It measures the amount of additional memory space required by an algorithm to solve a problem.
Asymptotic analysis: Big O notation simplifies the analysis by considering the dominant term with the highest growth rate and ignoring constant factors.
O(1) - Constant time: Algorithms with constant time complexity have a consistent running time regardless of the input size.
O(log n) - Logarithmic time: Algorithms with logarithmic time complexity have running times that increase slowly as the input size grows.
O(n) - Linear time: Algorithms with linear time complexity have running times directly proportional to the input size.
O(n log n) - Linearithmic time: Algorithms with linearithmic time complexity have running times proportional to n log n, slightly higher than linear but still efficient for large inputs.
O(n^2) - Quadratic time: Algorithms with quadratic time complexity have running times that increase with the square of the input size.
O(2^n) - Exponential time: Algorithms with exponential time complexity have running times that grow exponentially with the input size (see the Fibonacci sketch below).
O(n!) - Factorial time: Algorithms with factorial time complexity have running times that grow factorially with the input size.
Best case, worst case, and average case: Big O notation typically describes the worst-case scenario of an algorithm; the best and average cases are also important to consider.
Algorithmic efficiency: Big O notation provides a way to compare the efficiency of different algorithms.
Time complexity analysis: To determine the time complexity of an algorithm, analyze how the number of operations (or their execution time) scales with the input size.
Space complexity analysis: To determine the space complexity of an algorithm, consider the additional memory required and how it scales with the input size.
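To see how quickly exponential time blows up, consider the classic naive recursive Fibonacci function; this is a generic illustration in Java rather than an algorithm taken from elsewhere on this page.

```java
// Naive recursive Fibonacci: each call spawns two further calls,
// so the total number of calls grows roughly as 2^n.
public static long fib(int n) {
    if (n <= 1) {
        return n;
    }
    return fib(n - 1) + fib(n - 2);
}
```

Computing fib(50) this way already requires billions of recursive calls, while an iterative O(n) version finishes almost instantly.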
When analyzing the time complexity of an algorithm using Big O notation, we can apply certain rules to simplify and focus on the most significant aspects. Here are two important rules to keep in mind:
Dropping Constants
When analyzing the time complexity, we can drop the constant factors.
Example:
O(3n) can be simplified to O(n)
O(2n^2) can be simplified to O(n^2)
The reason for dropping constants is that as the input size grows larger, the impact of constant factors becomes less significant.
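As an illustrative sketch (the method name processAll is made up for this example), the loop below performs three constant-time operations per element, roughly 3n operations in total, which still simplifies to O(n):

```java
// About 3n operations overall; the constant factor 3 is dropped, so this is O(n).
public static int processAll(int[] nums) {
    int total = 0;
    for (int num : nums) {
        int doubled = num * 2;      // first operation per element
        int square = num * num;     // second operation per element
        total += doubled + square;  // third operation per element
    }
    return total;
}
```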
Focusing on the Most Significant Term
In many cases, an algorithm's time complexity is determined by the most significant term.
Example:
If an algorithm has a time complexity of O(n^2 + n), we focus on the term with the highest power, which is n^2. Thus, the time complexity is O(n^2).
If an algorithm has a time complexity of O(5n^3 + 10n^2), we focus on the term with the highest power, which is n^3. Thus, the time complexity is O(n^3).
By focusing on the most significant term, we capture the dominant factor that influences the growth rate of the algorithm as the input size increases.
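As a sketch (the method name countOperations is illustrative), the following method runs a nested loop of roughly n^2 iterations followed by a single loop of roughly n iterations, so its overall complexity simplifies to O(n^2):

```java
// O(n^2 + n), which simplifies to O(n^2) because the quadratic term dominates.
public static int countOperations(int[] nums) {
    int count = 0;
    // Nested loops: about n * n iterations.
    for (int i = 0; i < nums.length; i++) {
        for (int j = 0; j < nums.length; j++) {
            count++;
        }
    }
    // Single loop: about n more iterations.
    for (int i = 0; i < nums.length; i++) {
        count++;
    }
    return count;
}
```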
O(1) - Constant time
The runtime or space requirements of the algorithm remain constant, regardless of the input size.
Example:
The following loop will always execute exactly 10 times, regardless of the input size.
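A minimal Java sketch of such a loop:

```java
// Always runs exactly 10 iterations, no matter how large the input is: O(1).
for (int i = 0; i < 10; i++) {
    System.out.println("Iteration " + i);
}
```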
O(log n) - Logarithmic time
The runtime or space requirements grow logarithmically as the input size increases.
Example:
The following loop doubles the value of `i` in each iteration, so it executes approximately log n times.
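A minimal Java sketch, assuming n holds the input size:

```java
// i doubles each time (1, 2, 4, 8, ...), so the loop runs about log2(n) times.
for (int i = 1; i < n; i *= 2) {
    System.out.println("i = " + i);
}
```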
O(n) - Linear time
The runtime or space requirements grow linearly with the input size.
Example:
The following loop executes n times, directly proportional to the input size.
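A minimal Java sketch, again assuming n is the input size:

```java
// One iteration per element of the input: O(n).
for (int i = 0; i < n; i++) {
    System.out.println("i = " + i);
}
```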
O(n^2) - Quadratic time
The runtime or space requirements grow quadratically with the input size.
Example:
The following loop nests another loop inside it, resulting in n iterations of the inner loop for each iteration of the outer loop, or roughly n^2 operations in total.
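A minimal Java sketch, assuming n is the input size:

```java
// The inner loop runs n times for each of the n outer iterations: O(n^2) in total.
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        System.out.println(i + ", " + j);
    }
}
```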
Let's consider a simple Java method that sums up all the numbers in an array:
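A minimal sketch of such a method, using the sum variable referred to below:

```java
// Adds up every element; only one extra variable (sum) is needed.
public static int sum(int[] nums) {
    int sum = 0;
    for (int num : nums) {
        sum += num;
    }
    return sum;
}
```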
In this code, the space complexity is O(1) because the amount of memory used remains constant, regardless of the input size. The only memory required is for the `sum` variable, which can be pictured as a single box of memory.
Now, let's consider a different scenario where we create a new array to store the squared values of the input array:
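A sketch of what this might look like, with the array names nums and squared matching the description below (the method name squareAll is illustrative):

```java
// Allocates a new array the same length as the input: O(n) extra space.
public static int[] squareAll(int[] nums) {
    int[] squared = new int[nums.length];
    for (int i = 0; i < nums.length; i++) {
        squared[i] = nums[i] * nums[i];
    }
    return squared;
}
```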
In this code, the space complexity is O(n) because the memory required grows linearly with the input size. The array `squared` requires the same amount of memory as the input array `nums`, which can be pictured as a row of boxes, one per element.