A logarithmic O(log n) algorithm is one whose time complexity is O(log n), where ‘n’ represents the number of elements in the input data.
This means that the running time of the algorithm increases logarithmically with the size of the input.
In simpler terms, if you double the size of the input, the algorithm only needs about one more step, so the running time barely grows. This is in contrast to linear O(n) algorithms, which take twice as long when the input size doubles.
Linear algorithms are often slightly faster on very small inputs. However, once you reach real-world sizes, logarithmic algorithms are much faster.
Consider a binary search versus a brute-force search. For 10 data points, a brute-force O(n) search takes at most 10 steps and averages about 5, while a binary search averages about 3 steps. At 100 data points, a brute-force search can take up to 100 steps, averaging about 50, whereas a binary search needs at most 7.
The extra time it takes to write a binary search is made up in run-time savings if your application runs frequently or works with large data sets.
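To see that growth gap as code, here is a small back-of-the-envelope sketch (the variable names and the formula floor(log2 n) + 1 for binary search’s worst case are illustrative additions, not part of the original example):

import math

# Worst-case comparison counts for a brute-force scan (n comparisons)
# versus a binary search (about floor(log2 n) + 1 comparisons).
for n in (10, 100, 1_000, 1_000_000):
    linear_steps = n
    binary_steps = math.floor(math.log2(n)) + 1
    print(f"n = {n:>9,}: linear worst case = {linear_steps:>9,}, binary worst case = {binary_steps}")

The gap widens quickly: by a million elements, a linear scan may need a million comparisons while a binary search needs about 20.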
Characteristics of O(log n) Algorithms
- The algorithm reduces the problem size by a constant factor in each iteration.
- The algorithm uses a divide-and-conquer approach to solve the problem.
- The algorithm often has a recursive structure, with each recursive call reducing the problem size (though many, such as binary search, can also be written iteratively).
Real-World Examples of Logarithmic O(log n) Algorithms
- Binary Search: Binary search is a classic example of an O(log n) algorithm. Given a sorted array, binary search finds an element by repeatedly dividing the search space in half. If the element is found, the algorithm returns its index; otherwise, it returns -1. (A minimal sketch appears after this list.)
- Finding an Element in a Balanced Binary Search Tree: A balanced binary search tree is a data structure that keeps the height of the tree relatively small by balancing the left and right subtrees. Finding an element in a balanced binary search tree takes O(log n) time, where ‘n’ is the number of nodes in the tree.
- Exponentiation by Squaring: Exponentiation by squaring is an algorithm for computing the value of a number raised to a power. The algorithm uses a divide-and-conquer approach to reduce the problem size, resulting in a time complexity of O(log n).
- Merge Sort: Merge sort is a sorting algorithm that uses a divide-and-conquer approach to sort an array. Its overall time complexity is O(n log n): repeatedly halving the array produces O(log n) levels of recursion, and each level does O(n) work merging.
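Since binary search comes up so often, here is a minimal sketch of it in Python. It assumes the input list is already sorted and returns -1 when the target is absent, matching the description in the list above; the function and variable names are just for illustration.

def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if it is not present."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2           # middle of the current search space
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:  # target must be in the upper half
            low = mid + 1
        else:                             # target must be in the lower half
            high = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1

Each pass through the loop discards half of the remaining candidates, which is exactly the halving behavior that gives O(log n).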
Exponentiation by Squaring Code Example
Exponentiation by squaring is an efficient algorithm for computing the value of a number raised to a power. It works by repeatedly halving the exponent; since halving an integer is a cheap operation for a computer (effectively a single bit shift), the number of multiplications grows only logarithmically with the exponent.
Here’s an example implementation in Python:
def exponentiation_by_squaring(base, exponent):
    """
    Compute the value of base raised to the power of exponent using exponentiation by squaring.

    Args:
        base (int): The base number.
        exponent (int): The exponent.

    Returns:
        int: The result of base raised to the power of exponent.
    """
    if exponent == 0:
        return 1
    elif exponent < 0:
        return 1 / exponentiation_by_squaring(base, -exponent)
    elif exponent % 2 == 0:
        half_exponent = exponent // 2
        half_result = exponentiation_by_squaring(base, half_exponent)
        return half_result * half_result
    else:
        half_exponent = (exponent - 1) // 2
        half_result = exponentiation_by_squaring(base, half_exponent)
        return base * half_result * half_result

# Example usage:
base = 2
exponent = 10
result = exponentiation_by_squaring(base, exponent)
print(f"{base}^{exponent} = {result}")
This implementation uses a recursive approach to compute the result. The basic idea is to:
- Handle the base cases: if the exponent is 0, return 1; if the exponent is negative, compute the reciprocal.
- If the exponent is even, compute the result for half the exponent and square it.
- If the exponent is odd, compute the result for (exponent - 1) / 2, square it, and multiply by the base.
This algorithm has a time complexity of O(log n), where n is the exponent, making it much more efficient than the naive approach of repeated multiplication for large exponents.
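As an aside, the halving in this algorithm maps directly onto the bits of the exponent, so the same idea is often written iteratively with bit operations. The variant below is a sketch under that framing (it is not from the original example and only handles non-negative integer exponents):

def power_by_squaring_iterative(base, exponent):
    """Iterative exponentiation by squaring for non-negative integer exponents."""
    if exponent < 0:
        raise ValueError("this sketch only handles non-negative exponents")
    result = 1
    while exponent:
        if exponent & 1:    # low bit set: this power of two contributes to the result
            result *= base
        base *= base        # square the base for the next bit of the exponent
        exponent >>= 1      # shift to the next bit
    return result

print(power_by_squaring_iterative(2, 10))  # 1024

The loop runs once per bit of the exponent, so it also performs O(log n) multiplications.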