Generally speaking, algorithm strategies can be grouped into several classifications. Some are very common, others less so. This does not mean that one is necessarily better or worse than another, nor do these classifications dictate how fast an algorithm might be. We'll discuss speed more when we talk about Big O Notation.
As with all algorithms, these are language and platform agnostic. While some languages have tools that make certain types of problems easier to solve, these algorithms can be applied in nearly any language.
Let's break down these general types of algorithms and look at some examples.
Brute Force
Brute force algorithms systematically try every possible solution to solve a problem. While they don’t optimize for efficiency, they can guarantee an answer if one exists.
Where You See It:
- Password Cracking: Trying all possible password combinations.
- Traveling Salesman Problem: Checking every route to find the shortest one.
- Pattern Matching: Scanning through a string to find a specific pattern.
- Linear Search: Searching through a list.
Example:
Finding the largest product of two numbers in a list by testing every pair (wrapped in a small function here so max_product starts with a valid value):
def max_pair_product(numbers):
    max_product = numbers[0] * numbers[1]  # start with the first pair
    for i in range(len(numbers)):
        for j in range(i + 1, len(numbers)):
            max_product = max(max_product, numbers[i] * numbers[j])
    return max_product
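For example, a hypothetical call (the wrapper function max_pair_product is ours, not part of the original snippet):
print(max_pair_product([3, 7, 2, 9]))  # 63, from 9 * 7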
Weaknesses:
- Inefficient for large input sizes.
- Computationally expensive due to exhaustive exploration.
Given these problems, you might ask why brute force algorithms exist at all.
Strengths:
- Simple to implement, especially for a new programmer.
- Guarantees a solution (if computationally feasible).
- May be fine for small data sets (the definition of “small” varies based upon other factors).
Greedy Algorithms
Greedy algorithms make the best local decision at each step, hoping it leads to the global optimum. They don’t revisit decisions made earlier.
Where You See It:
- Dijkstra’s Algorithm: Finding the shortest path in a graph.
- Huffman Coding: Compressing data by building optimal prefix trees.
- Knapsack Problem (Greedy Variant): Selecting items to maximize value while keeping within a weight limit.
Example:
In the Coin Change Problem, if you’re asked for the fewest coins to make change, a greedy algorithm takes the largest denomination first.
# coins is a list of coin values which you have to use
def coin_change(coins, amount):
    # sort in descending order, so you use the large coins first
    coins.sort(reverse=True)
    # initialize the coin counter
    total_coins = 0
    for coin in coins:
        total_coins += amount // coin
        amount %= coin
    return total_coins
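As a quick sanity check, here is a hypothetical call using US-style denominations:
print(coin_change([25, 10, 5, 1], 63))  # 6 coins: 25 + 25 + 10 + 1 + 1 + 1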
Strengths:
- Simple and fast: often easy for new programmers to write, and quick to write as well.
- Works well when the problem has the “greedy-choice property” (local choices lead to global optimality).
- Speed usually isn't a factor for small data sets, but for larger data sets greedy's efficiency pays off; just keep in mind the result might not be optimal.
Weaknesses:
- Doesn't guarantee the optimal solution for all problems.
- Example: greedy algorithms fail in certain cases of the knapsack problem, and in the coin change problem when the denominations are non-standard (see the sketch below).
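To see that failure concretely, here is a minimal sketch that reuses the coin_change function above with a made-up coin system of 1, 3, and 4:
print(coin_change([1, 3, 4], 6))  # greedy returns 3 (4 + 1 + 1)
# but the true optimum is 2 coins (3 + 3), which greedy never considers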
Divide and Conquer
When a problem is too big to solve directly, you divide it into smaller subproblems. From here you can solve them recursively (usually), or break them down further.
You then combine their solutions, and when you are finished, the entire problem is solved.
Where You See It:
- Various Sorting Algorithms: Dividing a list into halves and sorting them. We'll look at merge sort, quick sort, and others.
- Binary Search: Dividing a sorted list to find a target.
Merge Sort Example:
def merge_sort(arr):
    # If the array has more than one element, we need to split it
    if len(arr) > 1:
        # Find the middle point and split the array into two halves
        mid = len(arr) // 2
        left_half = arr[:mid]
        right_half = arr[mid:]
        # Recursively sort both halves
        merge_sort(left_half)
        merge_sort(right_half)
        # Initialize indices for left half (i), right half (j), and merged array (k)
        i = j = k = 0
        # Merge the two halves back together
        while i < len(left_half) and j < len(right_half):
            if left_half[i] < right_half[j]:
                arr[k] = left_half[i]
                i += 1
            else:
                arr[k] = right_half[j]
                j += 1
            k += 1
        # If there are any remaining elements in the left half, add them
        while i < len(left_half):
            arr[k] = left_half[i]
            i += 1
            k += 1
        # If there are any remaining elements in the right half, add them
        while j < len(right_half):
            arr[k] = right_half[j]
            j += 1
            k += 1
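For example, sorting a small list in place (the sample values are our own):
data = [38, 27, 43, 3, 9, 82, 10]
merge_sort(data)
print(data)  # [3, 9, 10, 27, 38, 43, 82]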
Strengths:
- Efficient for large datasets.
- Reduces problem size at every step.
Weaknesses:
- May use more memory for recursive calls.
- Some programmers have a hard time reasoning about recursion.
- Harder to implement than simpler methods.
Dynamic Programming (DP)
Dynamic programming solves problems by breaking them into overlapping subproblems and storing results of subproblems to avoid redundant calculations. This is sometimes called caching the results.
Where You See It:
- Fibonacci Sequence: Using memoization to store results of previous computations.
- Knapsack Problem (DP Variant): Finding the optimal solution by considering all possible combinations of items.
- Longest Common Subsequence (LCS): Finding the longest sequence that appears in the same order in two strings.
Example:
Fibonacci with Memoization:
def fibonacci(n, cache={}):
    """
    Calculate the nth Fibonacci number using memoization.

    Parameters:
    n (int): The position of the Fibonacci sequence to calculate.
    cache (dict): A dictionary to store previously calculated Fibonacci numbers.

    Returns:
    int: The nth Fibonacci number.
    """
    # Base cases: Return n if n is 0 or 1
    if n == 0 or n == 1:
        return n
    # If the value is already in the cache, return it
    if n in cache:
        return cache[n]
    # Calculate the nth Fibonacci number recursively and store it in the cache
    cache[n] = fibonacci(n - 1, cache) + fibonacci(n - 2, cache)
    return cache[n]
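One Python-specific note: the cache={} default is created once, when the function is defined, so the dictionary persists across calls; here that quirk is exactly what memoization needs. For example:
print(fibonacci(10))  # 55
print(fibonacci(50))  # 12586269025, computed almost instantly thanks to the cache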
Sometimes a language will help you: instead of writing this form of caching by hand, you can use something like lru_cache in Python.
from functools import lru_cache

@lru_cache(maxsize=None)
def fibonacci(n):
    """
    Calculate the nth Fibonacci number using caching (memoization).

    Parameters:
    n (int): The position of the Fibonacci sequence to calculate.

    Returns:
    int: The nth Fibonacci number.
    """
    # Base cases: Return n if n is 0 or 1
    if n == 0 or n == 1:
        return n
    # Recursive calculation using cached values
    return fibonacci(n - 1) + fibonacci(n - 2)
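Usage looks the same, and the decorator adds bookkeeping you can inspect (cache_info is part of functools.lru_cache):
print(fibonacci(50))           # 12586269025
print(fibonacci.cache_info())  # hits and misses recorded by the decorator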
While using a built-in tool makes your code faster to write, it is the same technique, and the hand-written version can be replicated easily in languages without such a tool.
Strengths:
- Great for optimization problems.
- Avoids redundant computations.
Weaknesses:
- Requires extra memory for storing results.
- Needs careful planning to define the recurrence relation.
Backtracking
Backtracking systematically explores potential solutions, abandoning paths that fail to meet constraints (i.e., “backtracking” to try a different path).
This differs from a greedy algorithm, which never backtracks to revisit a decision, assuming that the best local option will lead to the best option overall.
Where You See It:
- N-Queens Problem: Placing queens on a chessboard such that no two threaten each other.
- Sudoku Solver: Filling the grid while checking constraints.
- Maze Solving: Exploring paths until the exit is found.
Example:
N-Queens Problem:
def solve_n_queens(board, col):
    # All queens have been placed successfully
    if col >= len(board):
        return True
    for row in range(len(board)):
        if is_safe(board, row, col):
            board[row][col] = 1
            if solve_n_queens(board, col + 1):
                return True
            board[row][col] = 0  # backtrack: undo the placement
    return False
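The helper is_safe is not shown above; a minimal sketch, assuming board is an n-by-n grid of 0s and 1s with queens placed column by column, might look like this:
def is_safe(board, row, col):
    # A square is safe if no queen in an earlier column sits on
    # this row or on either left-hand diagonal.
    for c in range(col):
        up, down = row - (col - c), row + (col - c)
        if board[row][c] == 1:
            return False
        if 0 <= up < len(board) and board[up][c] == 1:
            return False
        if 0 <= down < len(board) and board[down][c] == 1:
            return False
    return True

n = 4
board = [[0] * n for _ in range(n)]
if solve_n_queens(board, 0):
    for row in board:
        print(row)  # prints one valid placement, one row per line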
Strengths:
- Finds all possible solutions.
- Useful for constraint satisfaction problems.
Weaknesses:
- Can be slow for large search spaces.
- May require optimization with pruning techniques.
- For large inputs, you can run into memory issues or time-outs if you don't prune.
When we want to prune, this leads us to another type of algorithm, often referred to as branch and bound.
Branch and Bound
Branch and bound is similar to backtracking, but it uses “bounding” to discard large portions of the search space that cannot contain the optimal solution.
Where You See It:
- Traveling Salesman Problem: Pruning subpaths that exceed the current shortest path.
- Knapsack Problem (Optimized): Bounding solutions that can’t exceed the current best value.
A good example of both of these is finding an initial solution, then looking for better ones. As soon as a partial solution is found to be no better than a known existing solution, i.e. once it is determined to take longer or go farther, you backtrack. You may also mark a node as not being worth following, so you don't go down that path again in the future, improving performance.
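Neither problem's code appears here, but a minimal branch and bound sketch for the 0/1 knapsack problem (the item values and weights are hypothetical) might look like this:
def knapsack_bb(items, capacity):
    # items is a list of (value, weight) pairs
    # Sort by value density so the fractional bound below is tight
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    best = 0

    def bound(i, value, room):
        # Optimistic estimate: fill the remaining room fractionally
        for v, w in items[i:]:
            if w <= room:
                value += v
                room -= w
            else:
                return value + v * room / w
        return value

    def branch(i, value, room):
        nonlocal best
        best = max(best, value)
        # Bound: abandon this subtree if it cannot beat the best known value
        if i == len(items) or bound(i, value, room) <= best:
            return
        v, w = items[i]
        if w <= room:
            branch(i + 1, value + v, room - w)  # branch one: take item i
        branch(i + 1, value, room)              # branch two: skip item i

    branch(0, 0, capacity)
    return best

print(knapsack_bb([(60, 10), (100, 20), (120, 30)], 50))  # 220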
Strengths:
- More efficient than plain backtracking.
- Guarantees the optimal solution.
Weaknesses:
- Computationally expensive for large problems.
- Requires careful implementation of bounds.
Randomized Algorithms
Randomized algorithms introduce randomness into the decision-making process to improve performance or simplicity.
Where You See It:
- Quicksort: Using a random pivot to avoid worst-case scenarios.
- Monte Carlo Methods: Estimating probabilities or solving problems like approximating π.
Example:
Randomized Quicksort:
import random

def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]  # keep duplicates of the pivot
    right = [x for x in arr if x > pivot]
    return quicksort(left) + middle + quicksort(right)
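For example (the sample values are our own; note the duplicate 2s survive):
print(quicksort([5, 2, 9, 2, 7]))  # [2, 2, 5, 7, 9]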
Strengths:
- Simple to implement.
- Often faster in practice.
Weaknesses:
- Can have unpredictable performance.
- Might not guarantee correctness (e.g., Monte Carlo algorithms; Las Vegas algorithms, by contrast, are always correct but have unpredictable running time).
Conclusion
Each algorithm type has strengths and weaknesses and is suited to specific problem types. Understanding their characteristics helps you choose the right approach for the job. If you’re unsure, start with simpler methods (like brute force) and refine your approach based on the problem’s requirements!