Dynamic Programming: An Overview and Examples


Dynamic programming is a technique for solving complex problems by breaking them down into smaller subproblems and storing the solutions to these subproblems in a table or array. This technique is commonly used in computer science and operations research, and it can be applied to a wide range of problems, including optimization, scheduling, and resource allocation. In this blog post, we will provide an overview of dynamic programming and provide several examples to illustrate how it works in practice.

What is Dynamic Programming?

Dynamic programming applies to optimization problems that decompose into overlapping subproblems: the same smaller problems appear repeatedly, so solving each one once and storing the result in a table or array pays off. The technique is widely used in computer science and operations research for problems such as scheduling, resource allocation, and combinatorial optimization.

The key idea behind dynamic programming is to break a complex problem into a series of smaller subproblems, each of which can be solved independently. The solutions to these subproblems are then stored in a table or array and can be used to solve the original problem. This approach can be very efficient, as it avoids the need to resolve the same subproblems over and over again.

To solve a problem using dynamic programming, we must first identify the subproblems that make up the original problem and then find a way to store the solutions to these subproblems in a table or array. We can then use this table or array to solve the original problem by looking up the solutions to the subproblems and combining them in a way that gives us the optimal solution to the original problem.

How Does Dynamic Programming Work?

Dynamic programming works by breaking a complex problem into a series of smaller subproblems, each of which can be solved independently. The solutions to these subproblems are then stored in a table or array and can be used to solve the original problem.

To understand how dynamic programming works, let’s consider the following example:

Example: Fibonacci Numbers

The Fibonacci numbers are a sequence of numbers that are defined as follows:

  • F(0) = 0
  • F(1) = 1
  • F(n) = F(n-1) + F(n-2) for n > 1

For example, the first few Fibonacci numbers are 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, etc.

To compute the nth Fibonacci number using dynamic programming, we can use the following steps:

  1. Identify the subproblems: In this case, the subproblems are the computation of the Fibonacci numbers for smaller values of n.
  2. Create a table or array to store the solutions to the subproblems: We can create an array with one entry for each value from 0 to n, and store the solution to the subproblem F(i) in the i-th entry.
  3. Initialize the table or array: We can initialize it by setting the first two entries, table[0] and table[1], to 0 and 1, respectively.
  4. Solve the subproblems: We can then solve the subproblems by filling in the rest of the table or array, using the recurrence relation F(n) = F(n-1) + F(n-2).
  5. Use the table or array to solve the original problem: Once the table or array is filled in, we can use the solution to the subproblem F(n) to solve the original problem of computing the nth Fibonacci number.

Here is some sample code that demonstrates how dynamic programming can be used to compute the nth Fibonacci number:

def fibonacci(n):
    # Handle the base cases directly; for n = 0 or n = 1 the table
    # below would be too small to hold both base entries
    if n < 2:
        return n

    # Create a table to store the solutions to the subproblems
    table = [0] * (n+1)

    # Initialize the table
    table[0] = 0
    table[1] = 1

    # Solve the subproblems
    for i in range(2, n+1):
        table[i] = table[i-1] + table[i-2]

    # Return the solution to the original problem
    return table[n]

# Test the function
print(fibonacci(0))   # Output: 0
print(fibonacci(1))   # Output: 1
print(fibonacci(5))   # Output: 5
print(fibonacci(10))  # Output: 55

In this example, we used a list to store the solutions to the subproblems. It is also possible to use a dictionary, which is convenient when the subproblems are solved top-down (memoization) or when only some of them are actually needed. The choice of data structure will depend on the specific requirements of the problem being solved.
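The dictionary-based alternative is usually written top-down: instead of filling the table in order, we recurse and cache each result the first time it is computed. Here is a minimal sketch (the name fibonacci_memo is ours for illustration):

```python
def fibonacci_memo(n, memo=None):
    # Top-down (memoized) version: a dictionary caches each subproblem
    # the first time it is solved, so repeated calls become O(1) lookups
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    if n < 2:
        return n
    memo[n] = fibonacci_memo(n - 1, memo) + fibonacci_memo(n - 2, memo)
    return memo[n]

print(fibonacci_memo(10))  # Output: 55
```

This computes the same values as the table-based version; the difference is only in which subproblems get solved (just the ones actually reached by the recursion).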

When to Use Dynamic Programming

Dynamic programming is a useful technique for solving optimization problems, but it is not always the best approach. In general, dynamic programming is most effective when the following conditions are met:

  • The problem can be broken down into smaller subproblems that can be solved independently.
  • The solutions to the subproblems can be combined to give the solution to the original problem.
  • The solutions to the subproblems are needed multiple times, and it is more efficient to store them in a table or array rather than recalculate them each time.

Examples of Dynamic Programming

In this section, we will consider several examples of dynamic programming to illustrate how it works in practice.

Example: Knapsack Problem

The knapsack problem is a classic example of a problem that can be solved using dynamic programming. In this problem, we are given a set of items, each with a weight and a value, and a knapsack with a maximum weight capacity. The goal is to select a subset of the items that can be placed in the knapsack such that the total weight is less than or equal to the capacity of the knapsack, and the total value is maximized.

To solve the knapsack problem using dynamic programming, we can use the following steps:

  1. Identify the subproblems: In this case, the subproblems are the selection of items for smaller subsets of the total set of items.
  2. Create a table or array to store the solutions to the subproblems: We can create a two-dimensional table or array with one row for each item and one column for each possible weight. The value in table[i][w] represents the maximum value that can be obtained by selecting a subset of the first i items with a total weight of w or less.
  3. Initialize the table or array: We can initialize the table or array by setting the first row and first column to 0. The first row represents the case where no items are selected, and the first column the case where the knapsack capacity is 0.
  4. Solve the subproblems: We can then solve the subproblems by filling in the rest of the table or array using the following recurrence relation:
table[i][w] = max(table[i-1][w], table[i-1][w-weight[i-1]] + value[i-1])

(If weight[i-1] > w, the i-th item does not fit, so table[i][w] = table[i-1][w].)

This recurrence relation states that the maximum value that can be obtained by selecting a subset of the first i items with a total weight of w or less is either the maximum value that can be obtained by selecting a subset of the first i-1 items with a total weight of w or less (if the i-th item is not included), or the maximum value that can be obtained by selecting a subset of the first i-1 items with a total weight of w-weight[i-1] or less plus the value of the i-th item (if the i-th item is included).

  5. Use the table or array to solve the original problem: Once the table or array is filled in, we can use the solution to the subproblem table[n][W] to solve the original problem of selecting a subset of the n items with a total weight of W or less which maximizes the total value.

Here is some sample code that demonstrates how dynamic programming can be used to solve the knapsack problem:

def knapsack(items, weight, value, W):
    # Create a table to store the solutions to the subproblems
    n = len(items)
    table = [[0] * (W+1) for _ in range(n+1)]
    # Initialize the table
    for i in range(n+1):
        for w in range(W+1):
            if i == 0 or w == 0:
                table[i][w] = 0

    # Solve the subproblems
    for i in range(1, n+1):
        for w in range(1, W+1):
            if weight[i-1] > w:
                table[i][w] = table[i-1][w]
            else:
                table[i][w] = max(table[i-1][w], table[i-1][w-weight[i-1]] + value[i-1])

    # Return the solution to the original problem
    return table[n][W]

# Test the function

items = ["item1", "item2", "item3", "item4"] 
weight = [10, 20, 30, 40] 
value = [50, 60, 70, 80] 
W = 50 

print(knapsack(items, weight, value, W)) # Output: 130

In this example, we used a two-dimensional table to store the solutions to the subproblems. However, it is also possible to use a one-dimensional array or a dictionary to store the solutions. The choice of data structure will depend on the specific requirements of the problem being solved.
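For instance, the one-dimensional variant keeps a single row of the table and reuses it for every item. This is a sketch under the same 0/1 assumption as above (the name knapsack_1d is ours); iterating the capacities from high to low ensures each item is counted at most once:

```python
def knapsack_1d(weight, value, W):
    # One row reused across items; table[w] holds the best value
    # achievable with capacity w using the items seen so far
    table = [0] * (W + 1)
    for i in range(len(weight)):
        # Go from high to low so each item is used at most once
        for w in range(W, weight[i] - 1, -1):
            table[w] = max(table[w], table[w - weight[i]] + value[i])
    return table[W]

print(knapsack_1d([10, 20, 30, 40], [50, 60, 70, 80], 50))  # Output: 130
```

This uses O(W) space instead of O(nW), at the cost of no longer being able to reconstruct which items were chosen from the table alone.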

Example: Longest Common Subsequence

The longest common subsequence (LCS) problem is another classic example of a problem that can be solved using dynamic programming. In this problem, we are given two strings X and Y, and the goal is to find the longest subsequence that is common to both strings. A subsequence is a sequence of characters that appears in the same order in the original strings, but not necessarily consecutively.

To solve the LCS problem using dynamic programming, we can use the following steps:

  1. Identify the subproblems: In this case, the subproblems are the LCS of the prefixes of the two strings.
  2. Create a table or array to store the solutions to the subproblems: We can create a two-dimensional table with one row for each prefix of X (including the empty prefix) and one column for each prefix of Y. The value in table[i][j] represents the length of the LCS of the prefixes X[:i] and Y[:j].
  3. Initialize the table or array: We can initialize the table or array by setting the first row and column to 0. This represents the case where one or both of the prefixes is empty.
  4. Solve the subproblems: We can then solve the subproblems by filling in the rest of the table or array using the following recurrence relation:
if X[i-1] == Y[j-1]: 
    table[i][j] = table[i-1][j-1] + 1 
else: 
    table[i][j] = max(table[i-1][j], table[i][j-1])

This recurrence relation states that the length of the LCS of the prefixes X[:i] and Y[:j] is either the length of the LCS of the prefixes X[:i-1] and Y[:j-1] plus 1 (if the last characters of the prefixes are the same), or the maximum length of the LCS of the prefixes X[:i-1] and Y[:j] and the length of the LCS of the prefixes X[:i] and Y[:j-1] (if the last characters of the prefixes are different).

  5. Use the table or array to solve the original problem: Once the table or array is filled in, we can use the solution to the subproblem table[m][n] to solve the original problem of finding the length of the LCS of the two strings X and Y, where m and n are the lengths of the strings X and Y, respectively.

Here is some sample code that demonstrates how dynamic programming can be used to solve the LCS problem:

def LCS(X, Y):
    # Create a table to store the solutions to the subproblems
    m = len(X)
    n = len(Y)
    table = [[0] * (n+1) for _ in range(m+1)]
    # Initialize the table
    for i in range(m+1):
        for j in range(n+1):
            if i == 0 or j == 0:
                table[i][j] = 0

    # Solve the subproblems
    for i in range(1, m+1):
        for j in range(1, n+1):
            if X[i-1] == Y[j-1]:
                table[i][j] = table[i-1][j-1] + 1
            else:
                table[i][j] = max(table[i-1][j], table[i][j-1])

    # Return the solution to the original problem
    return table[m][n]

# Test the function

X = "abcbdab" 
Y = "bdcaba" 

print(LCS(X, Y)) # Output: 4

In this example, we used a two-dimensional table to store the solutions to the subproblems. However, it is also possible to use a one-dimensional array or a dictionary to store the solutions. The choice of data structure will depend on the specific requirements of the problem being solved.
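The table also lets us recover an actual longest common subsequence, not just its length, by walking backwards from table[m][n]. A sketch (LCS_string is our name for this variant; when several subsequences tie, it returns one of them):

```python
def LCS_string(X, Y):
    # Build the same DP table as before
    m, n = len(X), len(Y)
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i-1] == Y[j-1]:
                table[i][j] = table[i-1][j-1] + 1
            else:
                table[i][j] = max(table[i-1][j], table[i][j-1])

    # Walk back from table[m][n], collecting matched characters
    result = []
    i, j = m, n
    while i > 0 and j > 0:
        if X[i-1] == Y[j-1]:
            result.append(X[i-1])
            i -= 1
            j -= 1
        elif table[i-1][j] >= table[i][j-1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(result))

print(len(LCS_string("abcbdab", "bdcaba")))  # Output: 4
```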

Example: Matrix Chain Multiplication

The matrix chain multiplication problem is another example of a problem that can be solved using dynamic programming. In this problem, we are given a chain of n matrices, and the goal is to find the most efficient way to multiply the matrices in the chain. The efficiency of matrix multiplication is determined by the number of scalar multiplications required to perform the multiplication.

To solve the matrix chain multiplication problem using dynamic programming, we can use the following steps:

  1. Identify the subproblems: In this case, the subproblems are the optimal ways to multiply the matrices in smaller subchains of the original chain.
  2. Create a table or array to store the solutions to the subproblems: We can create a two-dimensional table or array with one row for each matrix in the chain and one column for each matrix in the chain. The value in table[i][j] represents the minimum number of scalar multiplications required to multiply the matrices in the subchain from i to j.
  3. Initialize the table or array: We can initialize the table or array by setting the diagonal elements to 0. This represents the case where the subchain consists of only one matrix.
  4. Solve the subproblems: We can then solve the subproblems by filling in the rest of the table or array using the following recurrence relation:
table[i][j] = min(table[i][k] + table[k+1][j] + p[i-1]*p[k]*p[j] for k in range(i, j))

This recurrence relation states that the minimum number of scalar multiplications required to multiply the matrices in the subchain from i to j is the minimum number of scalar multiplications required to multiply the matrices in the subchain from i to k plus the number of scalar multiplications required to multiply the matrices in the subchain from k+1 to j plus the number of scalar multiplications required to multiply the two resulting matrices, for all possible values of k in the range from i to j. The values p[i] represent the dimensions of the matrices in the chain, such that the matrix at index i has dimensions p[i-1] by p[i].

  5. Use the table or array to solve the original problem: Once the table or array is filled in, we can use the solution to the subproblem table[1][n] (table[0][n-1] with 0-based indexing) to solve the original problem of finding the minimum number of scalar multiplications required to multiply the matrices in the chain.

Here is some sample code that demonstrates how dynamic programming can be used to solve the matrix chain multiplication problem:

def matrix_chain_multiplication(p):
    # Create a table to store the solutions to the subproblems
    n = len(p) - 1
    table = [[0] * n for _ in range(n)]
    # Initialize the table
    for i in range(n):
        table[i][i] = 0

    # Solve the subproblems
    for l in range(2, n+1):
        for i in range(n-l+1):
            j = i + l - 1
            table[i][j] = float("inf")
            for k in range(i, j):
                # Matrix i has dimensions p[i] x p[i+1] (0-indexed), so
                # splitting at k adds p[i]*p[k+1]*p[j+1] scalar multiplications
                table[i][j] = min(table[i][j], table[i][k] + table[k+1][j] + p[i]*p[k+1]*p[j+1])

    # Return the solution to the original problem
    return table[0][n-1]

# Test the function

p = [30, 35, 15, 5, 10, 20, 25] 

print(matrix_chain_multiplication(p)) # Output: 15125

In this example, we used a two-dimensional table to store the solutions to the subproblems. However, it is also possible to use a one-dimensional array or a dictionary to store the solutions. The choice of data structure will depend on the specific requirements of the problem being solved.
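As a sketch of the dictionary-based alternative, the same recurrence can be written top-down with functools.lru_cache, which memoizes each (i, j) subproblem (matrix_chain_memo is our name; matrices are 0-indexed, with matrix i of dimensions p[i] by p[i+1]):

```python
from functools import lru_cache

def matrix_chain_memo(p):
    # Top-down version: lru_cache stores each subproblem cost(i, j)
    # the first time it is computed
    n = len(p) - 1  # number of matrices

    @lru_cache(maxsize=None)
    def cost(i, j):
        if i == j:
            return 0  # a single matrix needs no multiplications
        return min(cost(i, k) + cost(k + 1, j) + p[i] * p[k + 1] * p[j + 1]
                   for k in range(i, j))

    return cost(0, n - 1)

print(matrix_chain_memo([30, 35, 15, 5, 10, 20, 25]))  # Output: 15125
```

This solves exactly the same subproblems as the bottom-up table, but the order of evaluation is driven by the recursion rather than by explicit loops.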

Conclusion

In this blog post, we have provided an overview of dynamic programming and several examples of how it can be used to solve optimization problems. Dynamic programming is a powerful technique that can be used to solve a wide variety of problems, but it is not always the most efficient or appropriate approach. It is important to carefully consider the specific characteristics of the problem being solved and choose the most appropriate solution method.

I hope you found this blog post helpful. If you have any questions or comments, please let me know in the comments section below.
