Dynamic programming systematically checks every possibility of a given problem.
To solve a problem with dynamic programming, we need a way to solve it recursively. The recursion must form a DAG: a bigger subproblem should depend only on smaller subproblems. Dynamic programming will not work if a smaller subproblem depends on a bigger one.
Dynamic programming problems are a favorite choice of interviewers, as they test an interviewee's understanding of recursion.
Steps to apply dynamic programming to a given problem:
0. Check if the problem can be solved using recursion. If you have n items, can you solve the problem once you already know the solution for the first n-1 items? Treat each subproblem's solution as optimal: if you know an optimal solution for every reachable lower state, how can you reach a higher state with that knowledge?
1. If step 0 holds, express the problem in terms of smaller problems. Observe how many variables you need; the number of variables determines the dimension of the array you will need to store the solutions.
2. Guesses is the number of choices your algorithm tries at each subproblem to combine small solutions into a bigger one.
3. From the variables, determine how many subproblems there are. Remember, a recursive call is cheap, O(1) time, once dynamic programming (memoization) is in place. Guesses * number of subproblems gives the order of the algorithm.
Example:
problem: find the nth Fibonacci number
Using recursion:
fibonacci(i) = fibonacci(i-1) + fibonacci(i-2)
number of variables is 1 (only i), which means number of subproblems = n
guesses = O(1) [fibonacci(i-1) + fibonacci(i-2) is combined in O(1) time; each recursive call is a cheap memoized lookup]
O(algorithm) = O(subproblem_count * guesses_at_each_subproblem) = O(n)
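The O(n) bound above can be seen in a short memoized sketch (a minimal illustration, not code from the post):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fibonacci(i):
    # Base cases: fibonacci(0) = 0, fibonacci(1) = 1
    if i < 2:
        return i
    # Each of the n subproblems is computed once and cached,
    # and combining the two answers is O(1), so total work is O(n).
    return fibonacci(i - 1) + fibonacci(i - 2)
```

Without the cache, the same recursion recomputes subproblems and blows up exponentially; the memoization is what makes each recursive call an O(1) lookup.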
Another Example:
Rod cutting problem: a rod of length i can be cut at any position, and we need to find the maximum profit we can make from selling the pieces.
profit(i) = max(profit(j) + profit(i-j)) for all 0 < j < i
here # of variables = 1 => # subproblems = n
#guesses = O(n) [need to try every cut point j from 1 to i-1 for a rod of length i]
O(n * n) = O(n ^ 2)
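A bottom-up sketch of the O(n^2) recurrence. The `price` list (price[j] = selling price of an uncut piece of length j) is an assumed input, not something defined in the post; it serves as the base case the recurrence builds on:

```python
def rod_profit(price):
    # price[j] = price of an uncut piece of length j; price[0] is unused.
    n = len(price) - 1
    profit = [0] * (n + 1)
    for i in range(1, n + 1):
        # Guess the length j of the first piece: O(n) guesses
        # for each of the n subproblems => O(n^2) overall.
        profit[i] = max(price[j] + profit[i - j] for j in range(1, i + 1))
    return profit[n]
```

For example, with prices 1, 5, 8, 9 for lengths 1..4, the best split of a length-4 rod is two pieces of length 2, for a profit of 10.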
Yet Another Example:
Knapsack problem
profit(i, capacity) = max(
    profit(i-1, capacity),                  // don't take the ith element
    profit(i-1, capacity - w[i]) + val[i]   // take the ith element (only valid when w[i] <= capacity)
)
We see 2 variables here, so #subproblems = item_count * bag_capacity, and the table storing the solutions is 2-dimensional.
#guesses = O(1)
O(algorithm) = O(item_count * bag_capacity)
This is a pseudo-polynomial runtime: if bag_capacity increases, we need a bigger array to store the intermediate results.
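The two-variable recurrence can be filled in bottom-up over a 2-D table (a sketch; `val` and `w` are assumed 0-indexed lists of item values and weights):

```python
def knapsack(val, w, capacity):
    n = len(val)
    # profit[i][c] = best value using the first i items with capacity c
    profit = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            profit[i][c] = profit[i - 1][c]            # don't take item i
            if w[i - 1] <= c:                          # take item i if it fits
                profit[i][c] = max(profit[i][c],
                                   profit[i - 1][c - w[i - 1]] + val[i - 1])
    return profit[n][capacity]
```

The table has item_count * bag_capacity cells and each cell takes O(1) guesses to fill, matching the runtime above; note the table size grows with the numeric value of bag_capacity, which is exactly why the runtime is pseudo-polynomial.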