Dynamic Programming

Dynamic programming refers to a problem-solving approach in which we precompute and store the solutions to simpler, similar subproblems in order to build up the solution to a complex problem. It is similar to recursion, in which computing the base cases allows us to inductively determine the final value. This bottom-up approach works well when each new value depends only on previously calculated values.
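As a minimal sketch of this idea (in Python, with an illustrative function name), here is how the Fibonacci numbers can be built up from the bottom, each new entry depending only on values already stored:

def fibonacci(n):
    # fib[i] holds the i-th Fibonacci number; each new entry is built
    # only from values that have already been computed and stored.
    fib = [0, 1]
    for i in range(2, n + 1):
        fib.append(fib[i - 1] + fib[i - 2])
    return fib[n]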

An important property of a problem that is being solved through dynamic programming is that it should have overlapping subproblems. This is what distinguishes DP from divide and conquer in which storing the simpler values isn't necessary.

To show how powerful the technique can be, here are some of the most famous problems commonly approached through dynamic programming:

  1. Backpack Problem: Given a set of treasures with known values and weights, which of them should you pick to maximize your total value while staying within the fixed capacity of your backpack?
  2. Egg Dropping: Given \(n\) eggs and an \(m\)-floored building, what is the best strategy for determining the lowest floor from which a dropped egg cracks, using as few drops as possible?
  3. Longest Common Subsequence: Given two sequences, what is the longest subsequence common to both of them?
  4. Subset Sum Problem: Given a set and a value \(n,\) is there a subset the sum of whose elements is \(n?\)
  5. Fibonacci Numbers: Is there a better way to compute Fibonacci numbers than plain recursion?

In a contest environment, dynamic programming almost always comes up (and often in a surprising way, no matter how familiar the contestant is with it).


Motivational Example: Coin Change

What is the minimum number of coins of values \(v_1, v_2, v_3, \ldots, v_n\) required to make a total value of \(V?\)

You may use a denomination more than once.

Optimal Substructure

The most important property of this problem that encourages us to solve it through dynamic programming is that it can be broken down into smaller subproblems.

Let \(f(N)\) represent the minimum number of coins required for a value of \(N\).

Visualize \(f(N)\) as a stack of coins. What is the coin at the top of the stack? It could be any of \(v_1, v_2, v_3, \ldots, v_n\). If it were \(v_1\), the rest of the stack would amount to \(N-v_1;\) if it were \(v_2\), the rest of the stack would amount to \(N-v_2\), and so on.

How do we decide which one it is? We do not know yet; we need to check which choice minimizes the total number of coins required.

Going by the above argument, we could state the problem as the following recurrence:

\[ f(N) = \min\big\{ 1 + f(N - v_1),\ 1 + f(N - v_2),\ \ldots,\ 1 + f(N - v_n) \big\}, \]

where the minimum is taken only over those denominations \(v_i\) with \(v_i \le N.\)

Why is there a +1?
Because the coin at the top of the stack itself counts as one coin, and the rest of the stack, which amounts to \(N - v_i,\) is a smaller subproblem.
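As a sketch, this recurrence translates almost verbatim into naive recursion in Python (the function name is illustrative; the two base cases are the ones discussed in the Edge Cases section below):

# Direct translation of the recurrence; the same values get recomputed many times.
def coinsChangeNaive(V, v):
    if V == 0:
        return 0                  # an empty stack amounts to 0
    if V < 0:
        return float("inf")       # this value cannot be reached
    return min(1 + coinsChangeNaive(V - vi, v) for vi in v)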

Overlapping Subproblems

It is easy to see that the subproblems could be overlapping.

For example, if we are trying to make a stack of $11 using $1, $2, and $5, our look-up pattern would be like this:

\[\begin{aligned} f(11) &= \min \big\{ 1+f(10),\ 1+ f(9),\ 1 + f(6) \big\} \\ &= \min \big\{ 1+ \min \big\{ 1+f(9),\ 1+f(8),\ 1+f(5) \big\},\ 1+ f(9),\ 1 + f(6) \big\}. \end{aligned}\]

Clearly enough, we'll need to use the value of \(f(9)\) several times.

One of the most important aspects of optimizing our algorithms is that we do not recompute these values. To do this, we compute and store all the values of \(f\) from 1 onwards for potential future use.

Edge Cases

The recursion has to bottom out somewhere; in other words, it needs known values from which it can start.

For this problem, we need to take care of two things:

  1. The base case: a total of 0 requires no coins at all, so \(f(0) = 0.\)
  2. Totals that cannot be formed with the given denominations (equivalently, recursive calls with a negative argument) should be treated as requiring infinitely many coins, so that they never win the minimization.

The Algorithm

Let's sum up the ideas and see how we could implement this as an actual algorithm:

# V = the value we want, v = the list of available denominations
def coinsChange(V, v):
    dpTable = [float("inf")] * (V + 1)   # dpTable[i] will hold f(i)
    dpTable[0] = 0                       # base case: zero coins are needed for a value of 0
    for i in range(1, V + 1):
        for vi in v:
            if (i - vi) >= 0:
                dpTable[i] = min(dpTable[i], 1 + dpTable[i - vi])
    return dpTable[V]
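For instance, with the denominations from the example above, the call below reports that three coins \((5 + 5 + 1)\) suffice for a value of 11:

coinsChange(11, [1, 2, 5])   # returns 3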

Recursion with Memoization

We have claimed that naive recursion is a bad way to solve problems with overlapping subproblems. Why is that? Mainly because of all the recomputations involved.

Another way to avoid this problem is to compute each value the first time it is needed and store it as we go, in a top-down fashion.

Let's look at how one could solve the previous coin change problem using memoization.

def coinsChange(V, v):
    memo = {}                         # stores the values of f already computed
    def Change(V):
        if V in memo:
            return memo[V]
        if V == 0:
            return 0                  # base case
        if V < 0:
            return float("inf")       # this value cannot be reached
        memo[V] = min([1 + Change(V - vi) for vi in v])
        return memo[V]
    return Change(V)
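Called on the same input, this version gives the same answer, while only filling in the values of \(f\) that the top-down recursion actually touches:

coinsChange(11, [1, 2, 5])   # returns 3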

Dynamic Programming vs Recursion with Caching

Dynamic Programming:
  1. Faster if many subproblems are visited, as there is no overhead from recursive calls.
  2. The complexity of the program is easier to see.

Recursion with Caching:
  1. Intuitive approach.
  2. Computes only those subproblems which are necessary.

Bidimensional Dynamic Programming: Example


There are \(k\) types of brackets each with its own opening bracket and closing bracket. We assume that the first pair is denoted by the numbers 1 and \(k+1,\) the second by 2 and \(k+2,\) and so on. Thus the opening brackets are denoted by \(1, 2, \ldots, k,\) and the corresponding closing brackets are denoted by \(k+1, k+2, \ldots, 2k,\) respectively.

Some sequences with elements from \(1, 2, \ldots, 2k\) form well-bracketed sequences while others don't. A sequence is well-bracketed if we can match or pair up opening brackets and closing brackets of the same type in such a way that the following holds:

  1. Every bracket is paired up.
  2. In each matched pair, the opening bracket occurs before the closing bracket.
  3. For a matched pair, any other matched pair lies either completely between them or outside them.

In this problem, you are given a sequence of brackets of length \(N\): \(B[1], \ldots, B[N]\), where each \(B[i]\) is one of the brackets. You are also given an array of Values: \(V[1],\ldots, V[N] \).

Among all subsequences of the Values array whose corresponding bracket subsequence in the B array forms a well-bracketed sequence, you need to find the maximum possible sum.

Task: Solve the above problem for this input.

Input Format

One line, which contains \((2\times N + 2)\) space-separated integers. The first integer denotes \(N.\) The next integer is \(k.\) The next \(N\) integers are \(V[1], \ldots, V[N].\) The last \(N\) integers are \(B[1], \ldots, B[N].\)

Constraints

Illustrated Examples

We'll try to solve this problem with the help of a dynamic program in which the state, i.e. the set of parameters that describes the problem, consists of two variables.

First, we set up a two-dimensional array dp[start][end] where each entry solves the indicated problem for the part of the sequence between start and end inclusive.

We'll try to think about what happens when we run across a new end value and need to solve the new problem in terms of the previously solved subproblems. Here are all the possibilities for the bracket at position end:

  1. It is not part of any matched pair in the chosen subsequence. In that case, the best we can do is dp[start][end - 1].
  2. It is a closing bracket, paired with an opening bracket of the same type at some position i with start ≤ i < end. The pair contributes V[i] + V[end], and every other chosen pair must lie either strictly between i and end or entirely before i, so this choice yields dp[start][i - 1] + V[i] + V[end] + dp[i + 1][end - 1].

The entry dp[start][end] is the maximum over all of these candidates.

Can you use these ideas to solve the problem?
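Here is one possible sketch of how these cases might be implemented (the function name, the 0-based indexing, and the convention that the empty subsequence contributes a sum of 0 are assumptions made for illustration, not part of the original problem statement):

def max_bracketed_sum(N, k, V, B):
    # dp[s][e] = best sum of a well-bracketed subsequence chosen from B[s..e]
    # (0-indexed, inclusive); a single position can never hold a matched pair.
    dp = [[0] * N for _ in range(N)]
    for length in range(2, N + 1):
        for s in range(N - length + 1):
            e = s + length - 1
            best = dp[s][e - 1]                       # case 1: B[e] is unused
            if B[e] > k:                              # B[e] closes bracket type B[e] - k
                for i in range(s, e):
                    if B[i] == B[e] - k:              # case 2: pair B[i] with B[e]
                        left = dp[s][i - 1] if i > s else 0
                        inner = dp[i + 1][e - 1] if i + 1 <= e - 1 else 0
                        best = max(best, left + V[i] + V[e] + inner)
            dp[s][e] = best
    return dp[0][N - 1] if N > 0 else 0

For example, with \(N = 4,\) \(k = 2,\) \(V = [3, 5, 2, 7],\) and \(B = [1, 2, 4, 3]\) (the whole sequence is well-bracketed), this sketch returns \(3 + 5 + 2 + 7 = 17.\)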

Example: Maximum Paths

Very often, dynamic programming helps solve problems that ask us to find the most profitable (or least costly) path in an implicit graph setting. Let us try to illustrate this with an example.

You are supposed to start at the top of a number triangle and choose your passage all the way down by selecting between the numbers below you to the immediate left or right.

Your goal is to maximize the sum of the elements lying in your path.

For example, in the triangle below, the red path maximizes the sum.

To see the optimal substructure and the overlapping subproblems, notice that every time we make a move from the top to the bottom left or the bottom right, we are left with a smaller number triangle, much like this:

We could break each of these subproblems down in a similar way until we reach an edge case at the bottom:

In this case, the solution is a + max(b, c).

A bottom-up dynamic programming solution is to allocate a number triangle that stores, at each position, the maximum sum reachable if we were to start from that position. It is easy to fill in this number triangle from the bottom row onward, using the fact that the maximum sum starting at any position is the number at that position plus the larger of the maximum sums starting at the two positions just below it.
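As a sketch of that bottom-up computation (the triangle is represented as a list of rows, and the function name is illustrative):

def max_triangle_path(triangle):
    # best[j] holds the maximum sum reachable from position j of the row
    # currently being processed; we start from a copy of the bottom row.
    best = list(triangle[-1])
    # Move upward, replacing each number by itself plus the better of its two children.
    for row in reversed(triangle[:-1]):
        best = [value + max(best[j], best[j + 1]) for j, value in enumerate(row)]
    return best[0]

For instance, max_triangle_path([[1], [2, 3], [4, 5, 6]]) returns 10, obtained along the path 1 → 3 → 6.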

Let us demonstrate this principle through the iterations.

Iteration 1:

[Figure: the triangle after the first iteration, showing the row 8 5 9 3]