Dynamic Programming - Tackling Complex Problems

Some optimization problems are practical to solve when the problem is small but become extremely time consuming as the problem grows larger.

Dynamic Programming gets around this issue with the following ideas:

- Break the problem into smaller subproblems.
- Solve each subproblem only once, saving its result.
- Reuse the saved results instead of recalculating them.
- Combine the subproblem results into the overall solution.

We will look at 3 problems that illustrate the technique, along with example code for the first (and easiest). The procedure is quite subtle and varies somewhat with each problem, but once you grasp the ideas, Dynamic Programming is not hard to use.

The Rod Cutting Problem

Imagine a factory that produces 10 foot (3 m) lengths of rod which may be cut into shorter lengths that are then sold. The lengths are always a whole number of feet, from one foot to ten.

The demand for the different lengths varies and so does the price. Perhaps more popular lengths command a higher price per foot. The price scale is as follows:

Length of rod in feet:    1  2  3  4   5   6   7   8   9  10
Price of rod in dollars:  1  5  8  9  10  17  17  20  24  30

The problem is to cut a rod into an optimum set of smaller lengths to maximize the total price.

Our approach to the problem is as follows. The optimum solution will contain a set of lengths. We can start from the left and try each possible cut from 1 to n (the length of the rod), asking the question: does this cut leave the left side as one of the lengths in the optimal set? To find out, apply the same procedure to the right-hand side. Eventually, the optimal set of cuts can be found.

As an example, let's optimize the cutting of a rod of 3 feet. We can leave it at 3 feet, getting a value of $8, or cut it into 2 pieces (1-2 or 2-1) for a value of $6, or cut it into 3 pieces, which gives us only $3.
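This small case can be checked by brute force. The following sketch (written in Python 3 syntax, unlike the Python 2 listings in this article) enumerates every ordered way to cut a 3-foot rod and picks the most valuable:

```python
price = (0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30)

def all_cuttings(feet):
    """Yield every ordered sequence of cuts for a rod of the given length."""
    if feet == 0:
        yield ()
    for first in range(1, feet + 1):
        for rest in all_cuttings(feet - first):
            yield (first,) + rest

best = max(all_cuttings(3), key=lambda cuts: sum(price[c] for c in cuts))
print(best, sum(price[c] for c in best))   # (3,) 8
```

The four candidates are (1,1,1), (1,2), (2,1), and (3,), worth $3, $6, $6, and $8, matching the arithmetic above.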

Here is our first program. The code is quite short:

01 # rod1.py
02 #
03 # feet     1 2 3 4  5  6  7  8  9 10
04 price = (0,1,5,8,9,10,17,17,20,24,30)
05
06 def rod1(feet, indent=0) :
07     if feet < 1 : return 0
08     print " "*indent, "Entering with length %d" % feet
09     best = 0
10     for cut in range(1,feet+1) :
11         best = max(best, price[cut]+rod1(feet-cut,indent+1))
12     return best
13
14 import sys
15 feet = int(sys.argv[1])
16 bestPrice = rod1(feet)
17 print "\n%d feet. Best price $%d" % (feet, bestPrice)

Running the program:

$ python rod1.py 3
 Entering with length 3
  Entering with length 2
   Entering with length 1
  Entering with length 1
3 feet. Best price $8

The "Entering" message from line 8 lets us track the recursion going on in line 11. The sub-computations add the price of the left side of the cut to the optimum for the remaining length:

price[cut] + rod1(feet-cut,indent+1)

The fun starts with bigger rods. Each additional foot in the length doubles the number of computations. Look at the following in Linux:

$ python rod1.py 9 |grep Enter |wc
   256    1024    7168

This means the output is 256 lines long. For those not familiar with Unix, we're running the program, filtering the output for just the lines containing the word "Enter" and then counting them. If we optimally cut a 10-foot rod:

$ python rod1.py 10 |grep Enter |wc
   512    2048   14593

we get, unsurprisingly, twice as many. But we are recalculating the same subproblems again and again. The optimum solution for a length of 4 feet is calculated here 32 times:

$ python rod1.py 10 |grep "length 4" |wc
   32     128     880
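The doubling is easy to confirm by counting calls directly. Here is a Python 3 sketch that increments a counter exactly where rod1 would print its "Entering" message:

```python
price = (0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30)
calls = 0

def rod1(feet):
    global calls
    if feet < 1:
        return 0
    calls += 1                      # one "Entering" message per call
    return max(price[cut] + rod1(feet - cut) for cut in range(1, feet + 1))

for feet in (9, 10):
    calls = 0
    best = rod1(feet)
    print(feet, calls, best)        # 9 256 25, then 10 512 30
```

A rod of n feet triggers 2**(n-1) calls, matching the line counts from grep.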

Making it Better

Instead of repeating calculations, save and reuse the results. Here is a better version:

01 # rod2.py
02 #
03 # feet     1 2 3 4  5  6  7  8  9 10
04 price = (0,1,5,8,9,10,17,17,20,24,30)
05
06 stash = {}
07
08 def rod2(feet, indent=0) :
09     stashed = stash.get(feet)
10     if stashed : return stashed
11
12     if feet < 1 : return 0
13     print " "*indent, "Entering with length %d" % feet
14     best = 0
15     for cut in range(1,feet+1) :
16         best = max(best, price[cut]+rod2(feet-cut,indent+1))
17     stash[feet] = best
18     return best
19
20 import sys
21 feet = int(sys.argv[1])
22 bestPrice = rod2(feet)
23 print "\n%d feet. Best price $%d" % (feet, bestPrice)

In line 6 we create a dictionary to store solutions. In lines 9 and 10 we just use the lookup instead of repeating a calculation. And if we do calculate, we store the answer at line 17.

Here is what the output looks like now:

$ python rod2.py 9
 Entering with length 9
  Entering with length 8
   Entering with length 7
    Entering with length 6
     Entering with length 5
      Entering with length 4
       Entering with length 3
        Entering with length 2
         Entering with length 1

9 feet. Best price $25
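With the stash in place, each length is solved only once, so a 9-foot rod now triggers just nine real computations instead of 256. The same idea in a Python 3 sketch, counting calls:

```python
price = (0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30)
stash = {}
calls = 0

def rod2(feet):
    global calls
    if feet < 1:
        return 0
    if feet in stash:               # reuse a saved result
        return stash[feet]
    calls += 1                      # a real computation happens
    best = max(price[cut] + rod2(feet - cut) for cut in range(1, feet + 1))
    stash[feet] = best
    return best

print(rod2(9), calls)               # 25 9
```

The running time drops from exponential to linear in the length of the rod.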

But we're still missing something. We get the optimum price but we also need to save how to get there. We can do that by saving (and returning) both the optimum price (bestPrice) and the cut (bestCut) that starts us on our way:

01 # rod3.py
02 #
03 # feet     1 2 3 4  5  6  7  8  9 10
04 price = (0,1,5,8,9,10,17,17,20,24,30)
05
06 stash = {}
07
08 def rod3(feet, indent=0) :
09     stashed = stash.get(feet)
10     if stashed : return stashed[0]  # just the price
11
12     if feet < 1 : return 0
13     print " "*indent, "Entering with length %d" % feet
14     bestPrice, bestCut = (0,0)  # (price, cutpoint)
15     for cut in range(1,feet+1) :
16         newPrice = (price[cut]+rod3(feet-cut,indent+1))
17         if newPrice > bestPrice :
18             bestPrice = newPrice
19             bestCut = cut
20     stash[feet] = (bestPrice, bestCut)
21     return bestPrice
22
23 import sys
24 feet = int(sys.argv[1])
25 bestPrice = rod3(feet)
26 print "\n%d feet. Best price $%d lengths are " % (feet, bestPrice),
27
28 # - now find the order of cutting for optimum
29 rest = feet
30 while rest :
31     _, cut = stash[rest]
32     print cut,
33     rest -= cut
34 print

Now the stash contains a tuple for each entry, of the form (price, cutpoint). Using the max function is awkward now that we also need to remember which cut was best, so lines 16-19 replace the earlier line 16. Lines 28-34 reconstruct the series of cuts leading to the optimal solution.

Here is a small run:

$ python rod3.py 9
9 feet. Best price $25 lengths are  3 6

The code for these programs is in dynamic.zip

The Minimum Path Problem

Take a look at the following triangle. The problem is to start at the top and find the path to the bottom with the least cost. The cost at each step is indicated by the number there and the total cost is the sum of costs for each step visited. At each step you may choose to go down either to the right or to the left.

images/s0.png

There are seven rows in the triangle, so six steps will always take you to the bottom. The number of different paths is 64 (Why?). And so each path can be represented by a 6 digit binary number where each bit simply indicates whether we go to the right or to the left from our current location. We could choose a convention that '1' indicates a right branch and '0' indicates a left branch.

Here are two paths shown in green for the triangle above.

images/s1.png images/s2.png

For the path on the left the binary number is 111010. On the right 100111.

One perfectly reasonable approach to finding the least cost path might be to simply generate the 64 numbers (from binary 000000 to 111111) and traverse each corresponding path, totaling the cost. At the end, select the path with the minimum cost and you have the solution in essentially 64 steps.

But with 14 rows instead of 7, there are 8192 paths to check. 100 rows requires a whopping 633825300114114700748351602688 steps. You could wait a long time for the answer even with the fastest computer.

It's not hard to understand why. Each time we add a row the number of paths doubles. Each of the former paths is replaced by two new ones. One with the last step to the right and the other with the last step to the left.
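The brute-force scheme can be sketched as follows (Python 3). The article's 7-row triangle lives in the figures, so this sketch substitutes a made-up 4-row triangle; each of its 2**3 = 8 paths is traversed by reading the bits of a path number, with '1' meaning a right branch:

```python
# A hypothetical 4-row triangle; the article's 7-row triangle is in the figures.
triangle = [
    [3],
    [7, 4],
    [2, 4, 6],
    [8, 5, 9, 3],
]

rows = len(triangle)
best_cost, best_moves = None, None
for path in range(2 ** (rows - 1)):            # 8 candidate paths
    col, cost, moves = 0, triangle[0][0], ''
    for step in range(rows - 1):
        bit = (path >> step) & 1
        moves += str(bit)                      # record the branch taken
        if bit:                                # '1' means branch right
            col += 1
        cost += triangle[step + 1][col]
    if best_cost is None or cost < best_cost:
        best_cost, best_moves = cost, moves
print(best_cost, best_moves)                   # 16 100
```

Here the cheapest path costs 16: right to the 4, then left twice through 4 and 5.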

Another observation gives some insight into the problem. Look at the '2' in the middle of the 5th row. If we look at the paths (and their costs) from this point to the bottom, we can see that there are four: 2+8+6, 2+8+2, 2+2+2, and 2+2+1.

We can also see that six paths from the top pass through this '2'. And each time that happens, the path to the bottom is recalculated. The same thing is happening all throughout the triangle: the same calculations are done again and again.

Dynamic Solution for Minimum Path

Dividing our problem into subproblems that are easier to solve is done most easily by working from the bottom up to the top. Each square in the bottom row forms a subproblem of just one row, and the optimum (and only) cost is the number in the square.

Going up one row the situation is only slightly more complex. We now have subproblems of two rows. So starting with the "9" on the far left, we decide whether to proceed left or right to the bottom. Actually, in this case, both are equally good. The minimum cost is 15 (9+6).

We want to remember the 15; saving and reusing subproblem results is the heart of Dynamic Programming. So we write 15 in place of the original value (which is actually no longer needed). We have also changed the color of the square to yellow so that we can easily see which squares have been modified.

images/s5.png

Proceeding across the row each square's value is replaced by the minimum path value.

images/s6.png

And once a row is complete we just back up to the next higher row and repeat the process.

images/s7.png

Eventually, we reach the top with the solution written in the top square.

images/s8.png

Now we know the minimum cost, but we've lost the path to get there. There are a couple of ways around this. One would be to write more information at each square, perhaps a tuple containing both the minimum cost from this point and a value indicating whether to proceed left or right. But the direction is easily inferred: just move to the child square with the lower (new) minimum value. If the two children are equal, choose either; whenever this happens it means there are multiple paths from the top with the same minimum cost.
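The bottom-up procedure just described can be sketched in Python 3. As before, a made-up 4-row triangle stands in for the one shown in the figures:

```python
# A hypothetical 4-row triangle standing in for the one in the figures.
triangle = [
    [3],
    [7, 4],
    [2, 4, 6],
    [8, 5, 9, 3],
]

# Work upward from the bottom row, overwriting each square with the
# minimum cost of any path from that square down to the bottom.
cost = [row[:] for row in triangle]        # copy so the triangle survives
for r in range(len(cost) - 2, -1, -1):
    for c in range(len(cost[r])):
        cost[r][c] += min(cost[r + 1][c], cost[r + 1][c + 1])

# Recover a best path by always stepping to the lower-valued child.
c, path = 0, []
for r in range(len(cost) - 1):
    if cost[r + 1][c + 1] < cost[r + 1][c]:
        c += 1
        path.append('right')
    else:
        path.append('left')
print(cost[0][0], path)   # 16 ['right', 'left', 'left']
```

Each square is visited exactly once, so the work grows with the number of squares rather than doubling with every added row.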

The Knapsack Problem

Another classic problem in dynamic programming. This hits pretty close to home for me right now. I'm moving to Europe and will take 2 suitcases along, each of which can weigh no more than 50lbs. I have more than 100lbs of stuff I'd like to take but with the weight limit I have to assign values to items and optimize the value of the contents of the suitcases.

The problem is similar to the minimum path problem, but the breakdown into subproblems is different. The suitcase may hold 50 lbs when empty, but if it is partially full then only the remaining unused weight is available. The subproblems look like the following.

We look at every increment of available weight in the suitcase combined with a set of items (numbered 1 to n) which is optimally loaded. We then see whether item n+1 may be added if there is space, OR whether adding it and bumping an earlier item (or items) results in a better packing.

The tricky part is actually choosing what is best to bump. But the subproblems will be solved in an order that readily provides an answer.

We'll work with an example using 4 items and a knapsack that holds a maximum of 10 kilograms. We can visualize the algorithm with the following tables. The left one shows the items to load along with their weights and values. The table on the right computes and shows the optimal value for each combination of available weight and the set of items 1 to n, where n is the item number in the row of the left table. These optimal values are reused in later computations, as we'll see.

images/ks1.png

Starting with the row for item 1 we scan left to right. Item 1 weighs 5kg so until we have that much available space it can't be added. Then (see pink) we can. So we place a value of 10 (item one's value) into the square and give it the color pink.

Continuing toward the right (see below) nothing changes because we are only working with item one. When the row is finished we start on the next row now with items #1 and #2. From 0kg to 3kg, both items are too heavy, but when the available capacity is 4kg item 2 may be packed.

images/ks2.png

This continues up to a capacity of 9kg when suddenly both items #1 and #2 will fit.

The actual algorithm thinks of this a little differently. When it scans a row it calculates at each column whether to keep the new item or not. Keeping the item necessitates finding the best way to fill the rest of the sack with earlier items. But these "best ways" have already been computed. So where the optimum value pops to 50, it's because keeping item 2 left 5kg still available and the optimal value for 5kg and just item 1 is that very first '10' we loaded.

For columns where the new item is not kept, the optimum value is the one directly above.

images/ks3.png

On to the row for item #3. Until we reach the 6kg column item #3 is simply too big so it is not kept. The value directly above is copied down. Item #3 is also not kept at 6kg since it would leave 0kg available and its value of 30 is not good enough to give up the 40 above. But at 10kg keeping item #3 leaves 4kg available. So we look at the row above in the 4kg column and add that 40 to the 30 for item #3. The 70 is greater than the 50 above so item #3 is kept. We don't know it yet but item #1 has been displaced.

images/ks4.png

Item 4 has both the least weight and most value so we would expect that it is always kept. And it is.

images/ks5.png

Now we will walk somewhat backwards and see what items were kept to give us a final value of 90. We start at the lower right corner. Above the 90 is a 70; since the value was not simply copied down, item #4 was kept. We paint it green.

images/ks6.png

Now with 7kg (10-3) we go one row up (for items #3, #2, #1) in the 7kg column. It appears there was no advantage to keeping item #3 so we paint it yellow. Moving up a row (for items #2 and #1) and still at 7kg we will keep item #2 since the square above has a lower value. That leaves us with 3kg. In the row for item #1 alone 3kg is a no go. So the items kept (in green) are #2 and #4, with 3kg left over.

images/ks7.png
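The whole procedure can be sketched in Python 3. The weights and values below are inferred from the walkthrough (item #1: 5 kg/$10, #2: 4 kg/$40, #3: 6 kg/$30, #4: 3 kg/$50); treat them as an assumption, since the actual tables are in the figures:

```python
# Item data inferred from the walkthrough above (an assumption, not the
# article's original table, which appears only in the figures).
weights = [5, 4, 6, 3]           # items #1..#4
values  = [10, 40, 30, 50]
capacity = 10

n = len(weights)
# best[i][w] = optimal value using items #1..#i with w kg available
best = [[0] * (capacity + 1) for _ in range(n + 1)]
for i in range(1, n + 1):
    for w in range(capacity + 1):
        best[i][w] = best[i - 1][w]              # don't keep item i
        if weights[i - 1] <= w:                  # keep it if it fits better
            keep = values[i - 1] + best[i - 1][w - weights[i - 1]]
            best[i][w] = max(best[i][w], keep)

# Walk backwards: if the value wasn't simply copied down, item i was kept.
kept, w = [], capacity
for i in range(n, 0, -1):
    if best[i][w] != best[i - 1][w]:
        kept.append(i)
        w -= weights[i - 1]
print(best[n][capacity], sorted(kept))   # 90 [2, 4]
```

The final value of 90 and the kept items #2 and #4 match the green squares in the walkthrough, with 3 kg left over.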

If you have comments or suggestions you can email me.

Copyright © 2014-2018 Chris Meyers
