There are some problems that are practical to solve if the problem is small but extremely time consuming as the scale of the problem grows larger.

Take, for example, the following triangle. The problem is to start at the top and find the path to the bottom with the least cost. The cost of each step is indicated by the number there, and the total cost is the sum of the costs at each step. At each step you may choose to go down either to the right or to the left.

There are seven rows in the triangle, so six steps will always take you to the bottom. The number of different paths is 64 (Why?). And so each path can be represented by a 6 digit binary number where each bit simply indicates whether we go to the right or to the left from our current location. We could choose a convention that '1' indicates a right branch and '0' indicates a left branch.

Here are two paths shown in green for the triangle above.

For the path on the left the binary number is 111010. On the right 100111.

One perfectly reasonable approach to solve the problem of finding the least cost path might be to simply generate the 64 numbers (from binary 000000 to 111111) and traverse each corresponding path, totaling the cost. At the end, select the path with the minimum cost and you have the solution in essentially 64 steps.
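The brute-force idea can be sketched as follows. The triangle here is a small hypothetical example (the triangle in the figures is not reproduced in the text); each path is a binary number whose bit i says whether step i branches right (1) or left (0).

```python
# A hypothetical 4-row triangle, standing in for the one in the figures.
triangle = [
    [3],
    [7, 4],
    [2, 4, 6],
    [8, 5, 9, 3],
]

def brute_force_min_cost(tri):
    """Try all 2**(rows-1) paths; bit i of the path number picks
    left (0) or right (1) at step i."""
    steps = len(tri) - 1
    best = None
    for path in range(2 ** steps):
        col = 0
        cost = tri[0][0]
        for step in range(steps):
            col += (path >> step) & 1    # a 1 bit means branch right
            cost += tri[step + 1][col]
        if best is None or cost < best:
            best = cost
    return best

print(brute_force_min_cost(triangle))   # 16 for this triangle
```

With 4 rows this is only 8 paths; the point is that the loop count doubles with every row added.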

But with 14 rows instead of 7, there are 8192 paths to check. 100 rows requires a whopping 633825300114114700748351602688 steps. You could wait a long time for the answer even with the fastest computer.

It's not hard to understand why. Each time we add a row, the number of paths doubles: each of the former paths is replaced by two new ones, one with the last step to the right and the other with the last step to the left.

Another observation gives some insight into the problem. Look at the '2' in the middle of the 5th row. If we look at the paths (and their costs) from this point to the bottom we can see that there are four. (2+8+6, 2+8+2, 2+2+2, and 2+2+1)

We can also see that six paths from the top pass through this '2'. And each time that happens, the cost to the bottom is recalculated. The same thing is happening all throughout the triangle: the same calculations are happening again and again.

Dynamic Programming gets around issues like this with the following:

- The overall problem is divided into subproblems which, being smaller, can be solved exponentially faster.
- The solutions to subproblems are combined to form solutions to larger subproblems and eventually the overall problem.
- As solutions to subproblems are computed, the answers are saved; if the solution to the same subproblem is needed later as part of a larger (outer) subproblem (or the whole problem), the answer is retrieved rather than recomputed.
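The third bullet — save each answer and reuse it — can be illustrated with top-down memoization. This is not the article's bottom-up procedure (which comes next), just a sketch of the caching idea on a hypothetical 4-row triangle:

```python
from functools import lru_cache

# A hypothetical 4-row triangle, standing in for the one in the figures.
triangle = [
    [3],
    [7, 4],
    [2, 4, 6],
    [8, 5, 9, 3],
]

@lru_cache(maxsize=None)
def min_cost(row, col):
    """Minimum cost from (row, col) to the bottom. Each result is
    cached, so a square shared by many paths is computed only once."""
    if row == len(triangle) - 1:
        return triangle[row][col]
    return triangle[row][col] + min(min_cost(row + 1, col),
                                    min_cost(row + 1, col + 1))

print(min_cost(0, 0))   # 16 for this triangle
```

Without the cache this recursion would redo the shared subproblems, exactly the waste described above; with it, each square is evaluated once.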

Dividing our problem into subproblems that are easier to solve is done most easily by working from the bottom up to the top. Each square in the bottom row forms a subproblem of just one row, and the optimum (only) cost is the number in the square.

Going up one row, the situation is only slightly more complex. We now have subproblems of two rows. So starting with the "9" on the far left, we decide whether to proceed left or right to the bottom. Actually, in this case, both are equally good. The minimum cost is 15 (9+6).

We want to remember the 15 (see the bullet list above). So we write 15 in place of the original value (which is actually no longer needed). We have also changed the color of the square to yellow so that we can easily see which squares have been modified.

Proceeding across the row each square's value is replaced by the minimum path value.

And once a row is complete we just back up to the next higher row and repeat the process.

Eventually, we reach the top with the solution written in the top square.
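The whole bottom-up pass can be sketched in a few lines. Again the triangle is a hypothetical 4-row example; this is my reading of the procedure above, not the project's minpath.py itself:

```python
def min_path_cost(tri):
    """Bottom-up DP: starting from the second-to-last row and working
    upward, replace each square with the minimum cost from that square
    to the bottom. The answer ends up in the top square."""
    rows = [row[:] for row in tri]          # copy, so the input survives
    for r in range(len(rows) - 2, -1, -1):
        for c in range(len(rows[r])):
            rows[r][c] += min(rows[r + 1][c], rows[r + 1][c + 1])
    return rows[0][0]

triangle = [
    [3],
    [7, 4],
    [2, 4, 6],
    [8, 5, 9, 3],
]
print(min_path_cost(triangle))   # 16
```

Note the cost: one addition and one comparison per square, so the work grows with the *number of squares* (about n²/2 for n rows) rather than with the 2ⁿ⁻¹ paths.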

Now we know the minimum cost, but we've lost the path to get there. There are a couple of ways around this. One would be to write more information at each square, perhaps a tuple containing both the minimum cost from this point and a value indicating whether to proceed left or right. But this is easily inferred: just choose the child with the lower (new) minimum value. If they are the same, choose one at random; whenever this happens it means there are multiple paths from the top with the same minimum cost.
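The "easily inferred" recovery looks like this: build the table as before, then walk down from the top choosing the cheaper child at each step. (Same hypothetical 4-row triangle; the path is reported as the column index at each row, and ties simply go left.)

```python
def min_path(tri):
    """Return (minimum cost, path) where path lists the column
    visited in each row from top to bottom."""
    rows = [row[:] for row in tri]
    for r in range(len(rows) - 2, -1, -1):
        for c in range(len(rows[r])):
            rows[r][c] += min(rows[r + 1][c], rows[r + 1][c + 1])
    path, col = [0], 0
    for r in range(1, len(rows)):
        # step toward whichever child carries the smaller remaining cost
        if rows[r][col + 1] < rows[r][col]:
            col += 1
        path.append(col)
    return rows[0][0], path

triangle = [
    [3],
    [7, 4],
    [2, 4, 6],
    [8, 5, 9, 3],
]
print(min_path(triangle))   # (16, [0, 1, 1, 1])
```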

Here is another classic problem in dynamic programming. It hits pretty close to home for me right now. I'm moving to Europe and will take 2 suitcases along, each of which can weigh no more than 50lbs. I have more than 100lbs of stuff I'd like to take, but with the weight limit I have to assign values to items and optimize the value of the contents of the suitcases.

The problem is similar to the minimum path problem, but the breakdown into subproblems is different. The suitcase may hold 50lbs when empty, but if it is partially full, only the unused weight is still available. The subproblems look like the following.

We look at every increment of available weight in the suitcase combined with a set of items (numbered 1 to n) that is optimally loaded. We then see whether item n+1 may be added if there is space, OR whether adding it and bumping an earlier item (or items) results in a better (more optimal) packing.

The tricky part is actually choosing what is best to bump. But the subproblems will be solved in an order that readily provides an answer.

We'll work with an example using 4 items and a knapsack that holds a maximum
of 10 kilograms. We can view the algorithm with the following. We have two tables,
the left one showing the items to load along with their weights and values. The table
on the right computes and shows optimal values for each combination of *available*
weight and a set of items from 1 to n, where n is the item number of the row in the left table. These optimal values are reused in later computations, as we'll see.

Starting with the row for item 1 we scan left to right. Item 1 weighs 5kg so until we have that much available space it can't be added. Then (see pink) we can. So we place a value of 10 (item one's value) into the square and give it the color pink.

Continuing toward the right (see below) nothing changes because we are only working with item one. When the row is finished we start on the next row now with items #1 and #2. From 0kg to 3kg, both items are too heavy, but when the available capacity is 4kg item 2 may be packed.

This continues up to a capacity of 9kg when suddenly both items #1 and #2 will fit.

The actual algorithm (see code) thinks of this a little differently. When it scans a row it calculates at each column whether to keep the new item or not. Keeping the item necessitates finding the best way to fill the rest of the sack with earlier items. But these "best ways" have already been computed. So where the optimum value pops to 50, it's because keeping item 2 left 5kg still available and the optimal value for 5kg and just item 1 is that very first '10' we loaded.

For columns where the new item is not kept, the optimum value is the one directly above.

On to the row for item #3. Until we reach the 6kg column item #3 is simply too big so it is not kept. The value directly above is copied down. Item #3 is also not kept at 6kg since it would leave 0kg available and its value of 30 is not good enough to give up the 40 above. But at 10kg keeping item #3 leaves 4kg available. So we look at the row above in the 4kg column and add that 40 to the 30 for item #3. The 70 is greater than the 50 above so item #3 is kept. We don't know it yet but item #1 has been displaced.

Item 4 has both the least weight and the most value, so we would expect that it is always kept. And it is.
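The table-filling described above can be sketched as follows. The weights and values are my reconstruction from the walkthrough (item 1: 5kg/value 10, item 2: 4kg/40, item 3: 6kg/30, item 4: 3kg/50), not taken from the project's knapsack.py:

```python
def knapsack_table(weights, values, capacity):
    """table[i][w] is the best value achievable with items 1..i and
    w kg of available weight. Row 0 (no items) is all zeros."""
    n = len(weights)
    table = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            table[i][w] = table[i - 1][w]        # don't keep item i
            if weights[i - 1] <= w:
                # keeping item i leaves w - weight available for the
                # earlier items, whose best value is already computed
                keep = values[i - 1] + table[i - 1][w - weights[i - 1]]
                table[i][w] = max(table[i][w], keep)
    return table

table = knapsack_table([5, 4, 6, 3], [10, 40, 30, 50], 10)
print(table[4][10])   # 90, the final value in the lower right corner
```

The jump to 70 in item #3's row and the final 90 both come out of the `keep` branch, exactly as traced above.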

Now we will walk somewhat backwards and see which items were kept to give us a final value of 90. We start at the lower right corner. Above the 90 is a 70; since the 90 was not simply copied down, item #4 was kept. We paint it green.

Now with 7kg (10-3) we go one row up (for items #3, #2, #1) in the 7kg column. It appears there was no advantage to keeping item #3, so we paint it yellow. Moving up a row (for items #2 and #1) and still at 7kg, we keep item #2 since the square above has a lower value. That leaves us with 3kg. In the row for item #1 alone, 3kg is a no go. So the items kept (in green) are #2 and #4, with 3kg left over.
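That backward walk is mechanical: whenever a square differs from the one directly above it, the item for that row was kept. A self-contained sketch, again using my reconstructed weights and values from the walkthrough:

```python
def build_table(weights, values, capacity):
    # Standard 0/1 knapsack table: best value for items 1..i and w kg free.
    t = [[0] * (capacity + 1) for _ in range(len(weights) + 1)]
    for i, (wt, val) in enumerate(zip(weights, values), start=1):
        for w in range(capacity + 1):
            t[i][w] = t[i - 1][w]
            if wt <= w:
                t[i][w] = max(t[i][w], val + t[i - 1][w - wt])
    return t

def kept_items(table, weights, capacity):
    """Walk backwards from the lower right corner. If a square's value
    differs from the one directly above, that row's item was kept, and
    we give up its weight before moving up."""
    kept, w = [], capacity
    for i in range(len(weights), 0, -1):
        if table[i][w] != table[i - 1][w]:
            kept.append(i)                  # 1-based item number
            w -= weights[i - 1]
    return sorted(kept), w                  # items kept, leftover kg

weights, values = [5, 4, 6, 3], [10, 40, 30, 50]
t = build_table(weights, values, 10)
print(kept_items(t, weights, 10))   # ([2, 4], 3): items #2 and #4, 3kg left
```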

These visualizations were built in another Python for Fun project. For now you can see minpath in action or knapsack in action, each as a series of screen shots. Just click the forward (and previous) buttons to step through the solution. There is also a slider to move rapidly across the screen shots.

*Elsewhere* is the SPT project (Standalone Python Tutor)
where you see how such visualizations are created. It includes both
minpath.py and
knapsack.py
which contain the logic above as well as code to produce the HTML snapshots.

Copyright © 2014-2015 Chris Meyers