Dynamic programming

Dynamic programming is a technique for solving complicated problems that have overlapping subproblems. It works by breaking the problem into smaller subproblems, solving each subproblem, and building up the overall solution from those results. Because the subproblems overlap, we save each solution so that it can be reused instead of recomputed.

Dynamic programming is most often used to solve combinatorial optimization problems, where we are looking for the best possible input to some function, chosen from an exponentially large search space.

There are two parts to dynamic programming. The first part is a programming technique: dynamic programming is essentially divide and conquer run in reverse: we solve a big instance of a problem by breaking it up recursively into smaller instances; but instead of carrying out the computation recursively from the top down, we start from the bottom with the smallest instances of the problem, solving each increasingly large instance in turn and storing the result in a table. The second part is a design principle: in building up our table, we are careful always to preserve alternative solutions we may need later, by delaying commitment to particular choices to the extent that we can.

The bottom-up aspect of dynamic programming is most useful when a straightforward recursion would produce many duplicate subproblems. It is most efficient when we can enumerate a class of subproblems that doesn’t include too many extraneous cases that we don’t need for our original problem.

To take a simple example, suppose that we want to compute the n-th Fibonacci number using the defining recurrence

  • F(n) = F(n − 1) + F(n − 2)
  • F(1) = F(0) = 1.

A naive approach would simply code the recurrence up directly:

int
fib(int n)
{
    if(n < 2) {
        return 1;
    } else {
        return fib(n-1) + fib(n-2);
    }
}

The running time of this procedure is easy to compute. The recurrence is

  • T(n) = T(n − 1) + T(n − 2) + Θ(1),

whose solution is Θ(aⁿ), where a is the golden ratio 1.6180339887498948482…. This is badly exponential.

Memoization

The problem is that we keep recomputing values of fib that we’ve already computed. We can avoid this by memoization, where we wrap our recursive solution in a memoizer that stores previously-computed solutions in a hash table. Sensible programming languages will let you write a memoizer once and apply it to arbitrary recursive functions. In less sensible programming languages it is usually easier just to embed the memoization in the function definition itself, like this:

int
memoFib(int n)
{
    int ret;

    if(hashContains(FibHash, n)) {
        return hashGet(FibHash, n);
    } else {
        ret = memoFib(n-1) + memoFib(n-2);
        hashPut(FibHash, n, ret);
        return ret;
    }
}

The assumption here is that FibHash is a global hash table that we have initialized to map 0 and 1 to 1. The total cost of running this procedure is O(n), because memoFib is called at most twice for each value k in 0…n.

Memoization is a very useful technique in practice, but it is not popular with algorithm designers because computing the running time of a complex memoized procedure is often much more difficult than computing the time to fill a nice clean table. The use of a hash table instead of an array may also add overhead (and code complexity) that comes out in the constant factors. But it is always the case that a memoized recursive procedure considers no more subproblems than a table-based solution, and it may consider many fewer if we are sloppy about what we put in our table (perhaps because we can’t easily predict what subproblems will be useful).
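
When the argument is a small non-negative integer, as in the Fibonacci example, we can get the benefit of memoization without a hash table by memoizing into a plain array. Here is a minimal sketch of this variant; the names memoFibArray, fibTable, and MAXFIB are invented for this example:

#include <assert.h>

#define MAXFIB 45               /* with F(0) = F(1) = 1, F(45) still fits in a 32-bit int */

static int fibTable[MAXFIB+1];  /* 0 marks "not yet computed" */

/* memoized Fibonacci using an array instead of a hash table */
int
memoFibArray(int n)
{
    assert(0 <= n && n <= MAXFIB);

    if(n < 2) {
        return 1;
    } else if(fibTable[n] != 0) {
        return fibTable[n];
    } else {
        fibTable[n] = memoFibArray(n-1) + memoFibArray(n-2);
        return fibTable[n];
    }
}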

Dynamic programming

Dynamic programming comes to the rescue. Because we know what smaller cases we have to reduce F(n) to, instead of computing F(n) top-down, we compute it bottom-up, hitting all possible smaller cases and storing the results in an array:

int
fib2(int n)
{
    int *a;
    int i;
    int ret;
    
    if(n < 2) {
        return 1;
    } else {
        a = malloc(sizeof(*a) * (n+1));
        assert(a);

        a[0] = a[1] = 1;

        for(i = 2; i <= n; i++) {
            a[i] = a[i-1] + a[i-2];
        }
    }

    ret = a[n];
    free(a);
    return ret;
}

Notice the recurrence is exactly the same in this version as in our original recursive version, except that instead of computing F(n-1) and F(n-2) recursively, we just pull them out of the array. This is typical of dynamic-programming solutions: often the most tedious editing step in converting a recursive algorithm to dynamic programming is changing parentheses to square brackets. As with memoization, the effect of this conversion is dramatic; what used to be an exponential-time algorithm is now linear-time.

Example 1: Longest increasing subsequence

Suppose that we want to compute the longest increasing subsequence of an array. This is a sequence, not necessarily contiguous, of elements from the array such that each is strictly larger than the one before it. Since there are 2ⁿ different subsequences of an n-element array, the brute-force approach of trying all of them might take a while.

What makes this problem suitable for dynamic programming is that any prefix of a longest increasing subsequence is a longest increasing subsequence of the part of the array that ends where the prefix ends; if it weren’t, we could make the big sequence longer by choosing a longer prefix. So to find the longest increasing subsequence of the whole array, we build up a table of longest increasing subsequences for each initial prefix of the array. At each step, when finding the longest increasing subsequence of elements 0…i, we can just scan through all the possible values for the second-to-last element and read the length of the best possible subsequence ending there out of the table. When the table is complete, we can scan for the best last element and then work backwards to reconstruct the actual subsequence.

This last step requires some explanation. We don’t really want to store in table[i] the full longest increasing subsequence ending at position i, because it may be very big. Instead, we store the index of the second-to-last element of this sequence. Since that second-to-last element also has a table entry that stores the index of its predecessor, by following the indices we can generate a subsequence of length O(n), even though we only stored O(1) pieces of information in each table entry. This is similar to the parent pointer technique used in graph search algorithms.

Here’s what the code looks like:

/* compute a longest strictly increasing subsequence of an array of ints */
/* input is array a with given length n */
/* returns length of LIS */
/* If the output pointer is non-null, writes LIS to output pointer. */
/* Caller should provide at least sizeof(int)*n space for output */
/* If there are multiple LIS's, which one is returned is arbitrary. */
unsigned long
longest_increasing_subsequence(const int a[], unsigned long n, int *output);

examples/dynamicProgramming/lis/lis.h

#include <stdlib.h>
#include <assert.h>

#include "lis.h"

unsigned long
longest_increasing_subsequence(const int a[], unsigned long n, int *output)
{
    struct lis_data {
        unsigned long length;             /* length of LIS ending at this point */
        unsigned long prev;               /* previous entry in the LIS ending at this point */
    } *table;

    unsigned long best;      /* best entry in table */
    unsigned long scan;      /* used to generate output */

    unsigned long i;            
    unsigned long j;
    unsigned long best_length;

    /* special case for empty table */
    if(n == 0) return 0;

    table = malloc(sizeof(*table) * n);
    assert(table);

    for(i = 0; i < n; i++) {
        /* default best is just this element by itself */
        table[i].length = 1;
        table[i].prev = n;              /* default end-of-list value */

        /* but try all other possibilities */
        for(j = 0; j < i; j++) {
            if(a[j] < a[i] && table[j].length + 1 > table[i].length) {
                /* we have a winner */
                table[i].length = table[j].length + 1;
                table[i].prev = j;
            }
        }
    }

    /* now find the best of the lot */
    best = 0;

    for(i = 1; i < n; i++) {
        if(table[i].length > table[best].length) {
            best = i;
        }
    }

    /* table[best].length is now our return value */
    /* save it so that we don't lose it when we free table */
    best_length = table[best].length;

    /* do we really have to compute the output? */
    if(output) {
        /* yes :-( */
        scan = best;
        for(i = 0; i < best_length; i++) {
            assert(scan >= 0);
            assert(scan < n);

            output[best_length - i - 1] = a[scan];

            scan = table[scan].prev;
        }
    }

    free(table);

    return best_length;
}

examples/dynamicProgramming/lis/lis.c

A sample program that runs longest_increasing_subsequence on a list of numbers passed in by stdin is given in test_lis.c. Here is a Makefile.

Implemented like this, the cost of finding an LIS is O(n²), because to compute each entry in the table, we have to search through all the previous entries to find the longest path that ends at a value less than the current one. This can be improved by using a more clever data structure. If we use a binary search tree that stores paths keyed by their last value, and augment each node with a field that gives the maximum length of any path in the subtree under that node, then we can find the longest feasible path that we can append the current element to in O(log n) time instead of O(n) time. This brings the total cost down to only O(n log n).
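
The augmented-tree structure described above is one way to get this bound. As a rough illustration of an alternative, here is a sketch of the standard binary-search ("patience sorting") variant, which computes only the length of an LIS by keeping, for each length, the smallest value that can end an increasing subsequence of that length. The function name lis_length is invented for this example and is not part of the lis.h interface above:

#include <stdlib.h>
#include <assert.h>

/* Sketch only: length of a longest strictly increasing subsequence in
   O(n log n) time.  tails[k] holds the smallest value that can end an
   increasing subsequence of length k+1 seen so far; tails stays sorted,
   so each new element can be placed by binary search. */
unsigned long
lis_length(const int a[], unsigned long n)
{
    int *tails;
    unsigned long len = 0;      /* number of valid entries in tails */
    unsigned long i;

    if(n == 0) return 0;

    tails = malloc(sizeof(int) * n);
    assert(tails);

    for(i = 0; i < n; i++) {
        /* find the first position lo with tails[lo] >= a[i] */
        unsigned long lo = 0;
        unsigned long hi = len;

        while(lo < hi) {
            unsigned long mid = lo + (hi - lo) / 2;

            if(tails[mid] < a[i]) {
                lo = mid + 1;
            } else {
                hi = mid;
            }
        }

        tails[lo] = a[i];

        if(lo == len) {
            len++;              /* extended the longest subsequence found so far */
        }
    }

    free(tails);

    return len;
}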

Example 2: All-pairs shortest paths

Suppose we want to compute the distance between any two points in a graph, where each edge uv has a length ℓ(uv) ( + ∞ for edges not in the graph) and the distance between two vertices s and t is the minimum, over all s–t paths, of the total length of the edges. There are various algorithms for doing this for a particular s and t, but there is also a very simple dynamic programming algorithm known as Floyd-Warshall that computes the distance between all n² pairs of vertices in Θ(n³) time.

The assumption is that the graph does not contain a negative cycle (a cycle with total edge weight less than zero), so that between any two connected nodes there is always a shortest path that uses each intermediate vertex at most once. If the graph does contain a negative cycle, the algorithm will detect it by reporting a distance from some vertex i to itself that is less than zero.

Negative cycles don’t generally exist in distance graphs (unless you have the ability to move faster than the speed of light), but they can come up in other contexts. One example would be in currency arbitrage, where each node is some currency, the weight of an edge uv is the logarithm of the exchange rate from u to v, and the total weight of a path from s to t gives the logarithm of the number of units of t you can get for one unit of s, since adding the logs along the path corresponds to multiplying all the exchange rates. In this context a negative cycle gives you a way to turn a dollar into less than a dollar by running it through various other currencies, which is not useful, but a positive cycle lets you pay for the supercomputer you bought to find it before anybody else did. If we negate all the edge weights, we turn a positive cycle into a negative cycle, making a fast algorithm for finding this negative cycle potentially valuable.

However, if we don’t have any negative cycles, the idea is that we can create restricted instances of the shortest-path problem by limiting the maximum index of any node used on the path. Let L(i, j, k) be the length of a shortest path from i to j that uses only the vertices 0, …, k − 1 along the path (not counting the endpoints i and j, which can be anything). When k = 0, this is just the length of the ij edge, or  + ∞ if there is no such edge. So we can start by computing L(i, j, 0) for all i and j. Now given L(i, j, k) for all i and j and some k, we can compute L(i, j, k + 1) by observing that any shortest ij path that has intermediate vertices in 0…k either consists of a path with intermediate vertices in 0…k − 1, or consists of a path from i to k followed by a path from k to j, where both of these paths have intermediate vertices in 0…k − 1. So we get

  • L(i, j, k + 1) = min(L(i, j, k), L(i, k, k) + L(k, j, k)).

Since this takes O(1) time to compute if we have previously computed L(i, j, k) for all i and j, we can fill in the entire table in O(n³) time.

Implementation details:

  • If we want to reconstruct the shortest path in addition to computing its length, we can store the first vertex for each ij path. This will either be (a) the first vertex in the ij path for the previous k, or (b) the first vertex in the ik path.
  • We don’t actually need to use a full three-dimensional array. It’s enough to store one value for each pair i, j and let k be implicit. At each step we let L[i][j] be min (L[i][j], L[i][k] + L[k][j]). The trick is that we don’t care if L[i][k] or L[k][j] has already been updated, because that will only give us paths with a few extra k vertices, which won’t be the shortest paths anyway assuming no negative cycles.
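
Putting these pieces together, here is a minimal sketch of Floyd-Warshall using the two-dimensional table from the second point above. The names floydWarshall and DIST_INF are invented for this example; the graph is supplied as an n-by-n matrix of edge lengths, with DIST_INF standing in for  + ∞ and d[i][i] assumed to be 0:

#define DIST_INF (1e12)         /* stands in for +infinity; chosen so DIST_INF + DIST_INF does not overflow a double */

/* Sketch: on entry, d[i][j] is the length of edge ij (DIST_INF if absent)
   and d[i][i] is 0; on return, d[i][j] is the shortest-path distance from
   i to j.  Returns 1 if a negative cycle was detected, 0 otherwise. */
int
floydWarshall(int n, double d[n][n])
{
    int i, j, k;

    for(k = 0; k < n; k++) {
        for(i = 0; i < n; i++) {
            for(j = 0; j < n; j++) {
                /* allow paths that use k as an intermediate vertex */
                if(d[i][k] + d[k][j] < d[i][j]) {
                    d[i][j] = d[i][k] + d[k][j];
                }
            }
        }
    }

    /* a negative cycle shows up as a negative distance from some i to itself */
    for(i = 0; i < n; i++) {
        if(d[i][i] < 0.0) {
            return 1;
        }
    }

    return 0;
}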

Example 3: Longest common subsequence

Given sequences of characters v and w, v is a subsequence of w if every character in v appears in w in the same order. For example, aaaaa, brac, and badar are all subsequences of abracadabra, but badcar is not. A longest common subsequence (LCS for short) of two sequences x and y is the longest sequence that is a subsequence of both: two longest common subsequences of abracadabra and badcar are badar and bacar.

As with longest increasing subsequence, one can find the LCS of two sequences by brute force, but it will take even longer. Not only are there 2ⁿ subsequences of a sequence of length n, but checking each subsequence of the first to see if it is also a subsequence of the second may take some time. It is better to solve the problem using dynamic programming. Having sequences gives an obvious linear structure to exploit: the basic strategy will be to compute LCSs for increasingly long prefixes of the inputs. But with two sequences we will have to consider prefixes of both, which gives us a two-dimensional table where rows correspond to prefixes of sequence x and columns correspond to prefixes of sequence y.

The recursive decomposition that makes this technique work looks like this. Let L(x, y) be the length of the longest common subsequence of x and y, where x and y are strings. Let a and b be single characters. Then L(xa, yb) is the maximum of:

  • L(x, y) + 1, if a = b,
  • L(xa, y), or
  • L(x, yb).

The idea is that we either have a new matching character we couldn’t use before (the first case), or we have an LCS that doesn’t use one of a or b (the remaining cases). In each case the recursive call to LCS involves a shorter prefix of xa or yb, with an ultimate base case L(x, y) = 0 if at least one of x or y is the empty string. So we can fill in these values in a table, as long as we are careful to make sure that the shorter prefixes are always filled in first. If we are smart about remembering which case applies at each step, we can even go back and extract an actual LCS, by stitching together the places where a = b. Here’s a short C program that does this:

#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include <string.h>
#include <limits.h>

/* compute longest common subsequence of argv[1] and argv[2] */

/* computes longest common subsequence of x and y, writes result to lcs */
/* lcs should be pre-allocated by caller to 1 + minimum length of x or y */
void
longestCommonSubsequence(const char *x, const char *y, char *lcs)
{
    int xLen;
    int yLen;
    int i;             /* position in x */
    int j;             /* position in y */

    xLen = strlen(x);
    yLen = strlen(y);

    /* if either string is empty, the LCS is the empty string */
    if(xLen == 0 || yLen == 0) {
        lcs[0] = '\0';
        return;
    }

    /* best choice at each position */
    /* length gives length of LCS for these prefixes */
    /* prev points to previous substring */
    /* newChar if non-null is new character */
    struct choice {
        int length;
        struct choice *prev;
        char newChar;
    } best[xLen][yLen];

    for(i = 0; i < xLen; i++) {
        for(j = 0; j < yLen; j++) {
            /* we can always do no common substring */
            best[i][j].length = 0;
            best[i][j].prev = 0;
            best[i][j].newChar = 0;

            /* if we have a match, try adding new character */
            /* this is always better than the nothing we started with */
            if(x[i] == y[j]) {
                best[i][j].newChar = x[i];
                if(i > 0 && j > 0) {
                    best[i][j].length = best[i-1][j-1].length + 1;
                    best[i][j].prev = &best[i-1][j-1];
                } else {
                    best[i][j].length = 1;
                }
            }

            /* maybe we can do even better by ignoring a new character */
            if(i > 0 && best[i-1][j].length > best[i][j].length) {
                /* throw away a character from x */
                best[i][j].length = best[i-1][j].length;
                best[i][j].prev = &best[i-1][j];
                best[i][j].newChar = 0;
            }

            if(j > 0 && best[i][j-1].length > best[i][j].length) {
                /* throw away a character from y */
                best[i][j].length = best[i][j-1].length;
                best[i][j].prev = &best[i][j-1];
                best[i][j].newChar = 0;
            }

        }
    }

    /* reconstruct string working backwards from best[xLen-1][yLen-1] */
    int outPos;        /* position in output string */
    struct choice *p;  /* for chasing linked list */

    outPos = best[xLen-1][yLen-1].length;
    lcs[outPos--] = '\0';

    for(p = &best[xLen-1][yLen-1]; p; p = p->prev) {
        if(p->newChar) {
            assert(outPos >= 0);
            lcs[outPos--] = p->newChar;
        }
    }
}

int
main(int argc, char **argv)
{
    if(argc != 3) {
        fprintf(stderr, "Usage: %s string1 string2\n", argv[0]);
        return 1;
    }

    char output[strlen(argv[1]) + 1];

    longestCommonSubsequence(argv[1], argv[2], output);

    printf("\"%s\" (%zu characters)\n", output, strlen(output));

    return 0;
}

examples/dynamicProgramming/lcs/lcs.c

The whole thing takes O(nm) time, where n and m are the lengths of x and y.

