**What Is The Time Complexity Of Dynamic Programming Problems?**

I always find dynamic programming problems interesting. Find a way to reuse something you have already calculated so you don't have to compute it over and over again, and you save substantial computing time.

But just how much computing time are you saving? One of the classic questions you will get in any programming interview is "What is the time and space complexity of your solution?" In other words, if you start ramping up the size of the problem with bigger numbers, will the solution get a lot slower really fast, or will it take really large numbers before the solution becomes too slow to be practical?

The classic way of doing dynamic programming is to use memoization. Memoization (which looks a lot like memorization, but isn't) means storing intermediate answers for later use. You are increasing the amount of space the program uses, but making it run faster because you don't have to calculate the same answer repeatedly.
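As a minimal sketch of the idea, here is a memoized Fibonacci function in Python. The function name and the use of `lru_cache` are my own illustration, not code from this post:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # store every intermediate answer we compute
def fib(n):
    """Fibonacci: exponential time when naive, linear with memoization."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

Without the cache, `fib(n)` recomputes the same sub-problems an exponential number of times; with it, each value of `n` is computed exactly once, at the cost of one cache entry per value.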

**The Solution, If You Want To Skip Over The Rest Of The Post**

The rest of this post goes through an example memoization program to show how it works and to build an intuitive understanding of the time/space complexity. But if you are already familiar with these types of problems and just want the answer, it is that the time and space complexity of dynamic programming problems solved using recursive memoization are nearly always equal to each other, and both are equal to the cost of solving every sub-problem exactly once. So if you figure out the maximum number of sub-problems, that is your answer.

A lot of these problems can be represented on a 2-dimensional grid, so the time complexity to fill every square in the grid is N * M. However, there are problems that are multi-dimensional (3, 4, or more dimensions), or that take additional time to solve a single square, so N * M is not a universal answer.

**An Example Dynamic Programming Problem.**

You have a set of coins of different denominations. You have infinite quantities of each of the denominations. You also have an amount of money to make change for. How many different unique combinations of coins can you use to make exact change?

For example, if you are making change for 4 cents with coins valued 1 and 2 cents, [1, 1, 1, 1] is distinct from [1, 1, 2], which is distinct from [2, 2]. However, [1, 1, 2] is not distinct from [2, 1, 1]. So the solution to this small example is 3 unique ways to make change.

More importantly for this blog post, the question is: when this problem is solved for an arbitrary set of coins and an arbitrary amount to make change for, what is the time and space complexity of that solution, and how can it be generalized?

Here is a larger example to see how we can solve this problem with a recursive algorithm.

For this example, assume that you have 6 cents, and can make change in the denominations 1 cent, 2 cents, or 5 cents.

The recursive solution to this problem is to call the function recursively, branching into two more calls each time:

- On one branch subtract the largest coin from the amount remaining to make change from.
- On the other branch remove the largest coin from the list of available coins used to make change.
- If you run out of coins or get negative change, then that branch is null.
- If you end up with exactly zero money remaining, that is a solution to count.
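The four rules above can be sketched directly in Python. This is a deliberately memoization-free version (the name `count_ways` is mine, and it assumes the coin list is sorted ascending so the largest coin is last):

```python
def count_ways(amount, coins):
    """Count unique coin combinations by brute-force recursion (exponential time)."""
    if amount == 0:
        return 1  # exactly zero remaining: this branch is one valid combination
    if amount < 0 or not coins:
        return 0  # overshot the amount, or ran out of coins: dead branch
    use_largest = count_ways(amount - coins[-1], coins)  # spend the largest coin
    drop_largest = count_ways(amount, coins[:-1])        # retire the largest coin
    return use_largest + drop_largest
```

For the example above, `count_ways(6, (1, 2, 5))` returns 5, but the call tree re-solves the same sub-problems many times over.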

Here is a graph of the recursive branching, without using any memoization.

That was the solution without memoization. This was a small problem, and it quickly blew up into a lot of work, as is the nature of exponential-time solutions. With memoization, the branching looks like this:

In this chart we can see a lot fewer branches, because we re-used some of the data that we had previously computed. Clearly the memoization saved work, since we saved re-computing several sub-problems. So how many sub-problems did we have to solve?

The answer is that we potentially had to solve a sub-problem for every amount of money from 1 to our target (N), combined with every count of available coin denominations (M). This means that the solution will take N * M time.

And this highlights the key to the memoization time/space complexity. For all of these types of problems you generally have to, at least potentially, solve every sub-problem, but you only have to do it one time. Then you can save the result to use later.

This problem had a maximum of 6 different values of change to make (1, 2, 3, 4, 5, 6) and a maximum of 3 different sets of coins: [1, 2, 5], [1, 2], [1]. So we would have a maximum of 18 sub-problems to solve, and it would potentially take 18 blocks of memory to store that information once we solved them. It turns out that we had slightly fewer than 18 sub-problems due to the particular numbers we used, but as a generalization the number of sub-problems is on the order of N (amount of money) * M (number of different coin denominations).

The other good way to remember how much time and memory a recursive memoization solution needs is to realize that the recursion is a top-down solution, but you are solving the exact same set of sub-problems as if you had built the table from the bottom up. Nearly all of these types of dynamic programming problems have a bottom-up solution. (In fact, I've never encountered one that couldn't be solved bottom up, but such problems probably do exist.) Often the bottom-up solution is more difficult algorithmically, but it is easier to see how much time and space it needs.
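For comparison, here is a bottom-up sketch of the same count (the name is mine, not from the post's code). It makes the N * M work explicit as two nested loops:

```python
def count_change_bottom_up(amount, coins):
    """Fill a table of sub-answers from small amounts upward: O(N * M) time."""
    ways = [0] * (amount + 1)
    ways[0] = 1  # one way to make zero: use no coins at all
    for coin in coins:                            # M denominations...
        for total in range(coin, amount + 1):     # ...times N amounts
            ways[total] += ways[total - coin]     # add combinations ending in this coin
    return ways[amount]
```

Note that because each denomination's pass only reads the current row, this version gets away with a single row of N + 1 entries, a common space refinement over storing the full N * M table.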

**If You Want To Solve This Problem Yourself**

If you want to solve the coin change problem yourself, here is some sample input. Note: with memoization in place, these all solved nearly instantaneously on my computer. Without memoization, Example 3 will probably freeze your computer.

Example 1

- Amount of Money: 6
- Coin List: 1, 2, 5
- Number of possible combinations for change: 5

Example 2

- Amount of Money: 20
- Coin List: 1, 2, 4, 6, 8
- Number of possible combinations for change: 94

Example 3

- Amount of Money: 200
- Coin List: 1, 2, 4, 6, 8, 20, 35, 50
- Number of possible combinations for change: 258,822,120

**If You Want My Python Code**

You can download the Python code I used to solve this problem here (as a zip file).