Dynamic programming can help solve some of life's complex problems; it works for math problems, though not for love. Solving these kinds of problems will significantly sharpen your skills and, in the end, save you valuable time. This article will help you understand how to use DP to solve problems. __Algo.Monster__ includes examples because the raw theory is hard to grasp on its own, so we hope that even readers who have never studied this subject can follow along.

## What is dynamic programming?

Before discussing all the details of solving dynamic programming problems, we want to answer a fundamental question.

**What is dynamic programming?**

__Dynamic programming__ refers to solving an optimization problem in stages: the decision made at each stage is determined by the specific nature of the problem, and together these decisions optimize the total benefit of the process under the given conditions. The critical thing to note in applying dynamic programming is the division into phases: based on an analysis of the problem, you must find a reasonable way of dividing it into stages (subproblems). Each subproblem is a much simpler optimization problem than the original one. The optimal result of one subproblem is then used to solve the next, until solving the last subproblem yields the optimal solution of the original problem.

However, dynamic programming does not work for every problem. There are many cases where dynamic programming does not help us improve the running time of the problem. If we are not doing repetitive work, then no amount of caching will make any difference.

To determine if we can optimize a problem with dynamic programming, we can look at two formal criteria for dynamic programming problems.

**How can we determine if it is dynamic programming?**

To know if we can optimize a problem with dynamic programming, it is essential to look at whether it:

- has an optimal substructure.
- has overlapping subproblems.

If a problem meets these two criteria, we know dynamic programming can resolve it.

**Optimal substructure**

The optimal substructure is a core property of dynamic programming problems and recursion in general. If a problem can be solved recursively, then it most likely has an optimal substructure.

Optimal substructure means that you can find the optimal solution to a problem by considering the optimal solution to a subproblem of the problem.

For example, if we are looking for the shortest path in a graph and already know the shortest route from some intermediate point to the endpoint, we can compute the shortest path from the start to the endpoint by building on that partial solution, without recomputing the rest of the route.

**Overlapping subproblems**

Overlapping subproblems are the second fundamental property that our problem must have to be optimized using dynamic programming. Having overlapping subproblems means that we would otherwise have to perform the same computation on the same subproblem more than once.

Imagine that you have a server that caches images. If the same image is requested repeatedly, you will save a lot of time. However, what is the benefit of caching images if no one asks for the same image multiple times? That is what is happening here. If we don’t have overlapping subproblems, nothing stops us from caching the values; it just doesn’t improve our runtime at all. All it does is create more work for us.
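To see overlapping subproblems concretely, here is a small sketch (the function and counter names are ours): a naive recursive Fibonacci instrumented with a counter, showing how often the same subproblem is re-solved when nothing is cached.

```python
from collections import Counter

calls = Counter()

def fib(n):
    # Count how many times each subproblem fib(n) is solved.
    calls[n] += 1
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib(10)
print(calls[2])  # fib(2) alone is solved 34 separate times
```

Every one of those 34 calls to `fib(2)` after the first is wasted work, which is exactly what caching (memoization) eliminates.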

**When should I use dynamic programming?**

To be sure that we can use dynamic programming to solve a problem, we must test for optimal substructures and overlapping subproblems. Without these, we cannot use dynamic programming.

Since its introduction, dynamic programming has been widely used in economic management, production scheduling, engineering, and optimal control. Examples include shortest routes, inventory management, resource allocation, equipment renewal, sequencing, loading, and other problems.

However, we can use heuristics to make a very accurate guess of whether we should consider using dynamic programming. Thus, this quick question can save us a lot of time.

All we have to ask is: can this problem be solved by enumerating combinations, i.e., is it a combinatorial problem?

Consider a few examples.

**The Gold Coin Problem**

Find the minimum number of coins needed to make a specific amount of change. Then, look at all the combinations of coins that add up to that amount and count the minimum number.
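A minimal bottom-up sketch of this coin problem (the function name and the example coin values are our own): `dp[a]` holds the fewest coins needed to make amount `a`, built up from smaller amounts.

```python
def min_coins(coins, amount):
    """Fewest coins summing to `amount`, or -1 if impossible."""
    INF = float("inf")
    dp = [0] + [INF] * amount  # dp[a] = fewest coins making amount a
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1

print(min_coins([1, 5, 11], 15))  # 5 + 5 + 5 -> 3 coins
```

Note that a greedy choice (take 11 first, then four 1s, for 5 coins) fails here; the DP table finds the true optimum of 3.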

**The Knapsack Problem**

Find the maximum value of the items that can fit in your backpack. Then, find all combinations of objects and determine the highest-value combination.
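A compact sketch of this backpack (0/1 knapsack) problem, with our own function name and example data: `dp[w]` is the best value achievable with capacity `w`, and iterating capacities in reverse ensures each item is used at most once.

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack: best total value within the weight capacity."""
    dp = [0] * (capacity + 1)  # dp[w] = best value with capacity w
    for wt, val in zip(weights, values):
        # Reverse order so each item is counted at most once.
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[capacity]

print(knapsack([1, 3, 4], [15, 20, 30], 4))  # items 1 and 2: 15 + 20 = 35
```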

**Path Problem**

Find the number of different paths to the top of the stairs. List all combinations of steps.
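A short sketch of the stairs version of this problem, assuming steps of size 1 or 2 (the function name is ours): the number of ways to reach step `n` is the sum of the ways to reach the two steps below it, so we only need to keep the last two values.

```python
def climb_stairs(n):
    """Ways to climb n stairs taking 1 or 2 steps at a time."""
    ways_prev, ways = 1, 1  # ways to reach step 0 and step 1
    for _ in range(2, n + 1):
        ways_prev, ways = ways, ways + ways_prev
    return ways

print(climb_stairs(4))  # 1+1+1+1, 1+1+2, 1+2+1, 2+1+1, 2+2 -> 5
```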

While this heuristic is not suitable for all dynamic programming problems, it does give you a quick way to examine the issue to decide if you want to dive in.

**Two approaches to dynamic programming**

There are two approaches to dynamic programming. The first is a top-down approach, and the second is a bottom-up approach. Let’s take a closer look at these two approaches.

**What is top-down?**

We start naturally solving the problem and store the solutions of the subproblems along the way. We also use the term memoization, which comes from the word memo.

In other words, we attack the problem naturally from the top and, at each step, check whether the solution to the subproblem has already been computed. If it has not been calculated, we compute it and store it along the way.
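A minimal top-down sketch using Python's standard `functools.lru_cache` as the memo (the Fibonacci example is our own choice): each subproblem is computed once, and repeat calls hit the cache.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # The decorator memoizes: each n is computed only once.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025, returned instantly thanks to memoization
```

Without the cache, `fib(50)` would make billions of recursive calls; with it, only 51 distinct subproblems are ever solved.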

**What is bottom-up?**

We solve the problem from the bottom up, i.e., we first calculate the second term, then the third term, and so on, and finally build the higher terms on top of those values.

Moreover, the order of problem-solving can be flexible according to the needs of the problem and is not fixed. Therefore, we can solve the problem in any desired order.
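The same Fibonacci example written bottom-up, as a sketch (function name is ours): we start from the smallest terms and iterate upward, so no recursion or cache is needed and only the last two values are kept.

```python
def fib_bottom_up(n):
    """Build the answer from the smallest terms upward (tabulation)."""
    if n < 2:
        return n
    prev, curr = 0, 1  # fib(0), fib(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr

print(fib_bottom_up(50))  # same answer as the memoized version, no recursion
```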

Let’s compare the memoization and tabulation methods to see the advantages and disadvantages of both.

**Memoization vs. Tabulation**

Tabulation: Bottom Up

Memoization: Top Down

Memoization follows the natural way of solving a problem, so when dealing with a complex issue, it is usually easier to code the memoized version. Tabulation, however, can be problematic when dealing with many conditions, because the subproblems must be filled in a specific order.

Also, consider the case when we don’t need to find solutions to all subproblems. In this case, we prefer the memoization approach, since it only computes the subproblems actually reached.

However, memoization may cause memory problems, because it stacks up recursive calls while finding solutions to deeper subproblems, which can overflow the call stack.