# House Robber Problems

**Translator**: cooker

**Author**: labuladong

I find that this series of problems is highly praised. They are representative and skillful dynamic programming problems. Today I will introduce a generalized method to solve all of them.

The House Robber series includes three problems whose difficulty is designed very reasonably and progressively. The first (House Robber) is a standard dynamic programming problem.

The second (House Robber II) incorporates the condition of a circular array.

The third (House Robber III) is quite amazing: it combines the bottom-up and top-down solutions of dynamic programming on a binary tree. If you haven't done them yet, I highly recommend this series of problems.

## House Robber I

The problem is easy to understand, and the characteristics of dynamic programming are quite obvious. As we summarized before in [Dynamic Programming Detailed Explanation], **the key to solving a dynamic programming problem is to find the [state] and the [choice]**.

Imagine that you are this professional robber, walking through this row of houses from left to right. In front of each house you face two **choices**: `rob` or `not rob`.

`rob`: If you rob this house, then you **definitely** can't rob the adjacent house; you can only continue robbing from the house after next.

`not rob`: If you don't rob this house, then you can walk to the next house and continue making choices.

Once you have walked past the last house, there is nothing left to rob, so the money you can get is obviously 0 (**base case**).

The above logic is very simple. In fact, the **state** and **choice** are already clearly defined: **the index of the house in front of you is the state, and whether to rob it or not is the choice**.

At each house you take the larger of the two choices, and you end up with the most money you can rob:
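The code the article refers to is not reproduced above, so here is a minimal Python sketch of this recursion (the names `rob` and `dp` are my own):

```python
def rob(nums):
    # dp(i): the most money you can rob starting from house i
    def dp(i):
        if i >= len(nums):
            return 0  # base case: walked past the last house
        return max(
            dp(i + 1),            # not rob: move on to the next house
            nums[i] + dp(i + 2),  # rob: must skip the adjacent house
        )

    return dp(0)
```

For `nums = [1, 2, 3, 1]` this returns 4 (rob house 0 and house 2).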

The thief can reach the same position along multiple paths. Wouldn't it be a waste of time to recurse all over again each time? So there are overlapping subproblems, which can be optimized with a memo:
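A sketch of the memoized version, assuming the same recursive structure as above (a dictionary `memo` keyed by house index is my own choice):

```python
def rob(nums):
    memo = {}  # caches the answer for each starting index

    def dp(i):
        if i >= len(nums):
            return 0
        if i not in memo:
            # same two choices, but each subproblem is solved only once
            memo[i] = max(dp(i + 1), nums[i] + dp(i + 2))
        return memo[i]

    return dp(0)
```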

This is the top-down dynamic programming solution. We can also modify it slightly and write the **bottom-up** solution:
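A bottom-up sketch under the same state definition (`dp[i]` is the most money obtainable starting from house `i`; the extra two slots hold the base case):

```python
def rob(nums):
    n = len(nums)
    # dp[n] and dp[n + 1] are the base case: past the last house, 0 money
    dp = [0] * (n + 2)
    for i in range(n - 1, -1, -1):
        dp[i] = max(dp[i + 1], nums[i] + dp[i + 2])
    return dp[0]
```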

We also notice that the state transition of `dp[i]` only involves the two most recent states, so it can be further optimized to reduce the space complexity to O(1).
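A sketch of that O(1)-space optimization, keeping only the two recent states in scalar variables (variable names are my own):

```python
def rob(nums):
    dp_i = 0            # dp[i]
    dp_i1 = dp_i2 = 0   # dp[i + 1] and dp[i + 2], the only states needed
    for i in range(len(nums) - 1, -1, -1):
        dp_i = max(dp_i1, nums[i] + dp_i2)
        dp_i2, dp_i1 = dp_i1, dp_i  # slide the two-state window
    return dp_i
```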

The above process was explained in detail in [Dynamic Programming Detailed Explanation], and I believe everyone can grasp it. I think the next problem is more interesting: we need to make some clever changes based on the current idea.

## House Robber II

This problem is basically the same as the first one: the robber still cannot rob adjacent houses and the input is still an array, but now the houses are arranged not in a row but in a **circle**.

In other words, the first house and the last house are also adjacent, so they cannot both be robbed. For example, given the input array `nums = [2,3,2]`, the algorithm should return 3 instead of 4, because the first and last houses can't be robbed at the same time.

This constraint doesn't seem hard to handle. We mentioned a general strategy for circular arrays in [Solving Next Greater Number with a Monotonic Stack]. So how do we deal with this problem?

First of all, since the first and last houses cannot both be robbed, there are only three possible situations: case I, neither of them is robbed; case II, the first house is robbed and the last one is not; case III, the first house is not robbed and the last one is.

That's easy: the answer is the maximum of these three cases. In fact, we don't even need to compare all three, only case II and case III. **Because those two cases offer more houses to choose from than case I, and the money in each house is non-negative, the optimal result certainly isn't smaller when there are more choices.**

So just modify the previous solution slightly:
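Following that idea, a Python sketch might restrict the linear-array solution to a closed index range and take the better of case II and case III (the helper name `rob_range` is my own):

```python
def rob(nums):
    n = len(nums)
    if n == 1:
        return nums[0]

    # the linear-array solution, restricted to the closed range [start, end]
    def rob_range(start, end):
        dp_i = dp_i1 = dp_i2 = 0
        for i in range(end, start - 1, -1):
            dp_i = max(dp_i1, nums[i] + dp_i2)
            dp_i2, dp_i1 = dp_i1, dp_i
        return dp_i

    # case II: keep the first house, drop the last;
    # case III: drop the first house, keep the last
    return max(rob_range(0, n - 2), rob_range(1, n - 1))
```

For `nums = [2,3,2]` this returns 3, as required by the circular constraint.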

At this point, the second problem has also been solved.

## House Robber III

The overall thinking hasn't changed at all: at each node we still choose between robbing and not robbing, and take the choice with the higher return. We can even write the code directly following this routine:
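A sketch of that routine in Python, with a memo keyed by node (the `TreeNode` class is the usual LeetCode definition; names are my own):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def rob(root):
    memo = {}  # memoize by node to remove overlapping subproblems

    def dp(node):
        if node is None:
            return 0
        if node in memo:
            return memo[node]
        # choice 1: rob this node, then skip the children
        rob_it = node.val
        if node.left:
            rob_it += dp(node.left.left) + dp(node.left.right)
        if node.right:
            rob_it += dp(node.right.left) + dp(node.right.right)
        # choice 2: skip this node, continue from the children
        skip_it = dp(node.left) + dp(node.right)
        memo[node] = max(rob_it, skip_it)
        return memo[node]

    return dp(root)
```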

This solves the problem with time complexity O(N), where `N` is the number of nodes.

But what makes me think this problem is clever is that it has an even more beautiful solution. For example, here is one I saw in the comments:
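Since the commented solution itself isn't shown above, here is a sketch of that style of solution: the recursive function is redefined to return a pair of results, one for each choice at the current node, so no memo is needed (the pair layout and names are my own):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def rob(root):
    # dp(node) returns a pair (skip, take):
    #   skip: the most money if this node is NOT robbed
    #   take: the most money if this node IS robbed
    def dp(node):
        if node is None:
            return 0, 0
        left = dp(node.left)
        right = dp(node.right)
        # rob this node: both children must be skipped
        take = node.val + left[0] + right[0]
        # skip this node: each child is free to be robbed or not
        skip = max(left) + max(right)
        return skip, take

    return max(dp(root))
```

Each node is visited exactly once, and all the information a parent needs is carried in the returned pair.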

The time complexity is O(N), and the space complexity is only what the recursion stack requires; no extra space is needed for a memo.

His thinking is different from ours: he modified the definition of the recursive function and adjusted the logic slightly so that it remains self-consistent, still gets the correct answer, and the code is more beautiful. This is the characteristic of dynamic programming problems we mentioned earlier in [Different Definitions Generate Different Solutions].

In fact, this solution runs much faster than ours, even though the asymptotic time complexity is the same. The reason is that it doesn't use an extra memo, which cuts down the overhead of data manipulation, so the actual running efficiency is higher.
