Jump Game II
- mosesg1123
Jump Game II is a classic follow-up to the original Jump Game problem. It’s not just about whether you can reach the end... now you have to do it in the fewest jumps possible! That small change turns a simple reachability problem into one that tests how well you can optimize greedy strategies or dynamic programming under pressure.
Be sure to read our previous Jump Game post first!
Problem Statement
You are given a 0-indexed array of integers nums where nums[i] represents the maximum jump length from index i.
Your goal is to reach the last index in the minimum number of jumps.
You can assume that you will always be able to reach the last index.
Example 1:
Input: nums = [2,3,1,1,4]
Output: 2
Explanation: Jump 1 step from index 0 to 1, then 3 steps to the last index.
Example 2:
Input: nums = [2,3,0,1,4]
Output: 2
1. Clarify Requirements Before Jumping Into Code
Are all numbers in the array non-negative? → Yes.
Can nums[i] be zero? → Yes.
Is it guaranteed that the last index is reachable? → Yes.
Do we need to return the actual path or just the count of jumps? → Just the count.
2. Identify the Category of the Question
Since this is an evolution of the Jump Game problem, we can assume that the possible solutions may rely on the same techniques: Greedy algorithms and Dynamic Programming.
The question also asks us to minimize something, or to put it another way, to find the "shortest path." That's a potential hint that we may want to consider a BFS-based solution as well.
The key ideas:
We're trying to minimize something, which hints at DP or BFS.
We’re making forward “jumps” with maximum ranges, which brings greedy into play.
Common patterns:
BFS over ranges
Greedy exploration of intervals
Range-based decision-making
3. Consider a Brute‑Force Approach to Understand the Problem
Our brute-force way is to simulate every possible jump from the current index and take the minimum result using recursion:
minJumps(i) = 1 + min(minJumps(j)) for every j from i + 1 to i + nums[i]
This would explore all possible jump paths and select the one with the fewest steps.
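To make that concrete, here is a rough Java sketch of the brute-force recursion (the minJumps helper is just an illustrative name, not part of the problem statement):

public class Solution {
    public int jump(int[] nums) {
        return minJumps(0, nums);
    }

    // Try every possible jump length from index i and keep the smallest result
    private int minJumps(int i, int[] nums) {
        if (i >= nums.length - 1) return 0;       // reached (or passed) the last index
        int best = Integer.MAX_VALUE;
        for (int step = 1; step <= nums[i]; step++) {
            int rest = minJumps(i + step, nums);
            if (rest != Integer.MAX_VALUE) {      // ignore branches that dead-end
                best = Math.min(best, 1 + rest);
            }
        }
        return best;
    }
}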
Time Complexity: Exponential - roughly O(2^n) in the worst case, since we explore every possible combination of jump landing points
Space Complexity: O(n) due to recursion
Clearly not viable for large arrays.
4. Brainstorm More Solutions
Option 1: Top-Down DP
Let's think about how we can optimize our recursive solution. One thing that sticks out is that there are a lot of repeated calculations - we are potentially rechecking the same jumps many times as we work our way down. We know that Dynamic Programming improves over plain recursion when the recursive calls repeat a lot of work. It does that by storing results and reusing them as necessary.
What does that look like? It means passing a memoization array - let's call it memo - down through the recursive calls. Once we've determined the minimum number of jumps from a particular index to the end, we store it in memo and reuse that value instead of recalculating it with another recursive call.
To illustrate, this means our minJumps method will need code that looks something like:
if memo[i] is not null:
    return memo[i]
...
for step from 1 to nums[i]:
    if i + step < length(nums):
        minSteps = min(minSteps, 1 + minJumps(i + step, nums, memo))
memo[i] = minSteps
return memo[i]
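Putting the pieces together, a full top-down version in Java might look something like the sketch below, where memo[i] caches the minimum number of jumps from index i to the end:

public class Solution {
    public int jump(int[] nums) {
        Integer[] memo = new Integer[nums.length];  // memo[i] = min jumps from index i to the end
        return minJumps(0, nums, memo);
    }

    private int minJumps(int i, int[] nums, Integer[] memo) {
        if (i >= nums.length - 1) return 0;         // already at the last index
        if (memo[i] != null) return memo[i];        // reuse a previously computed result

        int minSteps = Integer.MAX_VALUE;
        for (int step = 1; step <= nums[i]; step++) {
            if (i + step < nums.length) {
                int rest = minJumps(i + step, nums, memo);
                if (rest != Integer.MAX_VALUE) {    // skip branches that dead-end
                    minSteps = Math.min(minSteps, 1 + rest);
                }
            }
        }

        memo[i] = minSteps;
        return minSteps;
    }
}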
Time Complexity: O(n²) since we have one recursive call for every index in the array, and every index could potentially be scanning the entire array
Option 2: Bottom-Up DP
The other way to approach a Dynamic Programming problem is bottom-up. We would iterate over our array from front to back, and like in Option 1, we would compute the minimum number of jumps to reach a particular index and store the result in an array (let's call it dp) for future reference.
So we'll initialize dp[0] = 0, since no jumps are needed to start
Then, for every index i, we look back at all the previous indices 0 to i - 1 (we'll represent this with the variable j).
What are we checking as we look back? We check to see if we can reach our current index i from the j position.
If we can reach i from j, then we have an opportunity to potentially update the minimum jump value for that i, which is stored in dp[i]. We just use whichever is smaller: the current minimum jumps stored in dp for position i, or the new minimum jump candidate, dp[j] + 1.
If that's confusing, don't worry - pseudo-code should help to illustrate the idea:
function jump(nums):
    n = length(nums)
    dp = array of size n, initialize all values to ∞
    dp[0] = 0                          // no jumps needed to start at index 0
    for i from 1 to n - 1:
        for j from 0 to i - 1:
            if j + nums[j] >= i:       // we can reach i from j
                dp[i] = min(dp[i], dp[j] + 1)   // check for a new minimum
    return dp[n - 1]
Time Complexity: O(n²) since we've got two nested loops. Unfortunately we haven't really improved on our runtime with this option.
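For reference, a Java version of this bottom-up idea might look something like the following sketch, using Integer.MAX_VALUE to stand in for ∞:

import java.util.Arrays;

public class Solution {
    public int jump(int[] nums) {
        int n = nums.length;
        int[] dp = new int[n];
        Arrays.fill(dp, Integer.MAX_VALUE);  // "infinity": index not yet known to be reachable
        dp[0] = 0;                           // no jumps needed to stand at index 0

        for (int i = 1; i < n; i++) {
            for (int j = 0; j < i; j++) {
                // can we reach i from j, and do we already know how to reach j?
                if (j + nums[j] >= i && dp[j] != Integer.MAX_VALUE) {
                    dp[i] = Math.min(dp[i], dp[j] + 1);
                }
            }
        }
        return dp[n - 1];
    }
}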
Option 3: The Greedy Approach
It's worth taking a moment to review the optimal solution we discovered in our related Jump Game post.
In that solution, our goal was to determine whether it's possible to achieve a maxDistance value that reaches or passes the last index of our array. We can do that in O(n) time with one pass of our array. Let's assume we can reuse that same idea here in some way.
Let's define a variable to help order our thinking: we know we're tracking the farthest index we can currently reach, so call it farthest.
We also know we want to count the number of times we have jumped, so let's give that a variable as well: jumps.
The question is, how do we know when we need to increment our jumps? Or in other words, when do we jump?
We can use a greedy mindset here: jump only when we've gone as far as our current jump allows. We can track that window with another variable, currentEnd, which marks the end of the range we can cover with the jumps taken so far. When our index reaches that boundary (i == currentEnd), we're forced to take another jump, so we increment jumps and extend currentEnd to farthest.
Let's walk through a quick example to illustrate. For [2,3,1,1,4]:
We start with jumps = 0, currentEnd = 0, and farthest = 0.
i = 0
farthest = Math.max(farthest, i + nums[i]) = Math.max(0, 2) = 2
i == currentEnd (both are 0), so
jumps++ (now equals 1)
currentEnd = farthest = 2
i = 1
farthest = Math.max(farthest, i + nums[i]) = Math.max(2, 4) = 4
i does not equal currentEnd 2, so we continue
i = 2
farthest = Math.max(farthest, i + nums[i]) = Math.max(4, 3) = 4
i == currentEnd, so
jumps++ (now equals 2)
currentEnd = farthest = 4
i = 3
farthest = Math.max(farthest, i + nums[i]) = Math.max(4, 4) = 4
i does not equal currentEnd 4, so we continue
The loop stops before the last index, and currentEnd already covers index 4, so we return the value of jumps, which is 2!
Time Complexity: O(n) since we are able to calculate our result in one pass through the array
5. Discuss Trade‑Offs Between Your Solutions
| Approach | Time Complexity | Space Complexity | Pros | Cons |
| --- | --- | --- | --- | --- |
| Brute Force DFS | O(2^n) | O(n) | Simple to understand | Way too slow |
| Memoized Recursion | O(n²) | O(n) | More efficient | Still too slow |
| Bottom-Up DP | O(n²) | O(n) | Structured and iterative | Still too slow |
| Greedy BFS Layering | O(n) | O(1) | Fast, elegant | Slightly tricky to derive |
6. Write Pseudocode to Structure Your Thoughts
function jump(nums):
    n = length(nums)
    jumps = 0
    currentEnd = 0
    farthest = 0
    for i from 0 to n - 2:
        farthest = max(farthest, i + nums[i])
        if i == currentEnd:
            jumps += 1
            currentEnd = farthest
    return jumps
7. Consider Edge Cases
Single-element array → already at the end → 0 jumps
All elements are 1 → linear jumps needed
First element is n-1 → only 1 jump
Large array where an early jump already covers the end → jumps should stay at 1 even though the loop keeps running
8. Write Full Code Syntax (Java)
public class Solution {
    public int jump(int[] nums) {
        int jumps = 0;
        int currentEnd = 0;
        int farthest = 0;

        for (int i = 0; i < nums.length - 1; i++) {
            farthest = Math.max(farthest, i + nums[i]);
            if (i == currentEnd) {        // we've used up the range of the current jump
                jumps++;
                currentEnd = farthest;    // take a new jump and extend the range
            }
        }
        return jumps;
    }
}
9. Test Your Code
public static void main(String[] args) {
    Solution sol = new Solution();
    System.out.println(sol.jump(new int[]{2,3,1,1,4}));               // 2
    System.out.println(sol.jump(new int[]{2,3,0,1,4}));               // 2
    System.out.println(sol.jump(new int[]{1,1,1,1}));                 // 3
    System.out.println(sol.jump(new int[]{1}));                       // 0
    System.out.println(sol.jump(new int[]{5,9,3,2,1,0,2,3,3,1,0,0})); // 3
}
10. Key Lessons to Remember for Future Questions
When minimizing something (jumps, steps, moves), especially when it seems like you can break the problem down into smaller subproblems, Dynamic Programming is a good option to consider.
Pay attention to when to increment a counter - often at the edge of your current range.
Sometimes, you don’t need full DP when a smart greedy strategy does the job.
This pattern (expanding range per step) shows up in other problems like Sliding Window Maximum or Rotting Oranges.
Good Luck and Happy Coding!