Jump Game
- mosesg1123
- 4 days ago
- 7 min read
The "Jump Game" question is a popular one for interviews because it tests your ability to think greedily and work with dynamic movement through an array. It's a great warm-up for range-based greedy logic and helps build intuition for reachability problems, concepts that show up often in competitive coding and systems design.
Problem Statement
You are given an array of non-negative integers nums. Each element nums[i] represents the maximum jump length from that position.
Return true if you can reach the last index, or false otherwise.
Example 1:
Input: nums = [2,3,1,1,4]
Output: true
Explanation: Jump 1 step from index 0 to 1, then 3 steps to the last index.
Example 2:
Input: nums = [3,2,1,0,4]
Output: false
Explanation: You will always arrive at index 3, which has a maximum jump length of 0. There is no way to move forward.
1. Clarify Requirements Before Jumping Into Code
Are all numbers in the array non-negative? → Yes.
Are we allowed to skip indices or must we land on every one? → Skipping is allowed.
Do we always start at index 0? → Yes.
What’s the size of the array? → Up to 10⁴ elements.
2. Identify the Category of the Question
This is a Greedy / Dynamic Programming problem.
The problem involves:
Forward planning based on local jump capacity
Tracking the range of reachability
Making optimal local decisions to achieve a global goal
Common patterns:
Greedy reach updates
Forward simulation
Early exits when progress becomes impossible
3. Consider a Brute‑Force Approach to Understand the Problem
For the brute-force solution, we try every possible path: start at index 0 and recursively attempt every jump available from that position.
This would look like:
From index i:
Try all positions from i+1 to i+nums[i]
For each jump, recurse and try again.
Time Complexity: O(2^n) in the worst case, which is far too inefficient
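A minimal sketch of that recursion in Java might look something like this (the class and helper method names are just illustrative):

public class BruteForceSolution {
    public boolean canJump(int[] nums) {
        return canReachEnd(nums, 0);
    }

    // Returns true if the last index is reachable starting from position i
    private boolean canReachEnd(int[] nums, int i) {
        if (i >= nums.length - 1) return true;        // we are at (or past) the last index
        for (int step = 1; step <= nums[i]; step++) { // try every jump length available at i
            if (canReachEnd(nums, i + step)) return true;
        }
        return false;                                 // no jump from i leads to the end
    }
}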
4. Brainstorm More Solutions
It's hard to think about a solution for a longer array because there are so many different paths to jump from the first index to the last.
But one thing is immediately obvious: our ability to reach the last index depends directly on our ability to reach the indices before it. This brings to mind a common programming technique - Dynamic Programming.
The hallmark of Dynamic Programming is a problem where the solution can be built up from the solution to subproblems. It's sort of like recursion in that sense. Dynamic Programming however is able to optimize on recursion by remembering the solutions to subproblems so that we can avoid having to recalculate answers we've already calculated.
Dynamic Programming can take a bottom-up or a top-down approach.
A top-down approach would be like taking our recursive approach and saving the answers to our subproblems in an array that we carry with us. Then we can reference those answers as needed and avoid recalculating them.
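For instance, a minimal sketch of that memoized top-down idea, assuming a simple memo array that caches whether each index can reach the end, might look like:

public class TopDownSolution {
    // memo[i]: 0 = not computed yet, 1 = last index reachable from i, -1 = not reachable
    private int[] memo;

    public boolean canJump(int[] nums) {
        memo = new int[nums.length];
        return canReachEnd(nums, 0);
    }

    private boolean canReachEnd(int[] nums, int i) {
        if (i >= nums.length - 1) return true;
        if (memo[i] != 0) return memo[i] == 1;        // reuse an answer we already calculated
        for (int step = 1; step <= nums[i]; step++) {
            if (canReachEnd(nums, i + step)) {
                memo[i] = 1;
                return true;
            }
        }
        memo[i] = -1;
        return false;
    }
}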
A bottom-up approach would be like starting at the first index of our array and building our solution up from scratch.
The second approach feels most similar to our brute-force solution and to how we would solve the problem in real life: we would start at the first index and see how far we could jump. So let's start by trying to find a bottom-up solution.
Option 1: DP Approach
So now the question becomes: what is the sub-problem we are trying to solve, and how do we store the result to that sub-problem?
We already know the answer to the first part: like we noted earlier, our ability to reach the last index is directly dependent on our ability to reach the previous indexes. That's our sub-problem.
And since the only result the question requires is a true/false response, that's what we store: a simple boolean in an array, where the value is a true/false to represent whether or not that index is reachable.
So let’s say we have a boolean array dp, where dp[i] indicates whether index i is reachable. Let's try filling it in from the bottom-up.
We know one thing immediately: dp[0] can be set to true, because the first index is always reachable. So let's initialize dp[0] = true and the rest can be false.
What else do we know at index 0? We know how far we can jump from 0: that's the value of nums[0]. So we can go ahead and set all of those indexes to true as well.
In other words, we can set all dp[j] to true where j is all values from i+1 to i+nums[i]
Next we move on to index 1. Now, we already know whether or not index 1 is reachable - we figured that out at index 0. So what do we do with that information?
If index 1 is reachable, then we can simply repeat what we did at index 0: set all dp[j] to true where j is all values from i+1 to i+nums[i] .
For example, if we are at index i = 1 and nums[1] = 2, then i+1 = 2 and i+nums[i] = 3.
So we set all indices from 2 to 3 to true.
But what if dp[i] were false? What does it mean if we find an index that can't be reached?
Well, then we're just done and can return false! Why? Think about the way we are building up our solution:
We are essentially checking every possible path to a particular index, so if we find an unreachable index, that means every possible path to that index came up short.
We know we never skip over an index when we're filling in true values in our dp array, which means that if one index is false, every index after it must also be false.
And we know that the value stored in our unreachable index doesn't matter, since we can't jump to it anyway.
So there we have our logic!
Track reachable indices in an array dp, which we initialize with dp[0] = true and all other values false.
Iterate through our array. If dp[i] = true, set all dp[j] to true where j is all values from i+1 to i+nums[i]
If dp[i] = false, then return false.
Time Complexity: O(n²)
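A rough sketch of that bottom-up approach, assuming a boolean dp array exactly as described above, might look like:

public class BottomUpSolution {
    public boolean canJump(int[] nums) {
        boolean[] dp = new boolean[nums.length];
        dp[0] = true;                                 // the starting index is always reachable
        for (int i = 0; i < nums.length; i++) {
            if (!dp[i]) return false;                 // every possible path to index i came up short
            // Mark every index we can jump to from i as reachable
            for (int j = i + 1; j <= i + nums[i] && j < nums.length; j++) {
                dp[j] = true;
            }
        }
        return true;                                  // we reached the last index without hitting a gap
    }
}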
This works, but whenever we are dealing with an array, we should always ask ourselves: is there a way to solve this in linear time?
Option 2: Greedy Approach
As mentioned in option 1, we are essentially tracking every possible path to every possible index. That's a lot of work. We should ask ourselves: is that work really necessary? Do we really need to check every possible path to an index?
Not really. All we really need to know is whether we can jump far enough to reach the end of the array. As we iterate through our array, every index gives us a new possible jump distance, but that new jump distance only matters if it can get us farther than our previous jump distances. If a previous jump already gets us farther, then the new one isn't really a factor in our solution.
Let's try to formalize our thinking here a bit. It sounds like we want to focus on one key value: the farthest index we're able to reach. Let's call it maxReach.
Our goal, then, is to determine whether maxReach can ever grow to be at least the last index of our array.
Now, what do we do with that maxReach value when we process a new index? We can make one of two choices: keep our previous maxReach; or, if the current element makes it possible to jump farther, use that as our new maxReach.
In other words, we're choosing between maxReach and i + nums[i].
And in true greedy fashion, we pick whichever one is bigger.
But we also need to remember that it's possible to get stuck if all possible paths lead to a dead end: an index where our jump distance is 0, meaning we can't proceed further from that index.
What does that mean in terms of our maxReach value? It means that if at any point i is greater than maxReach, then we've found an index in the array that we can't possibly reach.
And there's our logic!
Initialize maxReach = 0
Iterate through the array
If i > maxReach, we’ve hit a point we can’t reach → return false
Else, update maxReach = max(maxReach, i + nums[i])
If the loop completes, we can reach the end → return true
Let’s walk through [2,3,1,1,4]
i = 0, maxReach = max(0, 0+2) = 2
i = 1, maxReach = max(2, 1+3) = 4
i = 2, maxReach = max(4, 2+1) = 4
i = 3, maxReach = max(4, 3+1) = 4
i = 4, maxReach = max(4, 4+4) = 8
We've reached the end, so we return success!
Time Complexity: O(n) since we only ever visit each element once!
5. Discuss Trade‑Offs Between Your Solutions
Approach | Time Complexity | Space Complexity | Pros | Cons |
Brute Force | O(2^n) | O(n) | Simple to implement | Totally impractical |
DP Table | O(n²) | O(n) | Easier to visualize | Too slow |
Greedy | O(n) | O(1) | Fastest and cleanest | Slightly harder to derive |
6. Write Pseudocode to Structure Your Thoughts
maxReach = 0
for i from 0 to length of nums - 1:
    if i > maxReach:
        return false
    maxReach = max(maxReach, i + nums[i])
return true
7. Consider Edge Cases
Single element array → always reachable
All zeros except first element → depends on first value
Array ending in 0 → still fine as long as some earlier index can reach the last position; we never need to jump past it
max jump lengths of 0 scattered throughout → needs careful reachability check
Very large input arrays → must remain O(n)
8. Write Full Code Syntax (Java)
public class Solution {
    public boolean canJump(int[] nums) {
        int maxReach = 0;                                // farthest index reachable so far
        for (int i = 0; i < nums.length; i++) {
            if (i > maxReach) return false;              // index i can never be reached
            maxReach = Math.max(maxReach, i + nums[i]);  // greedily extend our reach
        }
        return true;
    }
}
9. Test Your Code
public static void main(String[] args) {
    Solution solution = new Solution();
    System.out.println(solution.canJump(new int[]{2,3,1,1,4})); // true
    System.out.println(solution.canJump(new int[]{3,2,1,0,4})); // false
    System.out.println(solution.canJump(new int[]{0}));         // true
    System.out.println(solution.canJump(new int[]{2,0,0}));     // true
    System.out.println(solution.canJump(new int[]{1,0,2}));     // false
}
10. Key Lessons to Remember for Future Questions
Recognizing when a problem can be broken down into smaller subproblems is key to identifying when your solution may rely on techniques like recursion and dynamic programming
Avoid unnecessary calculations! We got to O(n) time because we focused our attention on what mattered most (max reachable index).
Greedy strategies often work well in reachability and range-expansion problems.
Linear scans with constant memory are powerful when paired with greedy insights.
Good Luck and Happy Coding!