Longest Increasing Subsequence


When Greedy Intuition Fails

Longest Increasing Subsequence sounds like a greedy problem. Just keep taking bigger numbers, right? That instinct fails fast. This problem exists to test whether you can slow down, recognize why greedy breaks, define state correctly, and build up a solution from first principles.


Problem Statement

Given an integer array nums, return the length of the longest strictly increasing subsequence.

A subsequence is derived from the array by deleting some or no elements without changing the order of the remaining elements. The subsequence does not need to be contiguous.


Examples:

  • [10, 9, 2, 5, 3, 7, 101, 18] → 4 (the subsequence [2, 5, 7, 101])

  • [0, 1, 0, 3, 2, 3] → 4 ([0, 1, 2, 3])

  • [7, 7, 7, 7] → 1 ([7])


1. Clarify Requirements Before Jumping Into Code

Before solving, make sure the rules are precise.

Input: An integer array nums.

Output: The length of the longest strictly increasing subsequence.


Key clarifications:

  • Strictly increasing - equal elements don't count.

  • Elements can be skipped - the subsequence doesn't need to be contiguous.

  • Empty array returns 0.

  • Return the length only, not the subsequence itself.

That last clarification matters: returning just the length is simpler than reconstructing the actual subsequence. Don't solve a harder problem than you're asked.


2. Identify the Category of the Problem

Our first instinct might be to reach for a greedy solution. We could scan left to right and greedily extend the current sequence whenever possible. Let's think about why that fails.


Consider [3, 5, 2, 7]. Greedy would take 3, then 5 (larger), then skip 2 (smaller), then 7 - giving [3, 5, 7] of length 3. That's correct here.

Now try [5, 1, 4, 2, 3]. Greedy takes 5, skips 1 (smaller), skips 4 (smaller than 5), skips 2, and skips 3 - giving just [5], length 1. But [1, 2, 3] has length 3. Greedy failed because it committed to a subsequence starting with 5 too early, closing off a much longer path that started smaller.


This is the core issue with greedy: committing to a larger element early can block longer subsequences that branch from smaller subsequent elements. The right choice at position i depends on what comes later - and greedy has no way to know that.


When locally optimal choices don't lead to globally optimal solutions, that's a good signal to consider a Dynamic Programming solution. We've also got another good signal for DP - overlapping subproblems where the choice at each index depends on earlier choices.


3. Start With Brute Force to Build Intuition

The brute force solution is to generate every possible subsequence (each element is either included or skipped for 2ⁿ possibilities), check which ones are strictly increasing, and return the length of the longest.

For n = 20 that's over a million subsequences. For n = 40 it's over a trillion. Completely infeasible.


But the brute force reveals something important: every element makes a binary choice - extend some previous subsequence, or start a new one. That structure of overlapping subproblems, where the choice at each index depends on earlier choices, is exactly what DP is designed for.
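
The include/skip branching described above can be sketched as a short recursive helper. This is an illustrative sketch under my own naming (BruteForceLIS, explore) rather than part of the article's solution, and it assumes Integer.MIN_VALUE never appears in the input (it's used as the "no previous element" sentinel):

```java
public class BruteForceLIS {

    // Try every subsequence: at each index, either skip nums[i] or,
    // if it keeps the sequence strictly increasing, take it.
    public static int lis(int[] nums) {
        return explore(nums, 0, Integer.MIN_VALUE);
    }

    private static int explore(int[] nums, int i, int prev) {
        if (i == nums.length) return 0;          // no elements left
        int skip = explore(nums, i + 1, prev);   // exclude nums[i]
        int take = 0;
        if (nums[i] > prev) {                    // include nums[i] if still strictly increasing
            take = 1 + explore(nums, i + 1, nums[i]);
        }
        return Math.max(skip, take);
    }
}
```

Each call branches twice, which is exactly where the 2ⁿ blow-up comes from.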


4. Brainstorm More Solutions

Step 1: Find the Right State Definition

When thinking about how to solve a DP problem, we start by focusing on our state. What specific subproblem or configuration would represent a partial solution to the overall problem?

So let's ask ourselves: what do we actually know about an element at some random position i? We know its value. We know its position. And we know all the elements before it. What we don't know is what comes after it - which is why greedy failed.


So let's anchor the state on what's certain: what's the longest increasing subsequence that ends at index i?


We define dp[i] = length of the longest increasing subsequence that ends at nums[i].

Why "ends at index i" specifically? Because fixing the endpoint gives us something concrete to reason about. If we know the best subsequence ending at every index before i, we can compute dp[i] by asking: which of those earlier subsequences can nums[i] extend?


Step 2: Derive the Recurrence

To compute dp[i], we need to look backward at every index j < i. If nums[j] < nums[i], then we know that nums[i] can legally extend the subsequence ending at j - giving us a subsequence of length dp[j] + 1.


Since we're looking for the longest subsequence, we focus on the best such extension:

dp[i] = max(dp[j] + 1) for all j < i where nums[j] < nums[i]

And if no such j exists (no earlier element is smaller), nums[i] starts a new subsequence of length 1. That means we can initialize every dp[i] = 1.


Complexity analysis: there are n states, each requiring O(n) work to compute (scanning all previous indices). That gives an overall time complexity of O(n²), with O(n) space for the dp array.
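
The same recurrence can also be computed top-down with memoization. This is a sketch under my own naming (MemoLIS, lisEndingAt), not the article's bottom-up solution, but it caches exactly the dp[i] value defined above:

```java
public class MemoLIS {

    public static int lengthOfLIS(int[] nums) {
        if (nums == null || nums.length == 0) return 0;
        int[] memo = new int[nums.length];  // memo[i] = LIS length ending at i; 0 means uncomputed
        int best = 0;
        for (int i = 0; i < nums.length; i++) {
            best = Math.max(best, lisEndingAt(nums, i, memo));
        }
        return best;
    }

    private static int lisEndingAt(int[] nums, int i, int[] memo) {
        if (memo[i] != 0) return memo[i];    // already computed (real values are always >= 1)
        int len = 1;                         // nums[i] on its own
        for (int j = 0; j < i; j++) {
            if (nums[j] < nums[i]) {         // nums[i] can extend the subsequence ending at j
                len = Math.max(len, lisEndingAt(nums, j, memo) + 1);
            }
        }
        return memo[i] = len;
    }
}
```

Same O(n²) time and O(n) space; the difference is purely in whether the states are filled iteratively or on demand.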


Step 3: Trace Through an Example

Let's trace [10, 9, 2, 5, 3, 7, 101, 18]:

dp[0] = 1  (10: no earlier elements)
dp[1] = 1  (9: 10 > 9, no valid j)
dp[2] = 1  (2: 10 > 2, 9 > 2, no valid j)
dp[3] = 2  (5: nums[2]=2 < 5 → dp[2]+1=2)
dp[4] = 2  (3: nums[2]=2 < 3 → dp[2]+1=2)
dp[5] = 3  (7: nums[2]=2 < 7 → dp[2]+1=2
               nums[3]=5 < 7 → dp[3]+1=3  ← best
               nums[4]=3 < 7 → dp[4]+1=3)
dp[6] = 4  (101: all previous < 101
               best is dp[5]+1=4)
dp[7] = 4  (18: nums[2]=2, nums[3]=5, nums[4]=3, nums[5]=7 all < 18
               best is dp[5]+1=4)

Final dp array: [1, 1, 1, 2, 2, 3, 4, 4]

Answer: max(dp) = 4. The subsequence is [2, 5, 7, 101] (or [2, 3, 7, 101], or [2, 5, 7, 18] - multiple valid options of length 4 exist).

Notice how dp[2] = 1 (just the element 2 alone) became the foundation for three subsequent elements. The DP correctly found that starting from 2 led to longer subsequences than continuing from 10 or 9.


Step 4: The O(n log n) Optimization

The O(n²) solution is a pretty good answer and will probably get you through most interviews. But can we do better?


The bottleneck in our current solution is the inner loop - for each element, we are scanning everything before it. What if we could make smarter use of the information we've already computed?


Let's take a second to think about what actually determines whether an element can extend an existing subsequence. Just one thing - whether it's larger than that subsequence's tail. We don't care about the whole subsequence. We only care about its last element.


So let's imagine grouping subsequences by their length. Among all subsequences of the same length, which one gives us the best chance of extending further? The one with the smallest tail - because a smaller tail can be extended by more future elements than a larger one.


There's the idea! For each possible subsequence length, what if we just track the smallest tail we've achieved so far? We'll call this array tails, where tails[k] holds the smallest tail element of any increasing subsequence of length k+1 that we've seen.


Now ask: does this array have any structure we can exploit? Take any increasing subsequence ending at tails[k]. Removing its last element leaves an increasing subsequence that is one shorter and whose tail is strictly smaller - so the smallest tail recorded for the shorter length can't exceed it. In other words, tails is always sorted in strictly increasing order. And a strictly sorted array is exactly what binary search requires!


And there's the key insight for optimizing our algorithm: instead of scanning all previous dp values linearly, we binary search tails to find where a new element fits - O(log n) per element instead of O(n).
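
To make the mechanics concrete, here is a small trace harness (the class and method names are mine, not from the article) that runs the tails/binary-search algorithm on the article's example and prints the tails list after each element:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class TailsTrace {

    // Runs the tails algorithm, printing the tails list after each element.
    public static List<Integer> finalTails(int[] nums) {
        List<Integer> tails = new ArrayList<>();
        for (int num : nums) {
            // Find the leftmost position with tails[pos] >= num.
            // tails is strictly increasing, so binarySearch either finds num
            // exactly or returns the encoded insertion point.
            int pos = Collections.binarySearch(tails, num);
            if (pos < 0) pos = -(pos + 1);           // decode insertion point
            if (pos == tails.size()) tails.add(num); // num extends the longest run
            else tails.set(pos, num);                // num becomes a smaller tail
            System.out.println(num + " -> " + tails);
        }
        return tails;
    }

    public static void main(String[] args) {
        finalTails(new int[]{10, 9, 2, 5, 3, 7, 101, 18});
        // 10 -> [10], 9 -> [9], 2 -> [2], 5 -> [2, 5], 3 -> [2, 3],
        // 7 -> [2, 3, 7], 101 -> [2, 3, 7, 101], 18 -> [2, 3, 7, 18]
    }
}
```

One caveat worth remembering: the final tails list is not necessarily a valid subsequence of nums - only its length is guaranteed to equal the LIS length.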


5. Trade-Offs at a Glance

Approach                  | Time       | Space | Notes
--------------------------|------------|-------|----------------------------------------
Brute force               | O(2ⁿ)      | O(n)  | Infeasible; shows subproblem structure
O(n²) DP                  | O(n²)      | O(n)  | Clear, standard, expected baseline
O(n log n) patience sort  | O(n log n) | O(n)  | Fast; subtle to explain correctly



6. Pseudocode

O(n²) DP:

if nums is empty: return 0

dp = array of size n, all initialized to 1

for i from 1 to n-1:
    for j from 0 to i-1:
        if nums[j] < nums[i]:
            dp[i] = max(dp[i], dp[j] + 1)

return max(dp)

O(n log n) tails:

tails = empty list

for each num in nums:
    pos = leftmost index in tails where tails[pos] >= num  (binary search)
    if pos == tails.length:
        append num to tails
    else:
        tails[pos] = num

return tails.length

7. Edge Cases

  • Empty array: Return 0 immediately.

  • Single element: dp[0] = 1. The loops never run. max(dp) = 1. Correct.

  • Strictly decreasing: [5, 4, 3, 2, 1]. No j < i satisfies nums[j] < nums[i]. Every dp[i] stays 1. Answer: 1.

  • All equal: [7, 7, 7, 7]. Strictly increasing means equal elements can't extend each other. Every dp[i] = 1. Answer: 1.

  • Already sorted: [1, 2, 3, 4, 5]. Each element extends the subsequence ending before it. dp = [1, 2, 3, 4, 5]. Answer: 5.


8. Full Solution

O(n²) DP - The Expected Interview Baseline

import java.util.Arrays;

public class LongestIncreasingSubsequence {

    public static int lengthOfLIS(int[] nums) {
        if (nums == null || nums.length == 0) return 0;

        int n = nums.length;
        int[] dp = new int[n];
        Arrays.fill(dp, 1);  // every element is a subsequence of length 1

        for (int i = 1; i < n; i++) {
            for (int j = 0; j < i; j++) {
                if (nums[j] < nums[i]) {
                    dp[i] = Math.max(dp[i], dp[j] + 1);
                }
            }
        }

        int max = 0;
        for (int len : dp) max = Math.max(max, len);
        return max;
    }
}

O(n log n) - Tails Array with Binary Search

public class LongestIncreasingSubsequence {

    public static int lengthOfLIS(int[] nums) {
        if (nums == null || nums.length == 0) return 0;

        int[] tails = new int[nums.length];
        int size = 0;  // current length of tails

        for (int num : nums) {
            // Binary search for the leftmost tail >= num
            int lo = 0, hi = size;
            while (lo < hi) {
                int mid = lo + (hi - lo) / 2;
                if (tails[mid] < num) lo = mid + 1;
                else hi = mid;
            }

            tails[lo] = num;   // replace or extend

            if (lo == size) size++;  // num extends the longest subsequence so far
        }

        return size;
    }
}

9. Test Your Solution

System.out.println(lengthOfLIS(new int[]{10,9,2,5,3,7,101,18}));//4
System.out.println(lengthOfLIS(new int[]{0,1,0,3,2,3})); // 4
System.out.println(lengthOfLIS(new int[]{7,7,7,7}));     // 1
System.out.println(lengthOfLIS(new int[]{1}));           // 1
System.out.println(lengthOfLIS(new int[]{}));            // 0
System.out.println(lengthOfLIS(new int[]{5,4,3,2,1}));   // 1
System.out.println(lengthOfLIS(new int[]{1,2,3,4,5}));   // 5

The strictly decreasing and already-sorted cases are worth tracing manually. They confirm that your base case initialization (dp[i] = 1) and your final max scan are both working correctly.


10. Key Lessons for Future Problems

  • Greedy fails when early commitments block better paths. The signal is: you can make a locally optimal choice that looks good now but closes off something better later. When you notice that, slow down and think about DP.

  • "Ending at index i" is a powerful state definition. Many sequence DP problems - LIS, Longest Common Subsequence, jump game variants - become tractable when you anchor the state to a specific endpoint rather than a range or a set. Fixing the endpoint gives you something concrete to compute over.

  • Define what you know, not what you hope to know. The reason dp[i] = "LIS ending at index i" works is that we have complete information about all indices before i when we compute it. A state like "LIS starting at index i" would require knowing what comes after - which we don't when we compute it bottom-up.

  • Know the O(n log n) optimization exists, even if you implement O(n²). Being able to say "there's a faster solution using binary search on a tails array, which I can explain if needed" signals depth. You may not need to implement it under pressure, depending on your interviewer - but understanding why it works (smaller tails preserve more future options) demonstrates genuine understanding of the problem.

  • Return max(dp), not dp[n-1]. The longest subsequence doesn't have to end at the last element. On [1, 2, 3, 0], for example, dp = [1, 2, 3, 1]: dp[n-1] is 1, but the answer is max(dp) = 3. Forgetting this is a common off-by-one mistake that passes most tests but fails on inputs where the optimal subsequence ends early.


Longest Increasing Subsequence teaches patience - both in the algorithm (build on what came before, don't rush ahead) and in the problem-solving process (slow down, define state carefully, verify the recurrence before coding).


Good Luck and Happy Coding!
