Dynamic Programming Isn't Hard - You're Just Learning It Wrong
The Confession
For two years, every time I saw "Dynamic Programming" in a problem tag, I clicked away. I told myself I'd come back to it later. Later never came. I built an entire LeetCode practice routine around avoiding DP. I was decent at arrays, comfortable with trees, could handle most graph problems - but the moment a problem required DP, I froze.
Then I got an Amazon interview. The recruiter told me the second round would "focus on optimization problems." I knew what that meant. DP was coming, and I had eight weeks to learn something I'd been running from for two years.
Here's the thing nobody told me: Dynamic Programming isn't a single technique. It's a way of thinking that builds on skills you probably already have. If you can write a recursive function, you're 60% of the way to DP. The remaining 40% is learning where to add a cache and how to flip recursion upside down.
This guide follows the exact path I took over those eight weeks. No skipping ahead. No assuming you already understand subproblems. Each section builds on the last, like chapters in a book where you can't skip to the end.
Step 0: Recursion - If You Can't Recurse, You Can't DP
Every single DP solution starts as a recursive solution. If you can't write the recursive version first, you have zero chance of writing the DP version. This is the step most tutorials skip, and it's the reason most people think DP is hard.
The Fibonacci Starting Point
I know, Fibonacci is overused. But it's overused because it perfectly illustrates the progression from recursion to memoization to tabulation. Bear with me.
The problem: compute the nth Fibonacci number, where fib(0) = 0, fib(1) = 1, and fib(n) = fib(n-1) + fib(n-2).
The pure recursive solution:
function fib(n):
    if n <= 1:
        return n
    return fib(n-1) + fib(n-2)
Simple. Elegant. And catastrophically slow.
Understanding Why Pure Recursion Explodes
Call fib(5). Here's the call tree:
                            fib(5)
                           /      \
                     fib(4)        fib(3)
                    /      \       /     \
               fib(3)   fib(2)  fib(2)  fib(1)
              /      \   /   \   /   \
         fib(2) fib(1) fib(1) fib(0) fib(1) fib(0)
         /    \
    fib(1)  fib(0)
Count the nodes: 15 function calls to compute fib(5). Look at all the redundancy - fib(3) is computed twice, fib(2) is computed three times, fib(1) is computed five times.
For fib(n), the number of calls grows exponentially - proportional to roughly 1.6^n. Computing fib(50) this way takes over 40 billion function calls, and fib(100) would take tens of thousands of years on modern hardware.
The key observation: we're solving the same subproblems over and over. This is the exact condition that makes DP applicable. Not every problem has overlapping subproblems. But when it does, DP transforms exponential time into polynomial time.
The Two Properties That Make DP Possible
Before throwing DP at a problem, verify two properties:
- Optimal Substructure: The optimal solution to the problem can be constructed from optimal solutions to its subproblems. For Fibonacci, fib(n) is literally defined as the sum of two smaller Fibonacci numbers.
- Overlapping Subproblems: The same subproblems are solved multiple times. In the call tree above, fib(3) is computed from scratch twice when we only need to compute it once.
If a problem has both properties, DP applies. If it has optimal substructure but not overlapping subproblems (like merge sort), you use divide-and-conquer instead. The distinction matters in interviews.
Memoization: Adding a Cache to Your Recursion
Memoization is the simplest form of DP, and it's where I recommend starting. The idea is embarrassingly straightforward: if you've computed the answer to a subproblem before, don't compute it again. Store it in a cache and look it up.
Fibonacci with Memoization
memo = {}

function fib(n):
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    result = fib(n-1) + fib(n-2)
    memo[n] = result
    return result
That's it. A few lines added. The recursive structure is identical; we just added a dictionary that stores results we've already computed.
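The article's pseudocode is language-neutral; as one possible translation, here's a Python sketch where functools.lru_cache handles the dictionary bookkeeping for you:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # the cache plays the role of the memo dict
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025 - instant, instead of billions of calls
```

Writing the dictionary by hand (as in the pseudocode) and using a decorator are equivalent; the decorator just keeps the recursive function untouched.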
What Changes in the Call Tree
With memoization, the call tree for fib(5) becomes:
fib(5) -> needs fib(4) and fib(3)
  fib(4) -> needs fib(3) and fib(2)
    fib(3) -> needs fib(2) and fib(1)
      fib(2) -> needs fib(1) and fib(0)
        fib(1) -> returns 1 (base case)
        fib(0) -> returns 0 (base case)
      fib(2) = 1, stored in memo
      fib(1) -> returns 1 (base case)
    fib(3) = 2, stored in memo
    fib(2) -> found in memo! Returns 1 immediately.
  fib(4) = 3, stored in memo
  fib(3) -> found in memo! Returns 2 immediately.
fib(5) = 5
Instead of 15 calls, we make 9, and more importantly, no subproblem is fully computed more than once. Each of the n+1 subproblems (fib(0) through fib(n)) is computed exactly once and then looked up in O(1) time for any subsequent calls.
Time complexity: O(n) - each subproblem computed once. Space complexity: O(n) - the memo dictionary stores n entries, plus O(n) for the recursion call stack.
From exponential to linear. That's the power of memoization.
The Memoization Recipe
This works for any recursive solution with overlapping subproblems:
- Write the pure recursive solution first. Make sure it's correct.
- Add a cache (dictionary/hash map) as a parameter or global variable.
- At the start of the function, check if the current input is in the cache. If yes, return the cached value.
- Before returning a computed result, store it in the cache.
That's the entire technique. Four steps. Apply them mechanically and your exponential recursive solution becomes polynomial.
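To show the recipe applied end to end, here's a Python sketch on a hypothetical Fibonacci variant - counting ways to climb n stairs taking 1, 2, or 3 steps at a time. The problem and the function name are my own illustration, not from the guide:

```python
def ways(n, memo=None):
    if memo is None:
        memo = {}          # step 2: the cache travels with the recursion
    if n in memo:          # step 3: check the cache before any work
        return memo[n]
    if n < 0:
        return 0           # overshot the top: not a valid path
    if n == 0:
        return 1           # one way to stand at the ground: take no steps
    # step 1's recurrence: last move was a 1-, 2-, or 3-step jump
    result = ways(n - 1, memo) + ways(n - 2, memo) + ways(n - 3, memo)
    memo[n] = result       # step 4: store before returning
    return result

print(ways(10))  # 274
```

The same mechanical four steps work regardless of how many arguments the function takes; with multiple arguments, the cache key is the tuple of arguments.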
Tabulation: Flipping It Bottom-Up
Memoization is top-down: you start with the big problem and recursively break it into smaller pieces, caching along the way. Tabulation is bottom-up: you start with the smallest subproblems and build up to the answer.
Fibonacci with Tabulation
function fib(n):
    if n <= 1:
        return n
    table = array of size n+1
    table[0] = 0
    table[1] = 1
    for i from 2 to n:
        table[i] = table[i-1] + table[i-2]
    return table[n]
No recursion. No call stack. Just a loop filling in a table from left to right. Each entry depends only on the two previous entries, which are already computed by the time we need them.
Time complexity: O(n). Space complexity: O(n) for the table - but we can optimize to O(1), since we only ever need the last two values:
function fib(n):
    if n <= 1:
        return n
    prev2 = 0
    prev1 = 1
    for i from 2 to n:
        current = prev1 + prev2
        prev2 = prev1
        prev1 = current
    return prev1
O(n) time, O(1) space. This space optimization is a common follow-up in interviews and it's only possible with the bottom-up approach (you can't easily eliminate the call stack in the recursive version).
When to Choose Memoization vs Tabulation
Memoization (top-down) is better when:
- The recursive structure is natural and easy to write
- You don't need all subproblems (some branches of the recursion tree might never be visited)
- You want a quick transformation from a working recursive solution
Tabulation (bottom-up) is better when:
- You need to optimize space (rolling array / keeping only the last few rows)
- All subproblems will be computed anyway
- You want to avoid recursion stack overflow for very large inputs
- The interviewer asks for an iterative solution
In interviews, I recommend writing memoization first (it's harder to get wrong) and then converting to tabulation if the interviewer asks for optimization.
The 4 DP Archetypes
Here's the framework that made DP click for me. Almost every DP problem in interviews falls into one of four archetypes. Learn to recognize which archetype a problem belongs to, and you'll know the general shape of the solution before writing a single line of code.
Archetype 1: Fibonacci-Type (Linear Sequence)
Pattern: The answer for position i depends on a fixed number of previous positions (usually i-1 and i-2, sometimes i-3).
Recurrence shape: dp[i] = f(dp[i-1], dp[i-2], ...)
Classic problems:
- Climbing Stairs (LeetCode 70): You can climb 1 or 2 steps at a time. How many distinct ways to reach step n? The answer is dp[n] = dp[n-1] + dp[n-2] - literally Fibonacci. You can reach step n either from step n-1 (one step) or from step n-2 (two steps).
- House Robber (LeetCode 198): Rob houses in a line, but you can't rob two adjacent houses. For each house, you either rob it (and add its value to the best result from two houses back) or skip it (and take the best result from one house back). Recurrence: dp[i] = max(dp[i-1], dp[i-2] + nums[i]).
- Decode Ways (LeetCode 91): Count the number of ways to decode a digit string into letters. Each position can either be decoded as a single digit (if valid) or as a two-digit number with the previous digit (if valid). This gives a Fibonacci-like recurrence with validity conditions.
How to identify: The problem involves a linear sequence (array, string, staircase) and the decision at each step depends on a small, fixed number of previous steps.
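To make the "skip or take" recurrence concrete, here's a Python sketch of House Robber with the table already collapsed to two variables (the function name and sample input are my own choices):

```python
def rob(nums):
    # prev1 = best loot considering houses up to the previous one,
    # prev2 = best loot up to two houses back.
    prev2, prev1 = 0, 0
    for value in nums:
        # Either skip this house (keep prev1) or rob it (prev2 + value).
        prev2, prev1 = prev1, max(prev1, prev2 + value)
    return prev1

print(rob([2, 7, 9, 3, 1]))  # 12 (rob 2, 9, and 1)
```

This is the dp[i] = max(dp[i-1], dp[i-2] + nums[i]) recurrence with the O(1) space trick from the Fibonacci section already applied.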
Archetype 2: Knapsack-Type (Subset Selection)
Pattern: Given a set of items with weights and values, select a subset that maximizes value without exceeding a capacity constraint. The state requires tracking both the item index and the remaining capacity.
Recurrence shape: dp[i][w] = max(dp[i-1][w], dp[i-1][w - weight[i]] + value[i])
This says: for item i with remaining capacity w, either skip the item (take the best value from the first i-1 items with the same capacity) or include it (take its value plus the best value from the first i-1 items with reduced capacity).
Classic problems:
- 0/1 Knapsack: The pure form. Each item can be taken or left. This is the foundation.
- Partition Equal Subset Sum (LeetCode 416): Can you partition an array into two subsets with equal sum? This is secretly a knapsack problem - you're asking if there's a subset that sums to exactly half the total. The "capacity" is totalSum / 2, and each number is an "item" with weight equal to its value.
- Coin Change (LeetCode 322): Find the minimum number of coins to make an amount. This is an unbounded knapsack variant - you can use each coin denomination unlimited times. Recurrence: dp[amount] = min(dp[amount - coin] + 1) for each coin denomination.
- Target Sum (LeetCode 494): Assign + or - to each number to reach a target sum. This transforms into a subset sum problem: find a subset with sum equal to (totalSum + target) / 2.
How to identify: The problem involves choosing from a set of items/options with some constraint on what combinations are valid. Keywords: "subset," "partition," "capacity," "budget," "knapsack" (obviously), or any variant of "select items to maximize/minimize something under a constraint."
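The knapsack recurrence translates almost directly into code. A Python sketch of the pure 0/1 form, with the full 2D table (identifiers and the sample instance are mine):

```python
def knapsack(weights, values, capacity):
    # dp[i][w] = best value achievable using the first i items
    # with remaining capacity w.
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]          # choice 1: skip item i
            if weights[i - 1] <= w:          # choice 2: include it, if it fits
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][capacity]

print(knapsack([1, 3, 4, 5], [1, 4, 5, 7], 7))  # 9 (take weights 3 and 4)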
Archetype 3: LCS-Type (Two-Sequence Alignment)
Pattern: Given two sequences, find the optimal way to align, match, or transform one into the other. The state tracks positions in both sequences simultaneously.
Recurrence shape: dp[i][j] = f(dp[i-1][j], dp[i][j-1], dp[i-1][j-1])
This represents three choices at each step: advance in the first sequence only, advance in the second sequence only, or advance in both.
Classic problems:
- Longest Common Subsequence (LeetCode 1143): Find the length of the longest subsequence present in both strings. If characters match, dp[i][j] = dp[i-1][j-1] + 1. If they don't, dp[i][j] = max(dp[i-1][j], dp[i][j-1]).

  Walkthrough with "ABCDE" and "ACE":

         ""  A  C  E
     ""   0  0  0  0
     A    0  1  1  1
     B    0  1  1  1
     C    0  1  2  2
     D    0  1  2  2
     E    0  1  2  3

  The LCS is "ACE" with length 3.

- Edit Distance (LeetCode 72): Find the minimum number of insertions, deletions, and substitutions to transform one string into another. Three operations map to three transitions: insert (advance in second string), delete (advance in first string), replace (advance in both). If characters match, no operation needed.

- Longest Common Substring: Similar to LCS but requires contiguous matching. The recurrence resets to 0 when characters don't match instead of taking the max of neighbors.
How to identify: The problem involves two sequences (strings, arrays) and asks about their relationship - matching, alignment, transformation distance, or common structure.
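The LCS recurrence fits in a few lines of Python. A sketch (one of several equivalent ways to write it):

```python
def lcs_length(a, b):
    # dp[i][j] = LCS length of a[:i] and b[:j]; row 0 and column 0
    # stay at 0 (LCS with an empty string is empty).
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1              # characters match
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])   # skip one side
    return dp[m][n]

print(lcs_length("ABCDE", "ACE"))  # 3
```

Running it on "ABCDE" and "ACE" reproduces the bottom-right cell of the walkthrough table above.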
Archetype 4: Grid-Type (2D Path)
Pattern: Navigate a grid from one corner to another, counting paths, minimizing cost, or finding optimal routes. The state is the grid position.
Recurrence shape: dp[i][j] = f(dp[i-1][j], dp[i][j-1]) - you can arrive at cell (i,j) from above or from the left.
Classic problems:
- Unique Paths (LeetCode 62): Count the number of paths from top-left to bottom-right in a grid, moving only right or down. dp[i][j] = dp[i-1][j] + dp[i][j-1]. The first row and first column are all 1s (only one way to reach any cell in the first row or column).
- Minimum Path Sum (LeetCode 64): Find the path with the minimum sum. dp[i][j] = grid[i][j] + min(dp[i-1][j], dp[i][j-1]).
- Unique Paths II (LeetCode 63): Same as Unique Paths but with obstacles. If a cell is an obstacle, dp[i][j] = 0. Otherwise, same recurrence.
- Maximal Square (LeetCode 221): Find the largest square of 1s in a binary matrix. dp[i][j] = min(dp[i-1][j], dp[i][j-1], dp[i-1][j-1]) + 1 if the cell is 1. This is a non-obvious grid DP that stumps many candidates.
How to identify: The problem explicitly involves a grid, matrix, or 2D structure. Or it can be modeled as one - for example, edit distance is secretly a grid DP where one string runs along the rows and the other along the columns.
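As a sketch of the grid recurrence, here's Unique Paths in Python, already using a rolling row since each cell only needs the row above and the cell to its left (the function name is my own):

```python
def unique_paths(m, n):
    # row[j] holds dp values for the current grid row; before the update,
    # row[j] is the value from the row above, and row[j-1] is the cell
    # to the left in the current row.
    row = [1] * n                  # first row: one way to reach each cell
    for _ in range(1, m):
        for j in range(1, n):
            row[j] += row[j - 1]   # dp[i][j] = dp[i-1][j] + dp[i][j-1]
    return row[-1]

print(unique_paths(3, 7))  # 28
```

Swapping the `+=` for a `min` plus the cell's cost turns this same skeleton into Minimum Path Sum.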
From Zero to Amazon Offer: The 8-Week DP Drill
Here's the exact study plan I followed. I solved two to three problems per week, spending time on understanding rather than speed. The goal isn't to solve 50 DP problems - it's to deeply understand the four archetypes so you can recognize them instantly.
Weeks 1-2: Fibonacci Archetype
Week 1:
- Climbing Stairs (LeetCode 70) - Write it recursively first. Add memoization. Convert to tabulation. Optimize to O(1) space. This single problem teaches the entire memoization-to-tabulation pipeline.
- House Robber (LeetCode 198) - Same process. The "skip or take" decision pattern appears in dozens of DP problems.
Week 2:
- House Robber II (LeetCode 213) - Circular variant. Run House Robber twice: once excluding the first house, once excluding the last house. Take the max. Teaches you how to handle circular dependencies.
- Decode Ways (LeetCode 91) - Fibonacci-like but with validity conditions. Teaches you that not all transitions are always valid.
Weeks 3-4: Knapsack Archetype
Week 3:
- Implement the classic 0/1 Knapsack from scratch. Not on LeetCode, but essential. Write the 2D table, understand why each cell depends on the cell above and the cell diagonally above-left. Then optimize to a 1D array (iterate capacity in reverse to avoid using updated values).
- Partition Equal Subset Sum (LeetCode 416) - Your first "disguised knapsack" problem. The insight that this is a subset sum problem is the hardest part.
Week 4:
- Coin Change (LeetCode 322) - Unbounded knapsack variant. Each coin can be used multiple times, which changes the iteration direction compared to 0/1 knapsack.
- Target Sum (LeetCode 494) - Requires a mathematical transformation to convert the problem into subset sum. Teaches you that the hardest part of DP is often defining the state, not writing the recurrence.
Weeks 5-6: LCS Archetype
Week 5:
- Longest Common Subsequence (LeetCode 1143) - Build the full 2D table by hand for small examples. Trace back through the table to reconstruct the actual subsequence, not just its length.
- Edit Distance (LeetCode 72) - The "flagship" DP problem. If you can solve this cold, you can handle most DP interview questions. Practice explaining each of the three operations and how they map to table transitions.
Week 6:
- Longest Palindromic Subsequence (LeetCode 516) - This is LCS of a string with its reverse. Recognizing this transformation is a key skill.
- Longest Increasing Subsequence (LeetCode 300) - Technically a single-sequence DP, but the O(n^2) solution uses the LCS-like comparison pattern. The O(n log n) patience sorting solution is a bonus if you have time.
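For reference while practicing week 6, the O(n^2) LIS solution can be sketched like this in Python (a sketch; the O(n log n) version is structured quite differently):

```python
def lis_length(nums):
    # dp[i] = length of the longest increasing subsequence ending at i.
    if not nums:
        return 0
    dp = [1] * len(nums)
    for i in range(1, len(nums)):
        for j in range(i):
            if nums[j] < nums[i]:            # nums[i] can extend the
                dp[i] = max(dp[i], dp[j] + 1)  # subsequence ending at j
    return max(dp)

print(lis_length([10, 9, 2, 5, 3, 7, 101, 18]))  # 4 (e.g. 2, 3, 7, 101)
```

Note the answer is max over all dp[i], not dp[-1]: the best subsequence need not end at the last element.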
Weeks 7-8: Grid Archetype and Mixed Practice
Week 7:
- Unique Paths (LeetCode 62) - Straightforward grid DP. Focus on understanding the base cases (first row, first column) and how the recurrence propagates.
- Minimum Path Sum (LeetCode 64) - Same grid structure but with optimization instead of counting.
- Maximal Square (LeetCode 221) - The recurrence dp[i][j] = min(top, left, diagonal) + 1 is non-obvious. Work through several examples to build intuition for why it works.
Week 8:
- Longest Palindromic Substring (LeetCode 5) - Can be solved with DP (expand around center is technically two pointers, but the DP approach fills a 2D boolean table).
- Word Break (LeetCode 139) - A 1D DP where for each position you check all possible word endings. Combines string processing with DP.
- One problem from each of the previous archetypes, chosen randomly. Time yourself. This simulates interview conditions.
Why This Plan Works
Notice the structure: two weeks per archetype, two or three problems per week. Most pairs contain one "pure" problem (directly matches the archetype) and one "disguised" problem (requires recognizing the archetype underneath a different surface).
By week 8, you won't have solved 100 DP problems. You'll have solved about 20. But you'll understand them deeply enough to recognize the archetype in a problem you've never seen before. That recognition is worth more than having memorized 100 solutions you don't understand.
Common DP Traps Interviewers Set
After eight weeks of study and dozens of mock interviews, I started noticing patterns in how interviewers test DP knowledge. Here are the traps I've seen and how to avoid them.
Trap 1: Wrong State Definition
The most common mistake - and the hardest to debug. If your state doesn't capture enough information, your recurrence will be wrong. If it captures too much, your solution will be too slow.
Example: In the Coin Change problem, a common mistake is defining the state as the number of coins used so far. That state doesn't pin down the subproblem: two combinations using the same number of coins can leave different remaining amounts. The state you need is the remaining amount itself - dp[amount] is the minimum number of coins that make that amount.
How to avoid it: Before writing the recurrence, write out in English: "dp[i] represents the answer to the subproblem where ___." Fill in that blank precisely. If you can't, your state definition is unclear.
Trap 2: Missing or Wrong Base Cases
Your recurrence might be perfect, but if the base cases are wrong, the entire table is wrong. Base cases are the "ground truth" that everything else builds on.
Example: In Climbing Stairs, what's dp[0]? Many people say 0, but it should be 1 - there's exactly one way to stand at the ground (do nothing). Get this wrong and every subsequent value in the table is wrong too.
How to avoid it: Always verify your base cases by computing the first few values by hand and checking against the expected answers. For dp[0], dp[1], and dp[2], you should be able to verify correctness without the recurrence - just by reasoning about the problem directly.
Trap 3: Not Recognizing Space Optimization Opportunities
Interviewers love to ask: "Can you optimize the space?" For many DP problems, the answer is yes.
The pattern: If dp[i] only depends on dp[i-1] (and possibly dp[i-2]), you don't need the entire table. You only need the last one or two rows/values.
- Fibonacci-type: O(n) table reduces to O(1) with two variables.
- Grid-type: O(m*n) table reduces to O(n) with a single row (process row by row, updating in place).
- Knapsack-type: O(n*W) table reduces to O(W) with a single row (iterate capacity in reverse for 0/1, forward for unbounded).
- LCS-type: O(m*n) table reduces to O(min(m,n)) with two rows, though reconstructing the actual subsequence then requires additional work.
How to avoid the trap: Always mention space optimization proactively, even if the interviewer doesn't ask. Say something like: "I notice that each row only depends on the previous row, so I can reduce space from O(mn) to O(n) by using a rolling array."
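As a concrete instance of the knapsack-type reduction, here's a Python sketch of 0/1 knapsack in O(W) space, with the capacity loop running in reverse (names are my own; flipping the inner loop to forward order would give the unbounded variant):

```python
def knapsack_1d(weights, values, capacity):
    # dp[w] = best value achievable with capacity w, items seen so far.
    dp = [0] * (capacity + 1)
    for wt, val in zip(weights, values):
        # Reverse order: dp[w - wt] still holds the PREVIOUS item-row's
        # value, so each item is counted at most once.
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[capacity]

print(knapsack_1d([1, 3, 4, 5], [1, 4, 5, 7], 7))  # 9
```

Iterating forward instead would let an item be re-used within its own row, which is exactly the unbounded-knapsack behavior Coin Change wants.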
Trap 4: The "Build the Recurrence in the Interview" Pressure Test
Some interviewers don't care if you've memorized the recurrence for Knapsack. They'll give you a novel problem and watch you derive the recurrence from scratch. This is the real DP skill.
The method I use:
- Define the subproblem in words. "dp[i][j] represents ___."
- Identify the choices at each step. "At step i, I can either ___ or ___."
- Write the recurrence from the choices. Each choice leads to a smaller subproblem. Combine them with max/min/sum depending on what the problem asks.
- Identify base cases. What's the smallest subproblem I can solve directly?
- Determine the iteration order. Which subproblems need to be solved first?
I've used this five-step method in every DP interview since, and it's never failed me. Even if I don't get the optimal solution immediately, walking the interviewer through my reasoning shows them I have a systematic approach, which matters more than getting the answer instantly.
Trap 5: Confusing Memoization and Tabulation Complexity
Some candidates say "my DP solution is O(n)" without realizing that their 2D memoization table is actually O(n*m). Count your state variables. If you have two state variables each ranging up to n, your time complexity is O(n^2), not O(n). The number of unique subproblems equals the product of the ranges of all state variables.
My Amazon Interview
Eight weeks after I started studying DP, I walked into my Amazon interview. The second round opened with this problem:
"Given a string and a dictionary of words, determine if the string can be segmented into a space-separated sequence of dictionary words."
Word Break. I recognized it in about ten seconds.
I talked through my thought process: "This is a 1D DP problem. I'll define dp[i] as true if the substring from position 0 to position i can be segmented into dictionary words. For each position i, I'll check all possible last words - if there exists some j where dp[j] is true and the substring from j to i is in the dictionary, then dp[i] is true. The base case is dp[0] = true - the empty string can trivially be segmented."
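The plan I described maps to a short Python sketch (a reconstruction of the approach, not my actual interview code):

```python
def word_break(s, word_dict):
    # dp[i] is True if s[:i] can be segmented into dictionary words.
    words = set(word_dict)          # O(1) membership checks
    dp = [False] * (len(s) + 1)
    dp[0] = True                    # empty prefix is trivially segmentable
    for i in range(1, len(s) + 1):
        for j in range(i):
            # Last word in the segmentation would be s[j:i].
            if dp[j] and s[j:i] in words:
                dp[i] = True
                break
    return dp[len(s)]

print(word_break("leetcode", ["leet", "code"]))  # True
```
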
I coded it in eight minutes. Clean, correct, no bugs. The interviewer asked about time complexity: O(n^2 * k) where n is the string length and k is the average word length for substring comparison. She asked if I could optimize - I mentioned using a trie for the dictionary to reduce the substring comparisons.
Then the follow-up: "Now return all possible segmentations." I knew this was the backtracking variant - Word Break II. I explained that I'd use the same DP table to first verify a solution exists (pruning), then use recursion with the DP table to enumerate all valid segmentations. I coded it in another fifteen minutes.
Two weeks later, I got the offer.
The eight weeks I spent on DP weren't about memorizing solutions. They were about building a mental framework - the four archetypes, the five-step method for deriving recurrences, the instinct for recognizing which archetype a new problem belongs to. When I saw "string segmentation with a dictionary," my brain immediately mapped it to the Fibonacci archetype (1D DP, each position depends on previous positions) with a knapsack twist (choosing from a set of items/words).
That mapping is the skill. Not genius. Not mathematical brilliance. Pattern recognition, built through deliberate practice over eight focused weeks.
DP isn't hard. You were just learning it wrong. Start with recursion. Add a cache. Build the table. Learn the four archetypes. And stop skipping DP problems.
FAQ
What's the difference between memoization and tabulation?
Memoization is top-down: you write a recursive function and cache results so repeated calls return instantly. Tabulation is bottom-up: you fill a table iteratively from the smallest subproblems up to the final answer, with no recursion. Both achieve the same time complexity. Memoization is easier to write but uses call stack space. Tabulation avoids recursion and often allows space optimization since you can discard table entries you no longer need.
How do I identify a DP problem in an interview?
Look for two signals. First, the problem asks for an optimal value (minimum cost, maximum profit, longest subsequence, number of ways) rather than the actual optimal solution itself. Second, the problem has overlapping subproblems - solving it recursively would compute the same sub-answers multiple times. Common keywords that suggest DP: "minimum number of," "maximum profit," "count the number of ways," "is it possible to," "longest/shortest that satisfies." Also watch for constraint sizes: if n is up to 1000 and the expected complexity seems to be O(n^2), that's likely a 2D DP. If n is up to 10,000 and the expected complexity is O(n), that's likely 1D DP. These constraint hints are surprisingly reliable for LeetCode and interview problems.
Is dynamic programming just recursion with caching?
Memoization is exactly recursion with caching. But DP also includes tabulation (bottom-up iteration), which uses no recursion at all. A more complete answer for interviews: "DP solves problems with optimal substructure and overlapping subproblems, implemented either top-down with memoization or bottom-up with tabulation." This distinction matters because tabulation is often easier to space-optimize.
What are the most common DP problems asked at FAANG?
The most frequently asked DP problems are: Coin Change (Amazon, Google), Longest Increasing Subsequence (Google, Microsoft), Word Break (Amazon, Meta), Edit Distance (Google), House Robber (Amazon), Unique Paths (Microsoft, Amazon), Longest Common Subsequence (Google), Partition Equal Subset Sum (Meta), Climbing Stairs (warm-up at many companies), and Maximal Square (Apple, Google). These map directly to the four archetypes: Climbing Stairs and House Robber are Fibonacci-type; Coin Change and Partition Equal Subset Sum are Knapsack-type; Edit Distance and LCS are LCS-type; Unique Paths and Maximal Square are Grid-type. Master these ten and you'll have a solid foundation for any DP interview question.
Ready to practice DP problems with an AI mentor? Explore coding interview patterns on Levelop or learn the binary search on answer pattern - another technique interviewers love to test. If you're working through a structured FAANG prep timeline, the 12-week FAANG interview prep plan shows exactly how to fit DP study into your overall schedule.