
The 5 Patterns That Solve 80% of LeetCode Problems (No, Seriously)

The Grind Trap: Why Solving 500 Problems Won't Help

I want to tell you about the worst six months of my career preparation.

It was mid-2024, and I had decided to get serious about interview prep. I opened LeetCode, sorted problems by acceptance rate, and started grinding. Problem after problem, day after day. I kept a spreadsheet. I tracked my numbers like a runner tracks miles. By month four, I had solved 300 problems. By month six, I had crossed 400.

Then I walked into my first Google phone screen, saw a problem I had never encountered before, and froze. Completely. I stared at the screen for 40 minutes, tried three different approaches, and none of them worked. I did not get a callback.

Here is the thing that hurt the most: the problem was a straightforward sliding window question. I had solved at least 20 sliding window problems on LeetCode. But I had solved them by reading hints, looking at discussions, and pattern-matching to specific problems I had seen before. I had never actually internalized the pattern itself. I could solve "Minimum Window Substring" because I had memorized the solution to "Minimum Window Substring." I could not solve a novel problem that used the same underlying technique.

This is what I call the Grind Trap. You confuse volume with understanding. You feel productive because the number in your spreadsheet goes up. But you are building a library of memorized solutions rather than developing the pattern recognition skills that actually matter in interviews.

The research on skill acquisition backs this up. Cognitive scientists distinguish between recognition (seeing something familiar and recalling the answer) and transfer (applying a principle to a new, unfamiliar situation). Grinding LeetCode without deliberate pattern extraction builds recognition. Interviews test transfer.

After that failed Google screen, I changed my approach completely. Instead of solving problems, I studied patterns. I grouped problems not by difficulty or topic tag but by the underlying algorithmic technique. I focused on understanding why a pattern works and when to apply it, not just how to code it.

Within two months, I was solving problems I had never seen before in 15-20 minutes. Here are the five patterns that made the difference.

Pattern 1: Two Pointers - The Swiss Army Knife

The problem that made this click for me was Container With Most Water (LeetCode 11). I had seen solutions before, but I had never understood why the two-pointer approach was correct - just that it worked.

Two pointers is the simplest pattern on this list, but it is also the most versatile. The core idea: instead of using nested loops (O(n^2)) to examine all pairs of elements, you use two pointers that move through the data in a coordinated way, reducing the search space with each step.

There are three main flavors:

Converging Pointers (Opposite Ends)

Place one pointer at the start and one at the end. Move them toward each other based on some condition. This works when the data is sorted (or has some monotonic property) and you are searching for a pair that satisfies a condition.

Classic problem: Two Sum II (sorted array). Given a sorted array and a target sum, find two numbers that add up to the target.

def two_sum_sorted(nums, target):
    left, right = 0, len(nums) - 1
    while left < right:
        current_sum = nums[left] + nums[right]
        if current_sum == target:
            return [left, right]
        elif current_sum < target:
            left += 1      # need a bigger sum, move left pointer right
        else:
            right -= 1     # need a smaller sum, move right pointer left
    return []

Why does this work? Because the array is sorted, moving the left pointer right increases the sum, and moving the right pointer left decreases it. At each step, you are eliminating an entire row or column of the implicit pair matrix. You never need to revisit a pair.

Container With Most Water uses the same structure. Two pointers start at opposite ends. The area is min(height[left], height[right]) * (right - left). You always move the pointer pointing to the shorter line inward, because the only way to potentially find a larger area is to find a taller line (moving the taller pointer inward would definitely decrease width without any guarantee of increasing height).
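That greedy argument translates directly into code. Here is a minimal sketch of the LeetCode 11 solution (function name is my own):

```python
def max_area(height):
    left, right = 0, len(height) - 1
    best = 0
    while left < right:
        # Area is capped by the shorter of the two lines
        area = min(height[left], height[right]) * (right - left)
        best = max(best, area)
        # Only moving the shorter line inward can possibly improve the area
        if height[left] < height[right]:
            left += 1
        else:
            right -= 1
    return best
```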

Fast and Slow Pointers

Two pointers starting at the same position but moving at different speeds. This is the go-to technique for cycle detection in linked lists, finding the middle of a linked list, and certain array problems.

Classic problem: Linked List Cycle (LeetCode 141). Determine if a linked list has a cycle.

def has_cycle(head):
    slow, fast = head, head
    while fast and fast.next:
        slow = slow.next          # moves 1 step
        fast = fast.next.next     # moves 2 steps
        if slow == fast:
            return True           # they met - cycle exists
    return False                  # fast reached the end - no cycle

The math behind this is elegant. If there is a cycle of length C, and the slow pointer enters the cycle after traveling D nodes, the fast pointer is already somewhere in the cycle (since it moves twice as fast). Once both are in the cycle, the fast pointer closes the gap by 1 node per iteration (it moves 2, slow moves 1, net gain is 1). So they are guaranteed to meet within C iterations after slow enters the cycle.

Finding the cycle start (LeetCode 142) extends this: after fast and slow meet, reset one pointer to the head and move both at speed 1. They will meet at the cycle's start. This works because of the mathematical relationship: the distance from the head to the cycle start equals the distance from the meeting point to the cycle start (traveling around the cycle).
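In code, the two phases of LeetCode 142 look like this. A sketch, assuming the usual singly linked node type with a `next` attribute:

```python
def detect_cycle(head):
    slow = fast = head
    # Phase 1: find a meeting point inside the cycle, if one exists
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow == fast:
            # Phase 2: distance from head to cycle start equals
            # distance from meeting point to cycle start
            slow = head
            while slow != fast:
                slow = slow.next
                fast = fast.next
            return slow  # node where the cycle begins
    return None  # no cycle
```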

Same-Direction Pointers (Partitioning)

Both pointers move in the same direction, but one moves ahead while the other marks a boundary. This is used for in-place array modifications.

Classic problem: Remove Duplicates from Sorted Array (LeetCode 26).

def remove_duplicates(nums):
    if not nums:
        return 0
    write = 1  # slow pointer: next position to write
    for read in range(1, len(nums)):  # fast pointer: scanning ahead
        if nums[read] != nums[read - 1]:
            nums[write] = nums[read]
            write += 1
    return write

The write pointer marks the boundary of the "processed" region. The read pointer scans ahead looking for new elements. This same structure applies to Remove Element, Move Zeroes, and Sort Colors (Dutch National Flag).
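Move Zeroes (LeetCode 283), for instance, is the same write/read structure with a different condition. A minimal sketch:

```python
def move_zeroes(nums):
    write = 0  # boundary: everything before it is nonzero
    for read in range(len(nums)):
        if nums[read] != 0:
            # Swap the nonzero element back to the boundary
            nums[write], nums[read] = nums[read], nums[write]
            write += 1
```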

How to recognize a two-pointer problem in an interview: Look for sorted arrays or linked lists where you need to find pairs, detect cycles, or partition elements in-place. If the brute force is O(n^2) with nested loops examining pairs, two pointers can likely reduce it to O(n).

Pattern 2: Sliding Window - The Caterpillar Technique

I call this the caterpillar technique because the window moves like a caterpillar - the front end extends forward, then the back end catches up. This is the pattern I failed to apply in my Google screen, and the one I have since used more than any other.

Sliding window solves problems where you need to find a contiguous subarray or substring that satisfies some condition. There are two variants.

Fixed-Size Window

When the window size is given, you maintain a window of exactly that size and slide it across the array.

Classic problem: Maximum Sum Subarray of Size K.

def max_sum_subarray(nums, k):
    window_sum = sum(nums[:k])  # sum of first window
    max_sum = window_sum

    for i in range(k, len(nums)):
        window_sum += nums[i]       # add new element entering window
        window_sum -= nums[i - k]   # remove element leaving window
        max_sum = max(max_sum, window_sum)

    return max_sum

The key insight: instead of recalculating the sum for each window from scratch (O(k) per window, O(nk) total), you maintain a running sum and update it incrementally as the window slides (O(1) per step, O(n) total).

Variable-Size Window

This is where the pattern gets powerful. When you need to find the smallest or largest window that satisfies a condition, you use an expanding/shrinking window.

Classic problem: Minimum Window Substring (LeetCode 76). Given strings s and t, find the minimum window in s that contains all characters of t.

from collections import Counter

def min_window(s, t):
    need = Counter(t)         # characters we need
    missing = len(t)          # total characters still needed
    left = 0
    best_start, best_len = 0, float('inf')

    for right in range(len(s)):
        # Expand: add s[right] to the window
        if need[s[right]] > 0:
            missing -= 1
        need[s[right]] -= 1

        # Shrink: while window satisfies the condition
        while missing == 0:
            # Update best answer
            window_len = right - left + 1
            if window_len < best_len:
                best_len = window_len
                best_start = left

            # Remove s[left] from the window
            need[s[left]] += 1
            if need[s[left]] > 0:
                missing += 1
            left += 1

    return "" if best_len == float('inf') else s[best_start:best_start + best_len]

The variable-size sliding window follows a universal template:

  1. Expand the window by moving right forward and updating the window state.
  2. Check if the window satisfies the condition.
  3. Shrink the window by moving left forward (while the condition is still satisfied, for minimum-length problems).
  4. Update the answer at the appropriate point.

This same template solves Longest Substring Without Repeating Characters (LeetCode 3), Longest Repeating Character Replacement (LeetCode 424), Permutation in String (LeetCode 567), and at least a dozen other medium/hard problems. Once you internalize the template, the only thing that changes between problems is the condition you are checking and the data structure you use to maintain window state.

The expand/shrink decision: For "find the minimum window that satisfies X" problems, you expand until you satisfy the condition, then shrink while you still satisfy it. For "find the maximum window that satisfies X" problems, you expand while you satisfy the condition, then shrink when you violate it.
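Here is the maximum-window side of that rule, sketched on Longest Substring Without Repeating Characters (LeetCode 3): expand while the window has no duplicates, shrink only when a duplicate appears.

```python
def length_of_longest_substring(s):
    seen = set()  # window state: characters currently inside the window
    left = 0
    best = 0
    for right in range(len(s)):
        # Shrink only when the condition is violated (duplicate found)
        while s[right] in seen:
            seen.remove(s[left])
            left += 1
        seen.add(s[right])
        best = max(best, right - left + 1)
    return best
```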

How to recognize a sliding window problem: The problem involves a contiguous subarray or substring, and it asks for something optimal (minimum, maximum, longest, shortest) with some constraint. If you see "contiguous" and "optimal" together, think sliding window.

Pattern 3: BFS/DFS - Think in Layers, Not Lines

Most candidates learn BFS and DFS as graph traversal algorithms and think of them as niche tools for graph problems. This is a mistake. BFS and DFS are general-purpose exploration patterns that apply to trees, matrices, state spaces, and even string manipulation problems.

When to Use BFS vs DFS

BFS (Breadth-First Search) explores level by level, like ripples spreading from a stone dropped in water. Use it when:

  • You need the shortest path in an unweighted graph or grid.
  • You need to process nodes level by level (e.g., level-order traversal of a tree).
  • You need to find the minimum number of steps/transformations to reach a target state.

DFS (Depth-First Search) explores as deep as possible before backtracking. Use it when:

  • You need to explore all possible paths (e.g., all permutations, all combinations).
  • You need to detect cycles in a graph.
  • You are working with trees and need to compute properties that depend on subtrees (height, diameter, path sums).
  • The problem involves backtracking (place queens, solve sudoku).
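To make the "all combinations" bullet concrete, here is a minimal backtracking sketch for Subsets (LeetCode 78): at each index you branch on excluding or including the element, and undo the choice on the way back up.

```python
def subsets(nums):
    result = []
    path = []
    def dfs(i):
        if i == len(nums):
            result.append(path[:])  # record one complete sequence of choices
            return
        dfs(i + 1)           # branch 1: exclude nums[i]
        path.append(nums[i])
        dfs(i + 1)           # branch 2: include nums[i]
        path.pop()           # backtrack: undo the choice
    dfs(0)
    return result
```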

BFS Level-Order Pattern

The level-order BFS pattern is one of the most versatile tools in your arsenal:

from collections import deque

def bfs_level_order(root):
    if not root:
        return []
    result = []
    queue = deque([root])

    while queue:
        level_size = len(queue)  # THIS LINE IS KEY
        level = []
        for _ in range(level_size):
            node = queue.popleft()
            level.append(node.val)
            if node.left:
                queue.append(node.left)
            if node.right:
                queue.append(node.right)
        result.append(level)

    return result

The critical technique is capturing level_size = len(queue) at the start of each level. This tells you exactly how many nodes belong to the current level, letting you process them as a batch before moving to the next level.

This same pattern solves: Binary Tree Level Order Traversal, Binary Tree Zigzag Level Order Traversal, Minimum Depth of Binary Tree, Rotting Oranges (grid BFS), Word Ladder, and Shortest Path in Binary Matrix.

Rotting Oranges (LeetCode 994) is a perfect example of BFS on a grid. You start by adding all rotten oranges to the queue (multi-source BFS). Each "level" represents one minute of elapsed time. At each level, you process all currently rotten oranges and rot their fresh neighbors. The number of levels processed is the answer.

from collections import deque

def oranges_rotting(grid):
    rows, cols = len(grid), len(grid[0])
    queue = deque()
    fresh = 0

    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 2:
                queue.append((r, c))
            elif grid[r][c] == 1:
                fresh += 1

    minutes = 0
    while queue and fresh > 0:
        minutes += 1
        for _ in range(len(queue)):
            r, c = queue.popleft()
            for dr, dc in [(0,1),(0,-1),(1,0),(-1,0)]:
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 1:
                    grid[nr][nc] = 2
                    fresh -= 1
                    queue.append((nr, nc))

    return minutes if fresh == 0 else -1

DFS for Tree Problems

For tree problems, DFS is almost always the right choice. The recursive structure of trees maps naturally to recursive DFS. Most tree DFS problems follow one of two patterns:

Top-down DFS (passing information down): You pass accumulated state as parameters.

def max_depth(node, current_depth=0):
    if not node:
        return current_depth
    return max(
        max_depth(node.left, current_depth + 1),
        max_depth(node.right, current_depth + 1)
    )

Bottom-up DFS (returning information up): You compute results from leaf nodes upward.

def diameter(root):
    best = 0
    def height(node):
        nonlocal best
        if not node:
            return 0
        left_h = height(node.left)
        right_h = height(node.right)
        best = max(best, left_h + right_h)  # diameter through this node
        return 1 + max(left_h, right_h)
    height(root)
    return best

How to recognize a BFS/DFS problem: Any problem involving trees, graphs, grids, or state-space exploration. If the problem says "shortest" or "minimum steps," lean toward BFS. If it says "all paths," "all combinations," or involves trees, lean toward DFS.

Pattern 4: Binary Search on Answer - The 2026 FAANG Favorite

This is the pattern that separates strong candidates from everyone else. Most people think of binary search as "searching for an element in a sorted array." That is the most basic application. The real power of binary search is searching over the answer space - and this pattern has become extremely popular in FAANG interviews over the past two years.

The core idea: instead of searching for the answer directly, you guess an answer and then verify whether that guess is feasible. If the verification function is monotonic (once it becomes true/false, it stays that way), you can binary search over the space of possible answers.

The Template

def binary_search_on_answer(lo, hi, is_feasible):
    while lo < hi:
        mid = (lo + hi) // 2
        if is_feasible(mid):
            hi = mid      # mid works, try smaller (for minimization)
        else:
            lo = mid + 1  # mid doesn't work, need bigger
    return lo

For maximization problems, flip the logic:

def binary_search_on_answer_max(lo, hi, is_feasible):
    while lo < hi:
        mid = (lo + hi + 1) // 2   # note: ceiling division to avoid infinite loop
        if is_feasible(mid):
            lo = mid      # mid works, try bigger
        else:
            hi = mid - 1  # mid doesn't work, need smaller
    return lo

Classic Problem: Koko Eating Bananas (LeetCode 875)

Koko has piles of bananas. She can eat at speed k bananas per hour (one pile per hour, even if she finishes early). She has h hours. Find the minimum k such that she can eat all bananas within h hours.

Brute force: try k=1, k=2, k=3... until you find one that works. This could be O(max_pile * n).

Binary search on answer: the answer space is [1, max(piles)]. For a given speed k, you can compute in O(n) whether Koko can finish in time. The feasibility function is monotonic - if she can finish at speed k, she can definitely finish at speed k+1.

import math

def min_eating_speed(piles, h):
    def can_finish(k):
        hours_needed = sum(math.ceil(pile / k) for pile in piles)
        return hours_needed <= h

    lo, hi = 1, max(piles)
    while lo < hi:
        mid = (lo + hi) // 2
        if can_finish(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

Time complexity: O(n log M) where M is the max pile size. Much better than O(n * M) brute force.

Classic Problem: Split Array Largest Sum (LeetCode 410)

Given an array of non-negative integers and an integer m, split the array into m contiguous subarrays such that the maximum subarray sum is minimized.

This looks like a hard DP problem. But with binary search on answer, it becomes clean:

  • Answer space: The minimum possible answer is max(nums) (each element must be in some subarray). The maximum possible answer is sum(nums) (one subarray containing everything).
  • Feasibility check: Given a maximum sum target, can you split the array into at most m subarrays where each subarray's sum is at most target? This is a greedy check - scan left to right, starting a new subarray whenever adding the next element would exceed target.

def split_array(nums, m):
    def can_split(max_sum):
        partitions = 1
        current_sum = 0
        for num in nums:
            if current_sum + num > max_sum:
                partitions += 1
                current_sum = num
                if partitions > m:
                    return False
            else:
                current_sum += num
        return True

    lo, hi = max(nums), sum(nums)
    while lo < hi:
        mid = (lo + hi) // 2
        if can_split(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

The pattern recognition cue: When a problem asks you to minimize a maximum or maximize a minimum, think binary search on answer. When the brute force would try every possible value for the answer, and you can write a feasibility check that is monotonic, binary search on answer almost certainly applies.

Other problems that use this pattern: Capacity To Ship Packages Within D Days (LeetCode 1011), Magnetic Force Between Two Balls (LeetCode 1552), Minimum Number of Days to Make m Bouquets (LeetCode 1482), and Minimize Max Distance to Gas Station (LeetCode 774).

Pattern 5: Dynamic Programming - Just Fancy Caching

DP has a reputation as the hardest pattern. I think that reputation is undeserved. DP is just caching the results of overlapping subproblems. The hard part is not DP itself - it is identifying the subproblem structure. Once you see the recurrence, the implementation is mechanical.

The Three-Step Pipeline

Every DP problem follows the same development pipeline:

Step 1: Write the recursive solution. Forget about efficiency. Just express the problem recursively. What is the base case? What choices do you make at each step? How do you combine results from subproblems?

Step 2: Add memoization. Identify which parameters change across recursive calls. Cache the result for each unique combination of parameters. This transforms exponential time into polynomial time with zero structural changes to your code.

Step 3: Convert to tabulation (optional). If you want to eliminate recursion overhead or if the interviewer asks, convert the top-down recursion into a bottom-up loop that fills a table. The table dimensions match the parameters of your recursive function.

Walkthrough: Coin Change (LeetCode 322)

Given coins of different denominations and a total amount, find the minimum number of coins needed.

Step 1: Recursive solution.

def coin_change_recursive(coins, amount):
    if amount == 0:
        return 0
    if amount < 0:
        return float('inf')

    best = float('inf')
    for coin in coins:
        result = coin_change_recursive(coins, amount - coin)
        best = min(best, result + 1)

    return best

This works but is exponential. It recomputes the same amounts over and over.

Step 2: Add memoization.

def coin_change_memo(coins, amount, memo=None):
    if memo is None:  # avoid Python's shared mutable default argument
        memo = {}
    if amount == 0:
        return 0
    if amount < 0:
        return float('inf')
    if amount in memo:
        return memo[amount]

    best = float('inf')
    for coin in coins:
        result = coin_change_memo(coins, amount - coin, memo)
        best = min(best, result + 1)

    memo[amount] = best
    return best

Now each unique amount is computed only once. Time: O(amount * len(coins)). Space: O(amount).

Step 3: Tabulation.

def coin_change_tab(coins, amount):
    dp = [float('inf')] * (amount + 1)
    dp[0] = 0

    for a in range(1, amount + 1):
        for coin in coins:
            if coin <= a:
                dp[a] = min(dp[a], dp[a - coin] + 1)

    return dp[amount] if dp[amount] != float('inf') else -1

Same logic, no recursion. The table dp[a] stores the minimum coins needed for amount a.

Common DP Shapes

Once you have solved enough DP problems, you start recognizing common shapes:

1D DP (linear sequence): The subproblem depends on previous elements in a sequence. Examples: Climbing Stairs, House Robber, Coin Change, Longest Increasing Subsequence.

2D DP (two sequences or grid): The subproblem depends on positions in two sequences or a 2D grid. Examples: Longest Common Subsequence, Edit Distance, Unique Paths, Minimum Path Sum.

Knapsack DP: Choose items with weights and values to maximize value within a capacity constraint. This is a special case of 2D DP (items x capacity). Examples: 0/1 Knapsack, Partition Equal Subset Sum, Target Sum.

Interval DP: The subproblem is defined over a contiguous interval [i, j]. You try all possible split points. Examples: Burst Balloons, Matrix Chain Multiplication, Minimum Cost Tree From Leaf Values.

State machine DP: The subproblem tracks a state that transitions based on actions. Examples: Best Time to Buy and Sell Stock (series), House Robber (which also appears under 1D DP - it can equally be framed as a two-state machine: robbed the previous house or not).

For each shape, the recurrence follows a pattern. Once you recognize the shape, you can write the recurrence almost mechanically.
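As one concrete shape, here is a sketch of the 2D-DP recurrence for Longest Common Subsequence (LeetCode 1143), where `dp[i][j]` is the LCS length of the first `i` characters of `a` and the first `j` characters of `b`:

```python
def longest_common_subsequence(a, b):
    # dp[i][j] = LCS length of a[:i] and b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1  # extend the match
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # skip one char
    return dp[len(a)][len(b)]
```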

The DP recognition cue: The problem asks for an optimal value (minimum, maximum, number of ways) and has overlapping subproblems (the same subproblem is solved multiple times in the recursive solution). If a greedy approach does not work (you cannot make locally optimal choices), DP is likely the answer.

The Pattern Recognition Drill

Knowing these five patterns is not enough. You need to train your brain to recognize which pattern applies to a new, unseen problem. Here is the drill I used:

The 3-Minute Rule

When you see a new problem, spend the first 3 minutes (no code, no paper) asking yourself:

  1. What does the brute force look like? Nested loops scanning pairs? Trying all subsets? Exploring all paths?
  2. What is the structure of the input? Sorted array? Graph/tree? String? Grid?
  3. What is the structure of the output? Single value (min/max/count)? Contiguous subarray? Path?

Then match against the patterns:

  • Sorted array + finding pairs? Two pointers.
  • Contiguous subarray/substring + optimization? Sliding window.
  • Shortest path / minimum steps / level-by-level? BFS.
  • All paths / combinations / tree properties? DFS.
  • Minimize a maximum / maximize a minimum? Binary search on answer.
  • Optimal value + overlapping subproblems? DP.

The 20-Problem Sprint

For each pattern, solve these problems in order. Do not move to the next pattern until you can solve all problems for the current one without hints:

Two Pointers (4 problems): Two Sum II, 3Sum, Container With Most Water, Trapping Rain Water.

Sliding Window (4 problems): Maximum Sum Subarray of Size K, Longest Substring Without Repeating Characters, Minimum Window Substring, Sliding Window Maximum.

BFS/DFS (4 problems): Binary Tree Level Order Traversal, Number of Islands, Word Ladder, Course Schedule.

Binary Search on Answer (4 problems): Koko Eating Bananas, Split Array Largest Sum, Capacity To Ship Packages, Minimum Number of Days to Make m Bouquets.

DP (4 problems): Climbing Stairs, Coin Change, Longest Common Subsequence, Edit Distance.

That is 20 problems total. But if you solve them with deep understanding - knowing why each pattern applies and being able to explain the decision - they are worth more than 200 problems solved by grinding.

After the Sprint

Once you have internalized the patterns, switch to random practice. Do 2-3 problems per day from mixed topics. For each problem, spend 3 minutes on pattern recognition before writing any code. If you cannot identify the pattern in 3 minutes, spend 5 more minutes thinking about it before looking at hints. Track your pattern recognition accuracy over time - it should climb above 80% within two weeks.

The goal is not to have seen every problem. The goal is to walk into an interview, see a problem you have never encountered, and within 3 minutes think: "This is a sliding window problem because it asks for the longest contiguous subarray satisfying a condition" - and then write the solution using the template you have internalized.

That is the difference between grinding and preparing. Grinding is volume. Preparing is pattern recognition. The five patterns above cover approximately 80% of the problems you will encounter in a FAANG coding interview. Master them deeply and you will be ready for almost anything.


Want to dive deeper into specific patterns? Check out our detailed guides on the two-pointer technique, dynamic programming for beginners, and the binary search on answer pattern.


Frequently Asked Questions

How many LeetCode problems should I solve?

The honest answer is that the number does not matter nearly as much as the depth of your understanding. That said, a practical target is 75 to 150 well-chosen problems solved with full understanding. The key phrase is "well-chosen" - solving 75 problems that are curated to cover the major patterns (two pointers, sliding window, BFS/DFS, binary search, DP, greedy, backtracking, union-find) will prepare you better than solving 500 random problems. Popular curated lists like Blind 75 and NeetCode 150 exist precisely for this reason. For each problem, you should be able to explain the time/space complexity, why the pattern applies, and how you would modify the solution if the constraints changed. If you can do that for 100 problems, you are ready for most FAANG interviews. If you are solving problems and cannot explain your approach without looking at your code, you are grinding, not learning.

What's the difference between two pointers and sliding window?

This is a common source of confusion because sliding window technically uses two pointers (a left and right boundary). The conceptual difference is in what the pointers represent and how they move. In the two pointers pattern, the pointers typically represent two individual elements being compared or manipulated - they converge from opposite ends, or one chases the other. The focus is on the elements at the pointer positions. In the sliding window pattern, the two pointers define the boundaries of a contiguous subarray or substring - the focus is on the window (the region between the pointers), not the individual elements at the boundaries. The window expands and contracts as you seek an optimal contiguous region. A practical rule: if the problem asks about pairs of elements (sum, comparison), it is probably two pointers. If it asks about a contiguous subarray or substring with some property (maximum length, minimum sum, contains all characters), it is probably sliding window.

Which LeetCode patterns are most common at Google?

Based on publicly available interview data and community reports, the patterns that appear most frequently at Google in 2025-2026 are: BFS/DFS and graph problems (Google loves graph traversal, shortest path, and connected components), dynamic programming (especially 2D DP, interval DP, and state machine DP), and binary search on answer (which has been trending upward at Google for the last two years). Google also asks more problems involving tries, segment trees, and advanced data structures than other FAANG companies. That said, the foundation remains the five patterns covered in this article - you will encounter two-pointer and sliding-window problems at Google as well. The difference is that Google tends to ask harder variants or combine patterns (e.g., a problem that requires both binary search and BFS). Start with the five core patterns, then add graph algorithms (Dijkstra's, topological sort, union-find) and advanced data structures (trie, segment tree, monotonic stack) for Google-specific preparation.

How long should I spend on a problem before looking at the solution?

The answer depends on where you are in your preparation. Early stage (first 30 problems): Spend 20-25 minutes. If you have not identified the approach by then, read only the hint or topic tag - not the full solution. Try again for 15 minutes with the hint. If you still cannot solve it, read the solution, understand it fully, then close it and solve it from scratch without looking. Mid stage (problems 30-75): Spend 30-35 minutes. You should be identifying patterns faster now. If you are stuck, the issue is likely in the implementation details rather than the approach. Pseudocode the approach, then try to implement it. Late stage (75+ problems): Spend 40-45 minutes, which is close to the actual interview time constraint. If you cannot solve it in 45 minutes, read the solution, but flag the problem for review in one week. The worst thing you can do is spend 2 hours on a single problem - the frustration creates negative associations and the time is better spent on two or three other problems. Equally bad is looking at the solution after 5 minutes - you never develop the struggle tolerance that interviews require. The sweet spot is spending enough time to genuinely wrestle with the problem but not so much that you are spinning your wheels unproductively.
