Binary Search Beyond Sorted Arrays - The Secret Pattern FAANG Loves to Test
Forget Everything You Know About Binary Search
Quick: when someone says "binary search," what do you picture?
A sorted array. Two pointers. Find the target. Return the index. Textbook stuff you learned in your first algorithms course.
And that is the problem. Because the version of binary search that actually shows up in FAANG interviews - the one that separates candidates who pass from candidates who do not - looks nothing like searching for a number in a sorted array.
Here is a real interview question from a top tech company in 2025:
"Koko loves eating bananas. There are
npiles of bananas. Thei-th pile haspiles[i]bananas. Guards will return inhhours. Koko can choose some eating speedk(bananas per hour). Each hour, she picks a pile and eatskbananas from it. If the pile has fewer thankbananas, she finishes it and waits for the next hour. Find the minimumksuch that she can eat all bananas withinhhours."
Where is the sorted array? Where are the two pointers searching for a target?
There is no sorted array. There is no target to look up. And yet the optimal solution is binary search.
Welcome to binary search on the answer space - the pattern that most candidates have never heard of, and the one that interviewers at Google, Amazon, Meta, and Microsoft have increasingly relied on in 2025 and 2026. I have spoken with dozens of candidates who cleared FAANG interviews in the past year, and this pattern came up more than any other variation of binary search.
In this guide, I am going to show you exactly how it works through three progressively harder interview problems. By the end, you will not only understand the pattern - you will be able to spot it in problems where it is deliberately disguised.
The Insight: Searching the Answer Space
Here is the fundamental idea. Forget about arrays for a moment and think about this:
If someone asks you "What is the minimum speed at which Koko can eat all bananas in time?", the answer is some number. That number has a range - it cannot be less than 1 (she has to eat at least one banana per hour) and it cannot be more than the size of the largest pile (eating faster than that gains nothing). So the answer lives somewhere in the range [1, max(piles)].
Now here is the key insight: if Koko can finish at speed k, she can definitely finish at any speed greater than k. And if she cannot finish at speed k, she definitely cannot finish at any speed less than k.
This is the monotonic property. The answer space can be divided into two regions:
Speed: 1 2 3 4 5 6 7 8 9 10 11 12
Works: N N N N N N Y Y Y Y  Y  Y
                   ^
                   This is our answer (minimum k that works)
There is a clear boundary: below some threshold, the answer does not work. At and above that threshold, it works. This is exactly the structure binary search was designed to exploit. Instead of checking every possible speed from 1 to max(piles) linearly, we can binary search for that boundary.
The general template has three components:
- Define the search space. What is the range of possible answers? What is the minimum possible answer? The maximum?
- Write a feasibility function. Given a candidate answer, can you check in O(n) or O(n log n) whether it is valid?
- Binary search for the boundary. Use binary search to find the smallest (or largest) valid answer.
This is not a hack or a trick. It is a legitimate and powerful algorithmic technique. The reason it works is that binary search does not actually require a sorted array. It requires a monotonic predicate - a function that is false for some prefix of the search space and true for the rest (or vice versa). A sorted array with a target value is just one specific instance of this. "Binary search on answer" is the general case.
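To make the connection concrete, here is a minimal sketch of binary search over an arbitrary monotonic predicate (first_true is just a name I am using for illustration, not a library function). Classic sorted-array search falls out as the special case where the predicate is "nums[i] >= target" - the result is the same value Python's bisect.bisect_left would return:

def first_true(lo, hi, predicate):
    # Smallest x in [lo, hi) for which predicate(x) is True,
    # assuming the predicate is False for a prefix of the range and True after.
    while lo < hi:
        mid = (lo + hi) // 2
        if predicate(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

# Sorted-array search as a special case: find the insertion point for 5.
nums = [2, 4, 4, 7, 9]
target = 5
print(first_true(0, len(nums), lambda i: nums[i] >= target))  # 3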
Problem 1: Koko Eating Bananas (The Gateway Drug)
Let us solve this problem completely. This is LeetCode 875, and it is the problem that teaches most people this pattern for the first time.
Problem statement: There are n piles of bananas. Pile i has piles[i] bananas. Koko has h hours. She picks an eating speed k (bananas per hour). Each hour, she eats k bananas from one pile (or finishes the pile if fewer than k remain). Find the minimum integer k such that she can eat all bananas within h hours.
Step 1: Define the Search Space
The answer k (eating speed) must be at least 1. The maximum useful speed is max(piles) - if she eats at the speed of the largest pile, she finishes any pile in one hour. Going faster is wasteful.
Search space: [1, max(piles)]
Step 2: Write the Feasibility Function
Given a speed k, how many hours does Koko need? For each pile of size p, she needs ceil(p / k) hours. Sum across all piles. If the total is less than or equal to h, the speed is feasible.
import math

def can_finish(piles, k, h):
    total_hours = 0
    for p in piles:
        total_hours += math.ceil(p / k)
    return total_hours <= h
This runs in O(n) where n is the number of piles. Fast.
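One implementation note: math.ceil(p / k) goes through floating point, which is fine for typical interview constraints but can lose precision on very large integers. If you would rather stay in integer arithmetic, the standard ceiling-division idiom is an equivalent way to write the same check:

def can_finish(piles, k, h):
    # (p + k - 1) // k is ceil(p / k) using only integer arithmetic.
    total_hours = sum((p + k - 1) // k for p in piles)
    return total_hours <= h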
Step 3: Binary Search for the Boundary
We want the minimum k such that can_finish(piles, k, h) returns True.
def min_eating_speed(piles, h):
    left, right = 1, max(piles)
    while left < right:
        mid = (left + right) // 2
        if can_finish(piles, mid, h):
            right = mid  # mid works, but maybe something smaller also works
        else:
            left = mid + 1  # mid doesn't work, need to go faster
    return left
Let me trace through an example. Suppose piles = [3, 6, 7, 11] and h = 8.
- Search space: [1, 11]
- left = 1, right = 11, mid = 6: Hours needed = ceil(3/6) + ceil(6/6) + ceil(7/6) + ceil(11/6) = 1 + 1 + 2 + 2 = 6. 6 <= 8, so right = 6.
- left = 1, right = 6, mid = 3: Hours needed = ceil(3/3) + ceil(6/3) + ceil(7/3) + ceil(11/3) = 1 + 2 + 3 + 4 = 10. 10 > 8, so left = 4.
- left = 4, right = 6, mid = 5: Hours needed = ceil(3/5) + ceil(6/5) + ceil(7/5) + ceil(11/5) = 1 + 2 + 2 + 3 = 8. 8 <= 8, so right = 5.
- left = 4, right = 5, mid = 4: Hours needed = ceil(3/4) + ceil(6/4) + ceil(7/4) + ceil(11/4) = 1 + 2 + 2 + 3 = 8. 8 <= 8, so right = 4.
- left = 4, right = 4: Loop exits. Answer is 4.
Total time complexity: O(n log m) where n is the number of piles and m is the maximum pile size. Compare this to the brute-force approach of checking every speed from 1 to max(piles), which would be O(n * m). The binary search on the answer space gives us a logarithmic factor instead of a linear one.
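If you want to double-check the trace, a quick cross-check against the brute force is a good habit while practicing. This snippet assumes the can_finish and min_eating_speed functions defined above are in scope:

def min_eating_speed_brute(piles, h):
    # O(n * m) reference implementation: try every speed until one works.
    k = 1
    while not can_finish(piles, k, h):
        k += 1
    return k

piles, h = [3, 6, 7, 11], 8
assert min_eating_speed(piles, h) == min_eating_speed_brute(piles, h) == 4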
This is the gateway drug. Once you see how clean and powerful this pattern is, you start seeing it everywhere.
Problem 2: Split Array Largest Sum (The Mind-Bender)
This is LeetCode 410, and it is where the pattern starts to feel like a superpower. The problem is significantly harder, but the structure is identical.
Problem statement: Given an integer array nums and an integer k, split the array into k non-empty contiguous subarrays such that the largest sum among the subarrays is minimized. Return the minimized largest sum.
At first glance, this looks like a dynamic programming problem. And it can be solved with DP. But the binary search on answer approach is more elegant and, frankly, easier to implement under interview pressure.
Step 1: Define the Search Space
What is the range of possible answers? The answer is the "largest subarray sum" - the maximum sum among the k subarrays.
- Minimum possible answer: max(nums). Even if you make each element its own subarray (if k is large enough), the largest sum is at least the largest individual element. You cannot split a single element.
- Maximum possible answer: sum(nums). If k = 1, the entire array is one subarray, and the largest sum equals the total sum.
Search space: [max(nums), sum(nums)]
Step 2: Write the Feasibility Function
Given a candidate answer max_sum (the maximum sum any subarray is allowed to have), can we split the array into at most k contiguous subarrays where each subarray's sum is at most max_sum?
Greedy approach: scan left to right, accumulating elements into the current subarray. When adding the next element would exceed max_sum, start a new subarray. Count how many subarrays you need.
def can_split(nums, k, max_sum):
    subarrays_needed = 1
    current_sum = 0
    for num in nums:
        if current_sum + num > max_sum:
            subarrays_needed += 1
            current_sum = num
            if subarrays_needed > k:
                return False
        else:
            current_sum += num
    return True
This runs in O(n). The greedy strategy works because we are minimizing the largest sum - by packing as many elements as possible into each subarray (without exceeding max_sum), we minimize the number of splits needed.
Step 3: Binary Search for the Boundary
We want the minimum max_sum such that can_split returns True.
def split_array(nums, k):
    left, right = max(nums), sum(nums)
    while left < right:
        mid = (left + right) // 2
        if can_split(nums, k, mid):
            right = mid  # This max_sum works, try smaller
        else:
            left = mid + 1  # Need to allow larger subarrays
    return left
Let me trace through an example: nums = [7, 2, 5, 10, 8], k = 2.
- Search space: [10, 32]
- left = 10, right = 32, mid = 21: Can we split with max_sum 21? [7,2,5] (sum=14) and [10,8] (sum=18). 2 subarrays <= 2. Yes. right = 21.
- left = 10, right = 21, mid = 15: Can we split with max_sum 15? [7,2,5] (sum=14), [10] (sum=10), [8] (sum=8). 3 subarrays > 2. No. left = 16.
- left = 16, right = 21, mid = 18: Can we split with max_sum 18? [7,2,5] (sum=14), [10,8] (sum=18). 2 subarrays <= 2. Yes. right = 18.
- left = 16, right = 18, mid = 17: Can we split with max_sum 17? [7,2,5] (sum=14), [10] (sum=10) - can we add 8? 10+8=18 > 17, so no: [10], [8]. 3 subarrays > 2. No. left = 18.
- left = 18, right = 18: Loop exits. Answer is 18.
The DP solution for this problem has O(n^2 * k) complexity. The binary search approach is O(n log S) where S is the sum of the array. For large inputs, this is dramatically faster.
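A quick way to convince yourself of the monotonic property (and to exercise the code above) is to evaluate the feasibility check across the whole answer range for the traced example. This assumes can_split and split_array from above are in scope:

nums, k = [7, 2, 5, 10, 8], 2
flags = [can_split(nums, k, cand) for cand in range(max(nums), sum(nums) + 1)]
# flags is False for candidates 10..17 and True for 18..32:
# a single False -> True boundary, which is exactly what binary search needs.
print(flags.index(True) + max(nums))  # 18
print(split_array(nums, k))           # 18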
Here is the beautiful thing: the problem statement says nothing about binary search. It says "minimize the largest sum." Recognizing that you can binary search on the answer - "What if the largest sum were X? Could I make it work?" - is the creative leap this pattern demands.
Problem 3: Minimum Days to Make Bouquets (The Sneaky One)
This is LeetCode 1482, and it is the problem that teaches you to recognize the pattern when it is deliberately hidden behind a story.
Problem statement: You have an integer array bloomDay where bloomDay[i] is the day on which the i-th flower blooms. You need to make m bouquets. Each bouquet requires k adjacent flowers. You can only use a flower if it has bloomed. Return the minimum number of days you need to wait to make m bouquets. If it is impossible, return -1.
The story dressing makes this feel like a simulation problem. Maybe you should simulate day by day? Maybe there is some greedy approach based on bloom ordering?
No. Step back and ask: what are we looking for? The minimum number of days. Days is a number. That number has a range. And the feasibility is monotonic - if you can make the bouquets by day d, you can definitely make them by day d+1 (more flowers are bloomed). If you cannot by day d, you cannot by any earlier day either.
Binary search on the answer.
Step 1: Define the Search Space
- Minimum possible answer: min(bloomDay). Before the earliest bloom, no flowers are available.
- Maximum possible answer: max(bloomDay). By the day the last flower blooms, all flowers are available.
Edge case: if m * k > len(bloomDay), we do not have enough flowers regardless. Return -1 immediately.
Step 2: Write the Feasibility Function
Given a candidate day d, how many bouquets can we make? Scan the array left to right. A flower is available if bloomDay[i] <= d. Count consecutive available flowers. Every time we get k consecutive ones, that is one bouquet.
def can_make_bouquets(bloomDay, m, k, day):
    bouquets = 0
    consecutive = 0
    for bloom in bloomDay:
        if bloom <= day:
            consecutive += 1
            if consecutive == k:
                bouquets += 1
                consecutive = 0
        else:
            consecutive = 0
    return bouquets >= m
O(n) per check. Clean.
Step 3: Binary Search
def min_days(bloomDay, m, k):
    if m * k > len(bloomDay):
        return -1
    left, right = min(bloomDay), max(bloomDay)
    while left < right:
        mid = (left + right) // 2
        if can_make_bouquets(bloomDay, m, k, mid):
            right = mid
        else:
            left = mid + 1
    return left
Example: bloomDay = [1, 10, 3, 10, 2], m = 3, k = 1.
We need 3 bouquets, each needing 1 adjacent flower. Since k=1, we just need 3 bloomed flowers.
- Search space: [1, 10]
- mid = 5: Flowers bloomed by day 5: indices 0 (day 1), 2 (day 3), 4 (day 2). That is 3 flowers, 3 bouquets. Works. right = 5.
- mid = 3: Flowers bloomed by day 3: indices 0, 2, 4. Still 3 bouquets. Works. right = 3.
- mid = 2: Flowers bloomed by day 2: indices 0, 4. Only 2 bouquets. Does not work. left = 3.
- left = 3, right = 3: Loop exits. Answer is 3.
The sneaky part of this problem is the adjacency constraint. When k > 1, you must find k consecutive bloomed flowers. This makes the feasibility function slightly more involved, but the binary search framework is unchanged. The pattern carries the load; you just plug in the right feasibility check.
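To see the adjacency constraint in action, here is a small hypothetical test with k = 3, assuming the two functions above are in scope. Six of the seven flowers have bloomed by day 7, but the late bloomer in the middle breaks the run, so only one bouquet of three adjacent flowers is possible before day 12:

bloomDay, m, k = [7, 7, 7, 7, 12, 7, 7], 2, 3
print(can_make_bouquets(bloomDay, m, k, 7))   # False: runs of 4 and 2 give only one bouquet
print(can_make_bouquets(bloomDay, m, k, 12))  # True: a run of 7 gives two bouquets
print(min_days(bloomDay, m, k))               # 12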
The Template That Works Every Time
After solving these three problems, the general template becomes clear. Here it is, abstracted into pseudocode that you can adapt to any "binary search on answer" problem:
def binary_search_on_answer():
    # Step 1: Define the search space
    left = minimum_possible_answer
    right = maximum_possible_answer

    # Step 2: Binary search
    while left < right:
        mid = (left + right) // 2
        # If we're looking for the MAXIMUM valid answer, round the midpoint up instead:
        # mid = (left + right + 1) // 2
        if is_feasible(mid):
            # If we're looking for the MINIMUM valid answer:
            right = mid
            # If we're looking for the MAXIMUM valid answer:
            # left = mid
        else:
            # If we're looking for the MINIMUM valid answer:
            left = mid + 1
            # If we're looking for the MAXIMUM valid answer:
            # right = mid - 1
    return left
There are two variants depending on whether you want the minimum or maximum valid answer:
Finding the minimum valid answer (most common):
- When feasible: right = mid (the answer might be smaller)
- When not feasible: left = mid + 1 (need to go higher)
Finding the maximum valid answer (with the ceiling midpoint, mid = (left + right + 1) // 2):
- When feasible: left = mid (the answer might be bigger)
- When not feasible: right = mid - 1 (need to go lower)
Wait - why does the maximum variant round the midpoint up? Because we are using the left < right termination condition with integer arithmetic: at the end of the loop, left == right, and that is our answer. If you set left = mid with the usual floor midpoint, you risk an infinite loop when left + 1 == right (then mid = (left + right) // 2 = left, and setting left = mid changes nothing). Computing mid = (left + right + 1) // 2 rounds up, so mid == right in that situation and the interval always shrinks. The minimum variant does not have this problem, because both of its updates (right = mid and left = mid + 1) make progress with the floor midpoint.
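Here is the maximum variant written out as a standalone sketch (last_true is my name for it, not a standard library function); note the ceiling midpoint doing the work described above:

def last_true(lo, hi, is_feasible):
    # Largest x in [lo, hi] with is_feasible(x) True, assuming feasibility
    # holds for a prefix of the range and then fails for the rest.
    while lo < hi:
        mid = (lo + hi + 1) // 2   # round up so mid > lo; the interval always shrinks
        if is_feasible(mid):
            lo = mid               # mid works; the answer is mid or something larger
        else:
            hi = mid - 1           # mid fails; the answer is strictly smaller
    return lo

# Toy check: the largest x in [0, 10] with 3 * x <= 20 is 6.
print(last_true(0, 10, lambda x: 3 * x <= 20))  # 6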
The feasibility function is_feasible(mid) is where the problem-specific logic lives. It takes a candidate answer and returns a boolean. The art of this pattern is recognizing that a given problem can be reframed as "Is answer X feasible?" with a monotonic yes/no boundary.
Time Complexity Analysis
If the search space has size S and the feasibility check takes O(f(n)), the total complexity is O(f(n) * log S).
- Koko Eating Bananas: O(n * log(max_pile))
- Split Array Largest Sum: O(n * log(sum))
- Minimum Days to Make Bouquets: O(n * log(max_bloom_day))
In all cases, log S is at most around 30-32 (since answer spaces rarely exceed 10^9). So the binary search adds a constant factor of about 30 to the feasibility check cost. This is almost always an excellent tradeoff.
Spotting "Hidden" Binary Search in Interviews
The hardest part of this pattern is not implementing it. The template is mechanical. The hard part is recognizing that a problem calls for it in the first place, especially when the problem is dressed up in a story that has nothing to do with searching.
Here are the signal words and characteristics I look for:
Signal Words in Problem Statements
- "Minimize the maximum..." - Almost always binary search on answer. You are searching for the smallest value of a maximum that is still feasible.
- "Maximize the minimum..." - Same pattern, flipped. Binary search for the largest value of a minimum that is feasible.
- "What is the minimum X such that..." - Direct signal. X is your search space.
- "Find the smallest/largest value that satisfies..." - Another direct signal.
- "Is it possible to achieve X in at most Y?" - The feasibility question is already written for you.
Structural Characteristics
- The answer is a number with a bounded range. You can define left and right.
- There is a monotonic property. If answer X works, then X+1 also works (or vice versa). The feasibility function does not flip back and forth.
- You can check feasibility in polynomial time. Given a candidate answer, you can verify it efficiently (usually O(n) or O(n log n)).
- The brute-force approach would try every possible answer. If the naive solution is "try all values from 1 to N and check each one," binary search cuts that to log N.
More Problems That Use This Pattern
Here is a list of LeetCode problems that use binary search on answer, organized by difficulty:
Medium:
- 875 - Koko Eating Bananas
- 1011 - Capacity To Ship Packages Within D Days
- 1482 - Minimum Number of Days to Make m Bouquets
- 1283 - Find the Smallest Divisor Given a Threshold
- 2064 - Minimized Maximum of Products Distributed to Any Store
Hard:
- 410 - Split Array Largest Sum
- 774 - Minimize Max Distance to Gas Station
- 1231 - Divide Chocolate
- 2141 - Maximum Running Time of N Computers
Every one of these follows the same three-step structure: define the search space, write the feasibility check, binary search. The problems differ only in the feasibility function.
Practice Strategy
If you want to master this pattern, I recommend the following progression:
- Start with Koko Eating Bananas (875). This is the purest example. The feasibility function is trivial. Focus on internalizing the template.
- Do Capacity To Ship Packages (1011). Same difficulty, slightly different feasibility check. Builds confidence.
- Do Minimum Days for Bouquets (1482). The adjacency constraint makes the feasibility function more interesting.
- Do Split Array Largest Sum (410). The greedy feasibility function is the new challenge here.
- Attempt Minimize Max Distance to Gas Station (774). This requires floating-point binary search, which introduces a new wrinkle: use a precision-based termination condition instead of left < right. A short sketch follows this list.
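For reference, here is roughly what that looks like - a generic sketch of floating-point binary search for a minimum feasible real value. The epsilon threshold (and the fixed-iteration alternative mentioned in the comments) are common conventions, not requirements of any particular problem:

def min_feasible_real(lo, hi, is_feasible, eps=1e-6):
    # Continuous answer space: shrink the interval until it is narrower than eps.
    # A common alternative is a fixed number of halvings (e.g., for _ in range(100)),
    # which sidesteps floating-point termination concerns entirely.
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if is_feasible(mid):
            hi = mid
        else:
            lo = mid
    return hi

# Toy check: the smallest x >= 0 with x * x >= 2 is sqrt(2).
print(round(min_feasible_real(0.0, 2.0, lambda x: x * x >= 2), 5))  # 1.41421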
After solving five problems with this pattern, something clicks. You stop seeing individual problems and start seeing the structure. The feasibility function changes, but the binary search wrapper stays the same. That is the sign that you have internalized the pattern.
For structured practice with instant feedback on your approach, Levelop's coding practice module groups problems by pattern - including a dedicated binary search track that progresses from basic sorted-array problems to the "binary search on answer" problems covered here.
The next time you are in an interview and the brute force involves iterating over all possible values of the answer, stop. Ask yourself: is there a monotonic property? Can I binary search this? More often than you think, the answer is yes.
And unlike the candidate who grinds 500 LeetCode problems hoping to memorize every solution, you will walk into that interview with a tool that adapts to problems you have never seen before. That is the difference between memorization and understanding. And understanding is what gets you through the problems that matter.
Frequently Asked Questions
What is binary search on answer?
Binary search on answer is a technique where instead of searching for a target in a sorted array, you binary search over the space of possible answers to an optimization problem. The idea is that if the answer to a problem is some number in a range [lo, hi], and there exists a monotonic "feasibility function" - meaning if answer X is feasible then all answers greater than X are also feasible (or vice versa) - you can use binary search to efficiently find the optimal answer. For each candidate answer (the midpoint), you run a feasibility check that determines whether that answer is achievable. Based on the result, you eliminate half the search space. This reduces what would be a linear scan over all possible answers into a logarithmic number of feasibility checks. The technique appears frequently in FAANG interviews and is applicable to a wide range of optimization problems including resource allocation, scheduling, and partitioning.
How is binary search used beyond sorted arrays?
The core principle of binary search is not "search in a sorted array" - it is "eliminate half the search space based on a condition." Any scenario where you can evaluate a condition at a midpoint and definitively rule out one half of the space is a candidate for binary search. Beyond sorted arrays, common applications include: binary search on answer (searching the space of possible answers to optimization problems), binary search on the result of a monotonic function (e.g., finding the square root of a number), binary search on time (e.g., "what is the earliest day by which condition X is met?"), and binary search on index with a predicate (e.g., finding the first version that introduced a bug, as in LeetCode 278). The unifying theme is the monotonic property: some predicate flips from false to true (or true to false) at exactly one point, and binary search efficiently finds that transition point.
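For instance, the square-root case can be phrased as "the largest x with x * x <= n" and searched directly. Here is a minimal integer sketch (isqrt_floor is just an illustrative name):

def isqrt_floor(n):
    # Binary search on the answer space [0, n] for the largest x with x * x <= n.
    lo, hi = 0, n
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid * mid <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

print(isqrt_floor(26))  # 5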
What is the time complexity of binary search on answer?
The time complexity is O(f(n) * log S), where f(n) is the cost of the feasibility check and S is the size of the search space (the range of possible answers). For example, in Koko Eating Bananas with n piles and a maximum pile size of M, the feasibility check is O(n) and the search space is [1, M], so the total complexity is O(n log M). In Split Array Largest Sum with array sum S, it is O(n log S). The log S factor is typically very small in practice - for answer spaces up to 10^9, log S is about 30. This makes binary search on answer extremely efficient compared to brute-force approaches that check every possible answer value, which would be O(f(n) * S). The space complexity is usually O(1) beyond the input, since the binary search itself only needs a few variables and the feasibility check is typically a single pass through the data.
How do I know when to apply binary search to a problem?
Look for three signals. First, the problem asks you to optimize (minimize or maximize) a numerical value - "minimum speed," "maximum minimum distance," "smallest divisor," etc. If the answer is a number with a definable range, that is signal one. Second, there is a monotonic relationship between the answer and feasibility: if eating speed 5 works, then speed 6 also works; if you cannot split into 3 subarrays with max sum 10, you definitely cannot with max sum 9. This monotonic property is essential - without it, binary search cannot eliminate half the space. Third, you can write an efficient feasibility check: given a candidate answer, you can verify in O(n) or O(n log n) whether it is achievable. If all three conditions hold, binary search on answer almost certainly applies. A practical heuristic: if the brute-force solution is "try every possible value of the answer and check each one," that is a linear scan over the answer space - and binary search can replace it. The phrase "minimize the maximum" or "maximize the minimum" in a problem statement is nearly a guarantee that this pattern applies.