Recursive algorithms reduce a problem of size n to one or more sub-problems of smaller size, solve those recursively, and then combine the results. Their running time cannot be expressed as a simple closed-form sum — it is defined recursively too, as a recurrence relation. The goal of this lecture is to develop methods for solving such relations and extracting their asymptotic class.
We already encountered this in the Fibonacci case: the running time of the naive recursive algorithm satisfied T(n)=T(n−1)+T(n−2)+1, and the analysis required unrolling the recursion to obtain exponential bounds. Here we build a systematic toolkit.
Setup and Conventions
A recurrence relation arising from a recursive algorithm typically looks like:
T(n) = Θ(1)               if n = 1
T(n) = aT(n/b) + f(n)     if n > 1
Before applying any method, two simplifications are standard and do not affect the asymptotic result:
Replace asymptotic notation with concrete constants. Θ(1) becomes d, O(n) becomes cn, and so on. This lets us work with equalities rather than set memberships.
Drop floors and ceilings. ⌊n/b⌋ and ⌈n/b⌉ both become n/b. This is justified: the asymptotic behavior is unchanged, and the algebra becomes much cleaner.
So the canonical form we work with is:
T(n) = 1                  if n = 1
T(n) = aT(n/b) + cn^β     if n > 1
where all constants are positive.
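For concreteness, the canonical form can be evaluated directly. A minimal Python sketch (the function name and sample parameters are mine, not from the lecture), valid when n is a power of b:

```python
def T(n, a, b, beta, c=1.0):
    """Evaluate T(n) = 1 if n == 1, else a*T(n/b) + c*n**beta."""
    if n == 1:
        return 1.0
    return a * T(n // b, a, b, beta, c) + c * n ** beta

# Example: a = 2, b = 2, beta = 1 (the merge sort shape) at n = 2**10.
# The exact solution here is n*log2(n) + n = 10*1024 + 1024.
print(T(2 ** 10, 2, 2, 1))   # → 11264.0
```

Such direct evaluation does not give the asymptotic class, but it is useful for checking any closed form derived by the methods below.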
The Iteration Method
The iteration method is a direct, mechanical approach: substitute the recurrence into itself repeatedly until a pattern emerges, then evaluate at the base case.
As a worked example, take T(n) = 3T(n/4) + n² with T(1) = 1 (the constant c is set to 1 to reduce clutter). Unrolling i times gives

T(n) = 3^i · T(n/4^i) + n² · ∑_{k=0}^{i−1} (3/16)^k

since each level triples the number of subproblems while the per-subproblem cost (n/4^k)² shrinks by a factor of 16. Recursion ends when n/4^i = 1, i.e., i = log₄ n. The geometric sum converges since 3/16 < 1:

∑_{k=0}^{i−1} (3/16)^k = (1 − (3/16)^i) / (1 − 3/16) = (16/13)(1 − (3/16)^i)

So the total is 3^(log₄ n) · T(1) + (16/13) n² (1 − (3/16)^(log₄ n)). Using 3^(log₄ n) = n^(log₄ 3) and (3/16)^(log₄ n) = n^(log₄ 3)/n², this equals (16/13) n² − (3/13) n^(log₄ 3). Since log₄ 3 ≈ 0.79 < 2, the n² term dominates: T(n) = Θ(n²).
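The algebra can be sanity-checked numerically. A short sketch (with c = 1, as in the derivation; the function names are mine) comparing the recurrence against the closed form (16/13)n² − (3/13)n^(log₄ 3):

```python
import math

def T(n):
    # T(n) = 3T(n/4) + n^2, T(1) = 1, for n a power of 4
    return 1 if n == 1 else 3 * T(n // 4) + n ** 2

def closed_form(n):
    # result of summing the geometric series in the unrolled recurrence
    return (16 / 13) * n ** 2 - (3 / 13) * n ** math.log(3, 4)

for n in [4, 64, 4096]:
    print(n, T(n), round(closed_form(n)))
```

The two columns agree exactly, confirming the unrolling.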
The Master Theorem
The Master Theorem gives an immediate answer for a large family of divide-and-conquer recurrences without requiring the iteration method's algebra. Its domain of applicability is recurrences of the form:
T(n) = d                  if n = 1
T(n) = aT(n/b) + f(n)     if n > 1
with a≥1, b>1, and f(n) asymptotically positive. This models algorithms that split a problem of size n into a subproblems each of size n/b, with f(n) being the work done at the current level (the "combine" cost).
The Theorem
The key quantity is n^(log_b a). The three cases each correspond to a different relationship between f(n) and this threshold:
Case 1. If f(n) = O(n^(log_b a − ϵ)) for some ϵ > 0, then T(n) = Θ(n^(log_b a)).
The recursive work dominates. The combine cost f(n) grows polynomially slower than n^(log_b a).
Case 2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
The recursive work and the combine cost are evenly matched, so each level of recursion contributes equally and the log n levels accumulate a log n factor.
Case 3. If f(n) = Ω(n^(log_b a + ϵ)) for some ϵ > 0, and a · f(n/b) ≤ c · f(n) for some c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
The combine cost dominates. The additional condition — called the regularity condition — ensures that f behaves consistently across recursion levels and is not growing in a pathological way. For well-behaved functions (polynomials, logarithms) it holds automatically.
Simplified Version
When f(n) = cn^β (a pure polynomial combine cost), the theorem reduces to a clean comparison of α = log_b a and β:
Condition    Result
α > β        T(n) = Θ(n^α) — recursion dominates
α = β        T(n) = Θ(n^α · log n) — balanced
α < β        T(n) = Θ(n^β) — combine dominates
This is the version used in most practical analyses. The worked example from the iteration method can be verified immediately: there α = log₄ 3 ≈ 0.79 and β = 2, so α < β and T(n) = Θ(n²), as found before.
The theorem requires the recurrence to have the exact form aT(n/b)+f(n). It does not apply when subproblems have different sizes (e.g., T(n)=T(n/2)+T(n/3)+n), when the argument decreases by subtraction rather than division (e.g., T(n)=T(n−1)+c), or when f(n) falls in a gap between cases 1 and 2 or 2 and 3 without the required ϵ-polynomial separation. The Fibonacci recurrence T(n)=T(n−1)+T(n−2)+1 falls entirely outside its scope.
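When the simplified version does apply, the comparison is mechanical enough to write down as code. A sketch (function name mine) that returns the asymptotic class given a, b, and the exponent β of a polynomial combine cost:

```python
import math

def master_simple(a, b, beta):
    """Simplified Master Theorem for T(n) = a*T(n/b) + c*n**beta."""
    alpha = math.log(a, b)           # alpha = log_b(a)
    if abs(alpha - beta) < 1e-12:    # alpha == beta: balanced case
        return f"Theta(n^{beta} log n)"
    if alpha > beta:                 # recursion dominates
        return f"Theta(n^{alpha:.3g})"
    return f"Theta(n^{beta})"        # combine dominates

print(master_simple(3, 4, 2))   # T(n) = 3T(n/4) + cn^2
print(master_simple(1, 2, 0))   # binary search: T(n) = T(n/2) + c
print(master_simple(2, 4, 0))   # T(n) = 2T(n/4) + c
```

Note that this covers only the polynomial case: the ϵ-gap and regularity subtleties of the general theorem disappear when f(n) = cn^β.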
Application: Binary Search
The recursive formulation of binary search produces the recurrence:
T(n) = 1              if n = 0
T(n) = T(n/2) + 1     if n > 0
The base case n=0 corresponds to an empty search space (the recursion bottoms out when i>j). Each call does O(1) work plus one recursive call on a half-sized subarray.
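A minimal recursive implementation matching this recurrence, as a Python sketch (the indices i, j follow the convention above; the rest is mine):

```python
def binary_search(A, x, i, j):
    """Return an index of x in the sorted range A[i..j], or -1 if absent."""
    if i > j:                    # empty search space: the n = 0 base case
        return -1
    m = (i + j) // 2             # O(1) work per call
    if A[m] == x:
        return m
    if A[m] < x:
        return binary_search(A, x, m + 1, j)   # recurse on right half
    return binary_search(A, x, i, m - 1)       # recurse on left half

A = [2, 3, 5, 7, 11, 13]
print(binary_search(A, 11, 0, len(A) - 1))   # → 4
print(binary_search(A, 4, 0, len(A) - 1))    # → -1
```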
By the Master Theorem: a = 1, b = 2, α = log₂ 1 = 0, β = 0. Since α = β:
T(n) = Θ(n^0 · log n) = Θ(log n)
This confirms the iterative analysis from the previous lecture.
Exact Solution for Fibonacci Recurrence
The bounds from Lecture 1 were T(n) = Ω(2^(n/2)) and T(n) = O(2^n). These are correct but loose. The exact value can be pinned down.
Theorem. Let T(n) be the number of nodes in the recursion tree of Fib2. Then T(n) = 2F_n − 1.
The proof is by induction. The conclusion is that T(n) = Θ(F_n).
Now, from the closed formula: F_n = (φ^n − φ̂^n)/√5, where φ ≈ 1.618 and |φ̂| < 1. For large n the φ̂^n term vanishes and F_n ∼ φ^n/√5, so F_n = Θ(φ^n).
Therefore:
T(n) = Θ(φ^n) ≈ Θ(1.618^n)
This is a tighter result than the earlier bounds of Ω(1.41^n) and O(2^n). The exact base of the exponential is the golden ratio. The earlier bounds were correct but not tight — the true growth lies strictly between 2^(n/2) ≈ 1.41^n and 2^n.
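The closed formula can be checked numerically: since |φ̂| < 1, the correction term φ̂^n/√5 is always below 1/2 in absolute value, so F_n is the nearest integer to φ^n/√5. A quick Python sketch (helper name mine):

```python
import math

phi = (1 + math.sqrt(5)) / 2   # the golden ratio, ≈ 1.618

def fib(n):
    """Iterative Fibonacci with F_0 = 0, F_1 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Rounding phi**n / sqrt(5) recovers F_n exactly.
for n in range(1, 41):
    assert fib(n) == round(phi ** n / math.sqrt(5))
print("F_n = round(phi^n / sqrt(5)) holds for n = 1..40")
```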
Worked Exercises
Exercise A — Computational Complexity
Analyze the cost T(n) of the following algorithm in terms of n:
function mystery1(int n) → int:
    x = 0
    tot = 0
    while n ≥ 1 do:
        n = n/2
        x = x + 1
        tot = tot + mystery2(x)
    return tot

function mystery2(int n) → int:
    let A[1···n] be a new Array
    for i = 1 ··· n do:
        A[i] = 0
    return A[n]
Step 1: analyze mystery2. The loop executes exactly n times with O(1) work per iteration. Cost: Θ(n).
Step 2: analyze the while loop in mystery1. The variable n is halved at each iteration; as in binary search, the loop runs Θ(log₂ n) times. The variable x counts iterations, so at the k-th iteration (k = 1, 2, …, log n), mystery2 is called with input x = k.
The point of this exercise is that the argument to mystery2 is not n — it is the loop counter x, which grows from 1 to log n. Treating it carelessly as Θ(n) per call and multiplying by log n would give the wrong answer. The correct total is ∑_{k=1}^{log n} Θ(k) = Θ(log² n).
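To see the Θ(log² n) total concretely, here is a Python translation that counts the loop iterations performed inside mystery2 across all calls (names follow the pseudocode; the cost accounting is mine):

```python
def mystery2_cost(n):
    return n                 # mystery2's for-loop runs exactly n times

def mystery1_cost(n):
    x, cost = 0, 0
    while n >= 1:
        n //= 2
        x += 1
        cost += mystery2_cost(x)   # called with the counter x, not with n
    return cost

# For n = 2**10 the while loop runs 11 times, so the total inner work is
# 1 + 2 + ... + 11 = 66: quadratic in log n, nowhere near n log n.
print(mystery1_cost(2 ** 10))   # → 66
```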
Exercise B — Recurrence Relations
Solve the following recurrence using both methods:
T(n) = d                  if n ≤ 1
T(n) = 2T(n/4) + c        if n > 1
Master Theorem. Here a = 2, b = 4, α = log₄ 2 = 1/2, β = 0. Since α > β: T(n) = Θ(n^(1/2)) = Θ(√n).
Iteration. Unrolling gives T(n) = 2^i · T(n/4^i) + c · ∑_{k=0}^{i−1} 2^k. Recursion ends when n/4^i = 1, i.e., i = log₄ n. The geometric sum evaluates to 2^i − 1. Substituting i = log₄ n:
T(n) = 2^(log₄ n) · d + c(2^(log₄ n) − 1) = √n · d + c(√n − 1) = Θ(√n)
The two methods agree. It is worth pausing on 2^(log₄ n) = √n: in general, a^(log_b n) = n^(log_b a), a useful identity that appears repeatedly when evaluating iteration-method results at the base case.
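The identity is easy to confirm numerically, here with a = 2 and b = 4, so that both sides equal √n; a quick sketch:

```python
import math

for n in [16, 256, 10 ** 6]:
    lhs = 2 ** math.log(n, 4)        # a**(log_b n)
    rhs = n ** math.log(2, 4)        # n**(log_b a) = n**0.5
    assert math.isclose(lhs, rhs) and math.isclose(rhs, math.sqrt(n))
print("a^(log_b n) == n^(log_b a) checked for a=2, b=4")
```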
Summary
Two methods for solving recurrences were developed in this lecture, each suited to different situations:
The iteration method works on any recurrence but requires identifying the closed form of a sum at the end — usually a geometric or arithmetic series. It is always applicable but can be algebraically demanding.
The Master Theorem is fast and mechanical when applicable: compute α = log_b a, identify β from f(n) = cn^β, compare, and read off the answer. Its limitation is that it handles only the divide-and-conquer form aT(n/b) + f(n) with a polynomial combine cost.
A third method — substitution (guess the form of the solution, then prove it by induction) — is mentioned in the course but not developed in these slides. It is particularly useful when you already have a conjecture from one of the other methods and want to prove it rigorously.
The key quantity in all divide-and-conquer analysis is the comparison between the cost of recursion (n^(log_b a), which encodes how many subproblems accumulate across the recursion tree) and the cost of the combine step (f(n)). Whichever grows faster determines the overall complexity; when they are equal, the log n levels of recursion each contribute equally and produce the Θ(n^(log_b a) · log n) middle case.