Algorithm analysis is concerned with determining how much of a resource, such as time or memory, an algorithm uses as a function of some characteristic of the input to the algorithm, usually the size of the input.
It is customary to use the variable n for the number of characters that it takes to write down the input to an algorithm. The time or memory is then given as a function of n. For example, a particular algorithm might take time cn^2 for some constant c.
Usually it is not worth determining the exact cost of running an algorithm. An approximation is all you need. In fact, we usually want to know how the cost depends on n as n grows, not what the exact cost is, and it is enough to know that the cost is proportional to some function of n. For expressing that, we use O, Ω and Θ notation.
Let f(n) and g(n) be two functions that take an integer n and yield an integer or real number. f(n) and g(n) are mathematical functions, not C++ functions.
Definition. Say that f(n) is O(g(n)) if there are positive constants c and n0 so that, for all n ≥ n0, f(n) ≤ cg(n). That is, for all sufficiently large values of n, f(n) ≤ cg(n).
Definition. Say that f(n) is Ω(g(n)) if g(n) is O(f(n)).
Definition. Say that f(n) is Θ(g(n)) if f(n) is O(g(n)) and f(n) is Ω(g(n)).
For example,
5n^2 is O(n^2). (Choose c = 5. Then 5n^2 ≤ cn^2 for all n.)
3n^2 + 2n + 10 is O(n^2). (Choose n0 = 10 and c = 5. For all n ≥ 10, 3n^2 + 2n + 10 ≤ 5n^2.)
2n is O(n^2). (Choose n0 = 2 and c = 1.)
5n^2 is Ω(n^2) because n^2 is O(5n^2).
Since 5n^2 is Ω(n^2) and 5n^2 is O(n^2), 5n^2 is Θ(n^2).
3n^2 + 2n + 10 is Ω(n^2). (What values of n0 and c will work?)
3n^2 + 2n + 10 is also O(n^2), so 3n^2 + 2n + 10 is Θ(n^2).
n^2 is not O(2n). Function f(n) = n^2 grows much faster than g(n) = 2n.
When comparing functions using O, Ω and Θ, polynomials depend only on their degree. If f(n) is a polynomial of degree d and g(n) is a polynomial of degree e, then
f(n) is O(g(n)) if d ≤ e.
f(n) is Ω(g(n)) if e ≤ d.
f(n) is Θ(g(n)) if d = e.
x = log2(y) is defined to be the solution x of the equation y = 2^x. For example, since 2^3 = 8, log2(8) = 3. Here are a few logarithms.
x | log2(x) |
---|---|
16 | 4 |
32 | 5 |
64 | 6 |
128 | 7 |
256 | 8 |
512 | 9 |
1024 | 10 |
1,000,000 | ~20 |
1,000,000,000 | ~30 |
One way to compute an approximate logarithm of an integer x is to start with x and then perform steps where, at each step, you take half of the result of the previous step. But if you get a result that is not an integer, then round down. Stop when you reach 1. Then count the number of halving steps that you did. If you did h halving steps, then h is the largest integer that is ≤ log2(x). For example,
1000 → 500 → 250 → 125 → 62 → 31 → 15 → 7 → 3 → 1 involves 9 halving steps, so 9 ≤ log2(1000) < 10. In fact, log2(1000) is just a little less than 10.
In this course, we write log(n) to mean log2(n).
Here is a comparison of some common functions of n in terms of how fast they grow as n gets large. The list is in order from slowly growing functions to rapidly growing functions. If f(n) occurs earlier in the list than g(n) then f(n) is O(g(n)) but g(n) is not O(f(n)). For example, n^2 is O(n^3), but n^3 is not O(n^2).
log2(n)
(log2(n))^2
√n
n
n log2(n)
n^2
n^3
1.01^n
2^n
True or false: n^2 is O(n^3). Answer
True or false: n^3 is O(n^2). Answer
True or false: n^3 + 2n^2 − n is Θ(n^3). Answer
True or false: n^3 + 2n^2 − n is Θ(n^5). Answer
What is log2(32)? Answer
What is log2(128)? Answer
What is the largest integer n such that n < log2(20)? Answer
What is the largest integer n such that n < log2(150)? Answer
True or false: n is O(log2(n)). Answer
True or false: n log2(n) is O(n^2). Answer
True or false: n^2 is O(n log2(n)). Answer
True or false: n^2 is O(1.1^n). Answer