- [6/22/00] We discussed big-O notation and properties of
logarithms. We began talking about algorithms for addition and
multiplication. Addition of n-digit numbers can be done in
O(n) time, while multiplication of n-digit numbers can be
done in O(n^2) time. We discussed the naive
multiplication algorithm (O(10^n) time), the standard
algorithm (quadratic time), and an algorithm based on the
observation that, when n is even, nx = 2((n/2)x); a sketch
based on that observation appears at the end of this entry.
Big-O notation is discussed in the text on page 26. Logarithms
and properties of logarithms are on page 34. The grade-school
multiplication algorithm is discussed briefly on pages 671 and 672.
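For reference, here is a short Python sketch (an illustration, not
code from class) of multiplication by repeated doubling, built
directly on that observation:

    def multiply(n, x):
        # Compute n*x using only additions, exploiting the fact
        # that n*x = 2*((n/2)*x) when n is even.
        if n == 0:
            return 0
        if n % 2 == 0:
            half = multiply(n // 2, x)
            return half + half            # doubling is one addition
        return x + multiply(n - 1, x)

Since n is at least halved every two steps, this performs O(log n)
additions rather than the n - 1 additions of repeated addition.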
- [6/23/00]
We discussed more integer multiplication algorithms, including
the Russian peasant algorithm (described on page 681 of the text)
and a divide-and-conquer algorithm. If you are careful how you
do the divide-and-conquer, you can arrange to do only 3 recursive
calls. That leads to an algorithm that runs in O(n^c) time for
c = log2(3). The value of the exponent c is about 1.59. A sketch
appears at the end of this entry. The text discusses the
divide-and-conquer algorithm on page 797, as an exercise.
The best known multiplication algorithm for large numbers is
the Schonhage/Strassen algorithm, which takes
O(n log(n) log(log(n))) time.
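Here is a minimal Python sketch of the 3-recursive-call
divide-and-conquer method (Karatsuba's algorithm; an illustration,
not the lecture's presentation):

    def karatsuba(x, y):
        # Multiply nonnegative integers with 3 recursive calls on
        # half-length numbers, giving O(n^c) time for c = log2(3).
        if x < 10 or y < 10:
            return x * y
        m = max(len(str(x)), len(str(y))) // 2
        xh, xl = divmod(x, 10 ** m)       # split each number at m digits
        yh, yl = divmod(y, 10 ** m)
        low = karatsuba(xl, yl)
        high = karatsuba(xh, yh)
        mid = karatsuba(xl + xh, yl + yh) - low - high   # the third call
        return high * 10 ** (2 * m) + mid * 10 ** m + low

For example, karatsuba(1234, 5678) returns 7006652.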
- [6/26/00] We discussed hybridizing two algorithms to get
the best of both. Big-O, little-o, Big-Omega, little-omega and
Big-Theta notation were introduced. You can compare two functions'
asymptotic growth by taking the limit of their ratio as n approaches
infinity.
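For reference, the limit test can be summarized as follows
(assuming the limit exists):

    \[
    \lim_{n\to\infty}\frac{f(n)}{g(n)} =
    \begin{cases}
      0 & \Longrightarrow\ f(n) = o(g(n)) \\
      c,\ 0 < c < \infty & \Longrightarrow\ f(n) = \Theta(g(n)) \\
      \infty & \Longrightarrow\ f(n) = \omega(g(n))
    \end{cases}
    \]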
We solved some summations, and introduced mathematical induction.
Using induction, you prove a claim
about all integers n by showing that there is no
counterexample to the claim. You do that by supposing that there
is a counterexample, and looking at the SMALLEST counterexample.
That way, you know that any value smaller than the purported
smallest counterexample cannot be a counterexample.
- [6/27/00] We discussed the homework.
We discussed using mathematical induction to solve the
recurrences that result from divide-and-conquer algorithms;
a worked example appears below.
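As a worked example (mine, not necessarily the one from class),
here is the induction step for the guess T(n) <= c n lg n applied
to the recurrence T(n) = 2T(n/2) + n:

    \[
    T(n) = 2T(n/2) + n
         \le 2c(n/2)\lg(n/2) + n
         = cn\lg n - cn + n
         \le cn\lg n \quad \text{for } c \ge 1.
    \]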
- [6/28/00] We covered more on the solution of recurrences, including
the master theorem (page 62 of the text). We began looking at the
sorting problem, and analyzed merge sort. Merge sort is described in
chapter 1 of the text.
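Here is a short Python sketch of merge sort (an illustration; see
the text for the full treatment):

    def merge_sort(a):
        # T(n) = 2T(n/2) + O(n), which the master theorem
        # solves to O(n lg n).
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):   # merge: O(n)
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        return out + left[i:] + right[j:]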
- [6/29/00] We covered more material on sorting, including
lower bounds. We also began to look at
selection problems, including finding the minimum and finding both
the minimum and maximum.
- [6/30/00] We showed, by an adversary argument, that every
algorithm for finding the minimum and maximum of n values by doing
comparisons requires at least ceiling(3n/2) - 2 comparisons. That
is precisely the number of comparisons the tournament algorithm
makes, so the tournament algorithm is optimal in that respect
(a sketch appears at the end of this entry).
We began looking at the general selection problem, and developed
a skeleton of an algorithm. The full algorithm depends on
an algorithm to find an approximation to the median of a list of
numbers, which will be derived next time. Exercise 3 was handed out.
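Here is a Python sketch of a pairing method that achieves the
ceiling(3n/2) - 2 bound (my reconstruction; the tournament
presentation in class may have differed):

    def min_and_max(a):
        # Process elements in pairs: one comparison orders the pair,
        # one updates the minimum, one updates the maximum --
        # 3 comparisons per 2 elements, ceiling(3n/2) - 2 in all.
        n = len(a)
        if n % 2 == 1:
            lo = hi = a[0]
            start = 1
        else:
            lo, hi = (a[0], a[1]) if a[0] <= a[1] else (a[1], a[0])
            start = 2
        for i in range(start, n, 2):
            x, y = (a[i], a[i+1]) if a[i] <= a[i+1] else (a[i+1], a[i])
            if x < lo: lo = x
            if y > hi: hi = y
        return lo, hi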
- [7/3/00] We developed the general selection algorithm, and began
looking at a divide-and-conquer algorithm for computing the convex
hull of a set of points in the plane.
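Here is a Python sketch of that skeleton (an illustration): the
recursion is correct for any pivot rule, and plugging in an
approximate-median routine such as median-of-medians makes it run
in linear time.

    def select(a, k, choose_pivot):
        # Return the k-th smallest element of a (k is 0-based).
        # choose_pivot stands in for the approximate-median routine
        # to be derived next time.
        p = choose_pivot(a)
        less = [x for x in a if x < p]
        equal = [x for x in a if x == p]
        if k < len(less):
            return select(less, k, choose_pivot)
        if k < len(less) + len(equal):
            return p
        greater = [x for x in a if x > p]
        return select(greater, k - len(less) - len(equal), choose_pivot)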
- [7/5/00] We went over exercises, and finished the two-dimensional
convex hull algorithm, except for the detail of comparing a point
to a line.
- [7/6/00] We went over questions on the practice exam, and
covered how to compare a point to a line.
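The comparison comes down to the sign of a cross product. In
Python (an illustration):

    def side_of_line(p, a, b):
        # Sign of the cross product (b - a) x (p - a): positive if p
        # lies to the left of the directed line from a to b, negative
        # if to the right, and zero if the three points are collinear.
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])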
- [7/7/00] Exam 1.
- [7/10/00] We did a reduction from the sorting problem to
the two-dimensional convex hull problem, showing that any algorithm
that finds the convex hull of n points can be used to sort n
real numbers. This shows that the convex hull problem is no easier
than sorting.
We saw the cross product method of determining which direction
an angle turns. It is in the book on page 888.
We began looking at algorithms that sweep over a set of data once.
An example is an algorithm that computes the maximum of n numbers.
Another example is Graham's algorithm for computing convex hulls
in two dimensions. We saw how that algorithm works, and did an
analysis of it. Graham's algorithm works by sorting the points
from left to right (O(n lg(n)) time) and then scanning from left to
right. An accounting argument shows that the scan phase takes
O(n) time.
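Here is a Python sketch of the scan, assuming the left-to-right
(monotone chain) formulation; it computes the upper hull, and the
lower hull is symmetric:

    def upper_hull(points):
        # points: a list of (x, y) pairs.
        def cross(o, a, b):   # (a - o) x (b - o)
            return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
        hull = []
        for p in sorted(points):          # the O(n lg(n)) sort phase
            # Pop while the last two hull points and p fail to turn right.
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) >= 0:
                hull.pop()
            hull.append(p)
        return hull

Each point is pushed once and popped at most once, which is the
accounting argument for the O(n) scan phase.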
- [7/11/00] We finished analyzing Graham's algorithm, and looked
at another sweep algorithm that finds the longest ascending subsequence
of a sequence. The algorithm involved several ideas, including
performing a sweep, remembering extra information (not just the answer)
to make it possible to update the information, using binary search
to speed up searching, and using linked lists to prevent copying
of sequences.
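Here is a Python sketch combining those ideas (an illustration;
the details may differ from the version developed in class):

    import bisect

    def longest_ascending_subsequence(seq):
        tails = []   # tails[k] = smallest tail value of an ascending
                     # subsequence of length k+1 (kept sorted)
        idx = []     # index in seq of each entry of tails
        prev = []    # back-pointers: the linked-list idea
        for i, x in enumerate(seq):
            k = bisect.bisect_left(tails, x)   # binary search
            if k == len(tails):
                tails.append(x); idx.append(i)
            else:
                tails[k] = x; idx[k] = i
            prev.append(idx[k-1] if k > 0 else -1)
        out, i = [], idx[-1] if idx else -1
        while i != -1:                         # follow back-pointers
            out.append(seq[i]); i = prev[i]
        return out[::-1]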
- [7/12/00] We looked at the string matching problem and
the Knuth-Morris-Pratt algorithm. That algorithm is described
in the text in section 34.4.
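Here is a Python sketch of the matcher (an illustration; the
failure-function computation, covered next time, is included so
the sketch is self-contained):

    def failure(pattern):
        # f[i] = length of the longest proper prefix of pattern[:i+1]
        # that is also a suffix of pattern[:i+1].
        f = [0] * len(pattern)
        k = 0
        for i in range(1, len(pattern)):
            while k > 0 and pattern[k] != pattern[i]:
                k = f[k-1]
            if pattern[k] == pattern[i]:
                k += 1
            f[i] = k
        return f

    def kmp_search(text, pattern):
        # Scan the text once, never moving backward in it.
        f, k, hits = failure(pattern), 0, []
        for i, c in enumerate(text):
            while k > 0 and pattern[k] != c:
                k = f[k-1]
            if pattern[k] == c:
                k += 1
            if k == len(pattern):
                hits.append(i - k + 1)   # a match ends at position i
                k = f[k-1]
        return hits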
- [7/13/00] We finished the Knuth-Morris-Pratt algorithm by
showing how to compute the failure function.
We also began studying dynamic programming. See chapter
16 of the text. Dynamic programming is used to solve optimization
problems that have the following characteristic:
optimal solutions to a problem are built from optimal
solutions to subproblems.
Dynamic programming typically starts with a recursive algorithm,
building up an optimal solution to a problem by finding optimal
solutions to subproblems. But the recursion usually has the
property that it computes the same thing many times. To avoid that,
dynamic programming employs memoizing: storing the solutions
to subproblems that have already been solved, so that they do not
need to be computed again.
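As a small illustration of memoizing (my example, not one from
class), consider binomial coefficients computed from the recurrence
C(n,k) = C(n-1,k-1) + C(n-1,k):

    def binomial(n, k, memo=None):
        # Assumes 0 <= k <= n. Without the memo table this recursion
        # takes exponential time because it recomputes the same (n, k)
        # subproblems many times; with it, each subproblem is solved
        # once, giving O(n*k) time.
        if memo is None:
            memo = {}
        if k == 0 or k == n:
            return 1
        if (n, k) not in memo:
            memo[(n, k)] = binomial(n-1, k-1, memo) + binomial(n-1, k, memo)
        return memo[(n, k)]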
- [7/14/00] We did examples of dynamic programming,
including the edit distance problem (sketched at the end of
this entry). We began to look at the
matrix chain multiplication problem.
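Here is a Python sketch of the edit distance table (an
illustration):

    def edit_distance(s, t):
        # d[i][j] = minimum number of insertions, deletions, and
        # substitutions needed to turn s[:i] into t[:j].
        m, n = len(s), len(t)
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1): d[i][0] = i
        for j in range(n + 1): d[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if s[i-1] == t[j-1] else 1
                d[i][j] = min(d[i-1][j] + 1,        # delete s[i-1]
                              d[i][j-1] + 1,        # insert t[j-1]
                              d[i-1][j-1] + cost)   # match or substitute
        return d[m][n]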
- [7/17/00] We finished the matrix chain multiplication problem,
and started looking at the all-pairs shortest path problem.
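Here is a Python sketch of the matrix chain recurrence (an
illustration):

    def matrix_chain_cost(dims):
        # Matrix i has dimensions dims[i] x dims[i+1], so the chain
        # has len(dims) - 1 matrices. cost[i][j] = minimum number of
        # scalar multiplications to compute the product of matrices
        # i through j.
        n = len(dims) - 1
        cost = [[0] * n for _ in range(n)]
        for length in range(2, n + 1):
            for i in range(n - length + 1):
                j = i + length - 1
                cost[i][j] = min(cost[i][k] + cost[k+1][j]
                                 + dims[i] * dims[k+1] * dims[j+1]
                                 for k in range(i, j))
        return cost[0][n-1]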
- [7/18/00] We finished dynamic programming by finishing the
all-pairs shortest paths problem (the Floyd/Warshall algorithm),
and doing the longest common subsequence problem.
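Here is a Python sketch of the Floyd/Warshall triple loop (an
illustration):

    def floyd_warshall(dist):
        # dist is an n x n matrix: dist[i][j] is the weight of the
        # edge from i to j, float('inf') if absent, 0 on the diagonal.
        # Afterward, dist[i][j] is the shortest-path distance.
        n = len(dist)
        for k in range(n):            # allow paths through vertices 0..k
            for i in range(n):
                for j in range(n):
                    if dist[i][k] + dist[k][j] < dist[i][j]:
                        dist[i][j] = dist[i][k] + dist[k][j]
        return dist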
- [7/19/00] We began looking at greedy algorithms. We discussed
Kruskal's algorithm for computing minimal spanning trees.
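Here is a Python sketch of Kruskal's algorithm with a simple
union-find (an illustration):

    def kruskal(n, edges):
        # edges: a list of (weight, u, v) with vertices numbered
        # 0..n-1. Grows a forest, adding the cheapest edge that joins
        # two different trees; assumes the graph is connected.
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x
        tree = []
        for w, u, v in sorted(edges):
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                tree.append((u, v, w))
        return tree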
- [7/20/00] We looked at Prim's algorithm for minimal spanning
trees.
- [7/21/00] We used a variation of Prim's algorithm to
compute shortest paths. The shortest-path algorithm is called
Dijkstra's algorithm (sketched at the end of this entry).
We also looked at the activity selection
algorithm discussed in the book in chapter 17.
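Here is a Python sketch of Dijkstra's algorithm with a heap (an
illustration):

    import heapq

    def dijkstra(adj, s):
        # adj[u] is a list of (v, weight) pairs with nonnegative
        # weights; every vertex must appear as a key of adj.
        # Returns a dict of shortest distances from s.
        dist = {s: 0}
        heap = [(0, s)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float('inf')):
                continue              # stale heap entry; skip it
            for v, w in adj[u]:
                if d + w < dist.get(v, float('inf')):
                    dist[v] = d + w
                    heapq.heappush(heap, (d + w, v))
        return dist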
- [7/24/00] We began looking at the theory of NP-completeness.
We defined the class P of problems solvable in polynomial time.
We defined the notion of reductions among problems.
- [7/25/00] We continued looking at NP-completeness.
We defined the class NP, and defined NP-complete problems.
- [7/26/00] We will look at particular NP-complete problems,
and examples of reductions among problems that are used to show
that a problem is NP-complete.