CSCI 3650
Summer 2002

Last modified 6/24/02

Announcements

The last exam will be on Wednesday, June 26. Practice questions are available. Solutions to the practice problems are also available.

Homework assignment 5 is available.

Solutions to the second exam are available.


Syllabus

This is a course in design and analysis of algorithms. See the syllabus for details.


Office hours

M-Th 1:15-2:00. I will stay a little later if there is demand.


Sample exams

  1. Practice questions for exam 1
  2. Answers to the practice questions for exam 1
  3. Practice questions for exam 2
  4. Answers to the practice questions for exam 2

Exercises

  1. Homework assignment 2
  2. Homework assignment 3
  3. Homework assignment 4
  4. Homework assignment 5


Checking your grade

If you have obtained a password, then you can check your grades as I have them recorded. You can obtain a password from me; to get one by email, send me email indicating the password that you want. Choose a string of 4 or 5 letters/digits.

Lecture summaries

  1. [5/21/02] We began looking at algorithms for addition and multiplication of integers as an introduction to the concepts of analysis of algorithms. The grade school algorithm for addition is optimal, since its time is within a constant factor of the time of any other addition algorithm. We looked at three multiplication algorithms, a naive algorithm, the grade school algorithm and the Russian Peasants' algorithm. We did not finish an analysis of the Russian Peasants' algorithm.
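
    The Russian Peasants' algorithm repeatedly halves one factor and doubles the other, adding in the doubled value whenever the halved factor is odd. A rough sketch in Python (not the notation used in class), just to record the idea:

         def russian_peasants(x, y):
             # Multiply x and y using only halving, doubling and addition.
             product = 0
             while x > 0:
                 if x % 2 == 1:    # when x is odd, y contributes to the product
                     product = product + y
                 x = x // 2        # halve x, discarding the remainder
                 y = y * 2         # double y
             return product

    For example, russian_peasants(13, 7) returns 91.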

  2. [5/22/02] We continued looking at integer multiplication. The grade school algorithm and Russian Peasants' algorithm both take time O(n^2) to multiply two n digit numbers. We looked at a divide-and-conquer algorithm as well. The straightforward version of the divide-and-conquer algorithm also runs in time O(n^2), but an improved version runs in time O(n^(log 3)), where the log is to base 2.
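
    The trick in the improved version is to do three recursive multiplications of half-size numbers instead of four. A rough Python sketch of the idea (the splitting details here are an assumption, and Python multiplies big integers natively, so this only illustrates the recursion structure):

         def multiply(x, y):
             # Three recursive multiplications on numbers with about half
             # as many digits, instead of the obvious four.
             if x < 10 or y < 10:
                 return x * y
             half = max(len(str(x)), len(str(y))) // 2
             base = 10 ** half
             xhigh, xlow = divmod(x, base)
             yhigh, ylow = divmod(y, base)
             low = multiply(xlow, ylow)
             high = multiply(xhigh, yhigh)
             mid = multiply(xlow + xhigh, ylow + yhigh) - low - high
             return high * base * base + mid * base + low

    The recurrence T(n) = 3T(n/2) + cn gives the O(n^(log 3)) bound.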

    You should skim over chapters 1 and 2 of the text. Do not get stuck on details of the algorithms that are studied there. The next lecture will begin looking at chapter 3, growth of functions.

  3. [5/23/02] We discussed material on the growth of functions, including O, Omega and Theta notation and using limits to compare functions. We also solved some simple summations. This material is from chapter 3 of the text (Growth of Functions).
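
    For example, the limit of (n lg(n))/n^2 as n goes to infinity is 0, so n lg(n) = O(n^2) but n^2 is not O(n lg(n)).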

  4. [5/24/02] We went over the homework and then began looking at mathematical induction and solution of recurrences. We solved the recurrence T(n) = 4T(n/2) + cn and T(1) = 1 using expansion and also using induction to verify a proposed solution. This material is in chapter 4 of the text (Recurrences). Our next topic will be the Master Theorem, which tells the solution to many recurrences.
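
    Just as a check on the expansion (not something done in class), the closed form for powers of 2 works out to T(n) = n^2 + c*n*(n-1), which is Theta(n^2):

         def T(n, c):
             # The recurrence from class: T(1) = 1, T(n) = 4*T(n/2) + c*n.
             if n == 1:
                 return 1
             return 4 * T(n // 2, c) + c * n

         # Compare the recurrence to the closed form n^2 + c*n*(n-1).
         for k in range(1, 8):
             n = 2 ** k
             assert T(n, 3) == n * n + 3 * n * (n - 1)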

  5. [5/27/02] Holiday

  6. [5/28/02] We looked at the Master Theorem from the text, and used it to solve some recurrences. We looked at the mergesort algorithm, and briefly at the quicksort algorithm. Mergesort sorts a list of n numbers in time O(n lg(n)) if each comparison between numbers takes a constant amount of time. We did not write the merge function. Here it is written in Astarte.

         Define merge by first
           case merge([], ?x) = x
           case merge(?x, []) = x
           case merge(?a::?b, ?c::?d) = a :: merge(b, c::d)   when a < c
           case merge(?a::?b, ?c::?d) = c :: merge(a::b, d)
         %Define
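
    The rest of mergesort is not written above. A rough Python version of the whole thing, with merge written as a loop rather than by cases, might look like this (a sketch, not the code from class):

         def merge(a, b):
             # Merge two sorted lists into one sorted list.
             result = []
             i = 0
             j = 0
             while i < len(a) and j < len(b):
                 if a[i] < b[j]:
                     result.append(a[i])
                     i = i + 1
                 else:
                     result.append(b[j])
                     j = j + 1
             return result + a[i:] + b[j:]

         def mergesort(lst):
             # Split in half, sort each half recursively, then merge.
             if len(lst) <= 1:
                 return lst
             mid = len(lst) // 2
             return merge(mergesort(lst[:mid]), mergesort(lst[mid:]))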
    

  7. [5/29/02] We completed the analysis of quicksort, and then looked at the problem of computing the convex hull of a set of points in the plane. We used a divide-and-conquer algorithm that is based on combining two convex hulls by finding the upper and lower bridges. We did not discuss how to split the input points efficiently, but will do that next time.

  8. [5/30/02] We finished the design and analysis of the divide-and-conquer algorithm for finding the convex hull of a set of points in the plane. The key is to sort the points from left to right once, before starting the divide-and-conquer phase. We discussed the equivalence of sorting and convex hull in the plane, and we proved that every comparison algorithm for sorting takes Omega(n lg(n)) time. Then we began looking at the selection problem.

  9. [5/31/02] We went over the homework and then developed a linear time selection algorithm. The selection algorithm is discussed in chapter 9 of the text.

  10. [6/3/02] We began looking at scan algorithms. Simple examples are algorithms to find the maximum value in a list and to find the sum of a list of numbers. We looked at some elementary geometric computations and then saw Graham's algorithm for finding convex hulls in the plane. Graham's algorithm is described in chapter 33 of the text (Computational Geometry).

  11. [6/4/02] We looked at algorithms to solve the maximum subvector problem. The problem is to find a contiguous section of a list of numbers (possibly including negative numbers) that has the largest possible sum. We developed a naive algorithm, based directly on the definition, that solved this problem in time O(n^3), where n is the size of the array. A simple improvement brought the time down to O(n^2). A divide-and-conquer algorithm solves the problem in time O(n log(n)). Finally, we developed a scan algorithm that not only solves this problem in time O(n), but is also the shortest of all of these algorithms to write down.
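
    Here is a sketch of the O(n) scan algorithm in Python (the version written in class may differ in details; this one counts the empty section as having sum 0):

         def max_subvector(a):
             # Scan once, keeping the best sum of a section ending at the
             # current position and the best sum seen anywhere so far.
             best_ending_here = 0
             best_so_far = 0
             for x in a:
                 best_ending_here = max(best_ending_here + x, 0)
                 best_so_far = max(best_so_far, best_ending_here)
             return best_so_far

    For example, max_subvector([2, -5, 4, -1, 3, -2]) returns 6, the sum of the section 4, -1, 3.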

  12. [6/5/02] No class.

  13. [6/6/02] We looked at the Knuth/Morris/Pratt string matching algorithm, up to the point of computing the failure function. This is in chapter 32 of the text (String Matching).

  14. [6/7/02] Exam 1

  15. [6/10/02] We went over the exam and then finished the Knuth/Morris/Pratt algorithm by seeing how to compute the failure function in linear time. We saw how to use fast string matching to determine whether one string is a cyclic permutation of another.
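
    A Python sketch of the failure function computation (the indexing conventions are an assumption; the class version may differ): fail[i] is the length of the longest proper prefix of the pattern that is also a suffix of its first i characters.

         def failure_function(pattern):
             # fail[i] = length of the longest proper prefix of the pattern
             # that is also a suffix of pattern[0:i], computed in O(m) time.
             m = len(pattern)
             fail = [0] * (m + 1)
             k = 0
             for i in range(2, m + 1):
                 while k > 0 and pattern[k] != pattern[i - 1]:
                     k = fail[k]      # fall back to a shorter border
                 if pattern[k] == pattern[i - 1]:
                     k = k + 1
                 fail[i] = k
             return fail

    The cyclic permutation test amounts to checking that the two strings have the same length and then searching for one of them inside the other concatenated with itself.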

  16. [6/11/02] We began looking at algorithms that are based on data structures.

    We saw the Merge/Find algorithm, including the basic algorithm along with two improvements: weighted merge and collapsing find. The basic algorithm takes O(n^2) time to process n instructions. If either of the improvements is done without the other improvement, the time drops to O(n lg(n)). If both improvements are done, the time drops further to O(n alpha(n)) where alpha(n) is the inverse of Ackermann's function. alpha(n) grows extremely slowly. Just for curiosity, Ackermann's function A(n) is defined as B(n,n), where B(i,j) is defined as follows.

         B(1,j) = 2^j   (for j > 0)
         B(i,1) = B(i-1,2)         (for i > 1)
         B(i,j) = B(i-1, B(i,j-1)) (for i > 1, j > 1)
      

    The Merge/find algorithm is covered in Chapter 21 of the text.
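
    A Python sketch of the Merge/Find structure with both improvements (the names and array layout are mine, not the text's):

         class MergeFind:
             def __init__(self, n):
                 self.parent = list(range(n))   # each element starts alone
                 self.size = [1] * n

             def find(self, x):
                 root = x
                 while self.parent[root] != root:
                     root = self.parent[root]
                 while self.parent[x] != root:  # collapsing find: point every
                     self.parent[x], x = root, self.parent[x]  # visited node at the root
                 return root

             def merge(self, x, y):
                 x, y = self.find(x), self.find(y)
                 if x == y:
                     return
                 if self.size[x] < self.size[y]:  # weighted merge: the smaller
                     x, y = y, x                  # tree hangs under the larger
                 self.parent[y] = x
                 self.size[x] = self.size[x] + self.size[y]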

  17. [6/12/02] We looked at the Heapsort algorithm, based on the notion of a heap. Physically, heapsort works on an array. But the array is imagined to be a tree, and logically heapsort works on the tree. Heapsort is described in Chapter 6 of the text.
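
    A Python sketch of heapsort, using 0-based indexing (the text's heaps are 1-based): the children of position i are positions 2i+1 and 2i+2, so the array is treated as a tree without ever building one.

         def sift_down(a, i, n):
             # Push a[i] down the imagined tree until the heap property
             # holds among the first n elements of the array.
             while 2 * i + 1 < n:
                 child = 2 * i + 1
                 if child + 1 < n and a[child + 1] > a[child]:
                     child = child + 1          # take the larger child
                 if a[i] >= a[child]:
                     break
                 a[i], a[child] = a[child], a[i]
                 i = child

         def heapsort(a):
             n = len(a)
             for i in range(n // 2 - 1, -1, -1):  # build a max-heap bottom up
                 sift_down(a, i, n)
             for end in range(n - 1, 0, -1):      # move the max to the end
                 a[0], a[end] = a[end], a[0]
                 sift_down(a, 0, end)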

  18. [6/13/02] We began looking at greedy optimization algorithms. A greedy algorithm tries to optimize globally by optimizing locally. It tries to get as much as possible in the very next step, or to pay as little as possible for the next step, without thinking ahead.

    We looked at Kruskal's algorithm, which is a greedy algorithm for finding minimal spanning trees. Kruskal's algorithm needs to use the Merge/find algorithm for support. Its time is dominated by its first step, sorting the edges by weight. That step takes time O(e log(e)), where e is the number of edges.

    Minimal spanning trees are covered in Chapter 23 of the text.

  19. [6/14/02] We finished Kruskal's algorithm by showing that it is correct. We also looked at Prim's algorithm for computing minimal spanning trees, which is based on the same principles as Kruskal's algorithm, but does not need to use the Merge/find algorithm to keep track of a forest of trees. A simple variant of Prim's algorithm runs in time O(n^2) when there are n vertices. A more sophisticated one uses a heap for support, and runs in time O(e log(n)), where e is the number of edges. This is similar to Kruskal's algorithm.
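
    A Python sketch of the simple O(n^2) variant of Prim's algorithm, assuming the graph is connected and given as an n-by-n matrix of edge weights with None meaning no edge (a representation chosen here just for illustration):

         def prim(weight):
             # weight[i][j] is the weight of edge (i, j), or None if absent.
             n = len(weight)
             in_tree = [False] * n
             best = [None] * n       # cheapest known edge into each vertex
             parent = [None] * n
             best[0] = 0
             tree_edges = []
             for _ in range(n):
                 # take the cheapest vertex not yet in the tree
                 u = min((v for v in range(n)
                          if not in_tree[v] and best[v] is not None),
                         key=lambda v: best[v])
                 in_tree[u] = True
                 if parent[u] is not None:
                     tree_edges.append((parent[u], u))
                 for v in range(n):
                     w = weight[u][v]
                     if w is not None and not in_tree[v] and (best[v] is None or w < best[v]):
                         best[v] = w
                         parent[v] = u
             return tree_edges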

  20. [6/17/02] We discussed the activity selection algorithm that the text discusses in Chapter 16. Then we began looking at dynamic programming algorithms. They are covered in Chapter 15 of the text. There are two principles behind dynamic programming algorithms. First, to make a decision on how to select a best solution, try all of the available decisions, and select the best one. Second, avoid recomputing temporary results many times by writing the results down, and looking them up in a table. An example is an algorithm that finds a route through a grid of streets that costs as little as possible, where each street has a toll booth.
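
    A Python sketch of that kind of dynamic program, under the assumption that the route goes from the upper-left intersection to the lower-right one moving only right or down, with right[i][j] and down[i][j] the tolls on the streets leaving intersection (i, j) (the exact setup used in class may have differed):

         def cheapest_route(right, down, rows, cols):
             # cost[i][j] = cheapest total toll to reach intersection (i, j)
             # from (0, 0).  Each entry is computed once and then looked up,
             # instead of being recomputed by repeated recursion.
             INF = float("inf")
             cost = [[INF] * cols for _ in range(rows)]
             cost[0][0] = 0
             for i in range(rows):
                 for j in range(cols):
                     if j > 0:
                         cost[i][j] = min(cost[i][j], cost[i][j - 1] + right[i][j - 1])
                     if i > 0:
                         cost[i][j] = min(cost[i][j], cost[i - 1][j] + down[i - 1][j])
             return cost[rows - 1][cols - 1]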

  21. [6/18/02] We discussed the exam and then turned back to dynamic programming. We began looking at the edit distance problem.

  22. [6/19/02] Exam 2.

  23. [6/20/02] We finished the edit distance problem and solved the matrix chain multiplication problem. See the Dynamic Programming chapter of the text.
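
    A Python sketch of the edit distance computation, assuming unit costs for insertions, deletions and replacements (the cost model used in class may have differed):

         def edit_distance(s, t):
             # d[i][j] = minimum number of insertions, deletions and
             # replacements needed to turn s[0:i] into t[0:j].
             m, n = len(s), len(t)
             d = [[0] * (n + 1) for _ in range(m + 1)]
             for i in range(m + 1):
                 d[i][0] = i                       # delete all of s[0:i]
             for j in range(n + 1):
                 d[0][j] = j                       # insert all of t[0:j]
             for i in range(1, m + 1):
                 for j in range(1, n + 1):
                     diff = 0 if s[i - 1] == t[j - 1] else 1
                     d[i][j] = min(d[i - 1][j] + 1,         # delete s[i-1]
                                   d[i][j - 1] + 1,         # insert t[j-1]
                                   d[i - 1][j - 1] + diff)  # replace or keep
             return d[m][n]

    For example, edit_distance("kitten", "sitting") returns 3.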