CSCI 3650
Summer 2001

Last modified 6/17/01

Announcements

Practice questions for exam 3 are available. Solutions to the practice questions are also available.

Syllabus

This is a course in design and analysis of algorithms. See the syllabus for details.


Office hours

M-Th 1:15-2:00. I will stay a little later if there is demand.

Sample exams

Exercises

  1. Homework assignment 1
  2. Homework assignment 2
  3. Homework assignment 3
  4. Homework assignment 4
  5. Homework assignment 5
  6. Homework assignment 6

Checking your grade

If you have obtained a password, then you can check your grades as I have them recorded. You can obtain a password from me. To obtain one by email, send me email indicating the password that you want. Choose a string of 4 or 5 letters/digits.

Lecture summaries

Very brief lecture summaries will be posted here. Do not expect them to be substitutes for coming to class.
  1. [5/15/01] We did a very brief overview of the field of analysis of algorithms and began to look at an example problem: integer multiplication. We defined the problem, and analyzed four algorithms for it. The algorithms are (1) the repeated addition algorithm that comes directly from the definition of multiplication; (2) the grade school algorithm; (3) the Russian Peasants' algorithm; and (4) the divide-and-conquer algorithm. The last three algorithms take time proportional to n^2 to multiply two n-digit numbers.
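
    For concreteness, here is a rough Python sketch of the Russian Peasants' algorithm (an illustration written for these notes, not code from class; the function name is mine):

        def russian_peasants(x, y):
            """Multiply nonnegative integers x and y by halving and doubling.

            Halve x (dropping remainders) and double y until x reaches 0,
            adding y into the result whenever x is odd.  For n-digit numbers
            the loop runs O(n) times and each addition costs O(n), so the
            total time is proportional to n^2, as claimed above.
            """
            result = 0
            while x > 0:
                if x % 2 == 1:   # odd row: this copy of y contributes
                    result += y
                x //= 2          # halve
                y *= 2           # double
            return result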

  2. [5/16/01] We finished looking at integer multiplication by finding an improvement to the divide-and-conquer algorithm that reduces the time required to about n^1.59 (precisely, n^(lg 3), where lg 3 is about 1.585). We then turned to material in chapter 2 of Cormen/Leiserson/Rivest on relationships among functions, and solved an elementary summation. This material will be needed for analyzing algorithms.
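
    The improvement is the classic three-multiplication trick usually credited to Karatsuba. A rough Python sketch, assuming nonnegative integers (written for these notes, not class code):

        def karatsuba(x, y):
            """Multiply x and y with three recursive half-size products,
            giving T(n) = 3T(n/2) + O(n), i.e. time about n^1.59."""
            if x < 10 or y < 10:
                return x * y
            m = max(len(str(x)), len(str(y))) // 2
            p = 10 ** m
            xh, xl = divmod(x, p)
            yh, yl = divmod(y, p)
            low = karatsuba(xl, yl)
            high = karatsuba(xh, yh)
            # One product of the sums replaces the two cross products.
            mid = karatsuba(xh + xl, yh + yl) - low - high
            return high * p * p + mid * p + low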

  3. [5/17/01] We solved the summation 1 + 2 + ... + n = n(n+1)/2, and covered mathematical induction. We looked at recurrences, and saw the master theorem on page 62 of Cormen/Leiserson/Rivest. This material is in chapters 3 and 4 of Cormen/Leiserson/Rivest.

    Mathematical induction can be viewed as a kind of proof by contradiction. You prove a claim about all integers n by showing that there is no counterexample to the claim. You do that by supposing that there is a counterexample, and looking at the SMALLEST counterexample. That way, you know that any value smaller than the purported smallest counterexample cannot be a counterexample, and you can use your claim for those smaller values. For example, if n were the smallest counterexample to 1 + 2 + ... + n = n(n+1)/2, then the formula would hold for n-1, and adding n to both sides would give the formula for n, contradicting the choice of n.

  4. [5/18/01] We began looking at the sorting problem. We looked at a very naive sorting algorithm, the insertion sort algorithm, binary insertion sort, and the mergesort algorithm.
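
    For reference, here is a short Python sketch of insertion sort (an illustration for these notes, not class code):

        def insertion_sort(a):
            """Sort the list a in place.

            Each new element is shifted left past larger elements, so the
            worst case does about n^2/2 comparisons.
            """
            for i in range(1, len(a)):
                key = a[i]
                j = i - 1
                while j >= 0 and a[j] > key:   # shift larger elements right
                    a[j + 1] = a[j]
                    j -= 1
                a[j + 1] = key
            return a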

  5. [5/21/01] We went over homework questions. Then we continued looking at sorting. Mergesort has time recurrence T(n) = 2T(n/2) + n, which has solution T(n) = Θ(n lg n). We defined comparison algorithms and began looking at decision trees that describe how comparison algorithms work.
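
    Here is a Python sketch of mergesort showing where the recurrence comes from (illustration only; names are mine):

        def mergesort(a):
            """Return a sorted copy of a.

            Two recursive calls on halves plus a linear merge give the
            recurrence T(n) = 2T(n/2) + n, hence time Θ(n lg n).
            """
            if len(a) <= 1:
                return a[:]
            mid = len(a) // 2
            left = mergesort(a[:mid])
            right = mergesort(a[mid:])
            merged, i, j = [], 0, 0
            while i < len(left) and j < len(right):   # the +n merge step
                if left[i] <= right[j]:
                    merged.append(left[i]); i += 1
                else:
                    merged.append(right[j]); j += 1
            return merged + left[i:] + right[j:]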

  6. [5/22/01] We proved a lower bound of log2(n!) on the number of comparisons that must be done, in the worst case, for sorting n numbers by a general comparison algorithm. Lower bounds for sorting are discussed in Section 9.1 of Cormen/Leiserson/Rivest. Then we looked at the solution of problem 7-2, page 73 (the missing number problem). We also covered bucket sort. Bucket sort is discussed in Section 9.4 of the text.
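
    A minimal Python sketch of bucket sort, assuming the inputs are uniformly distributed in [0, 1) as in Section 9.4 (illustration only):

        def bucket_sort(a):
            """Expected linear time when inputs are uniform in [0, 1)."""
            n = len(a)
            buckets = [[] for _ in range(n)]
            for x in a:
                buckets[int(n * x)].append(x)   # bucket i gets [i/n, (i+1)/n)
            for b in buckets:
                b.sort()   # each bucket holds O(1) items on average
            return [x for b in buckets for x in b]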

  7. [5/23/01] We looked at the selection problem, and derived a linear-time algorithm. This is in Chapter 10 of Cormen/Leiserson/Rivest.
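
    Here is a rough Python sketch of the linear-time selection algorithm (median-of-medians pivoting); it is an illustration for these notes, and the names are mine:

        def select(a, k):
            """Return the k-th smallest element of a (k = 0 is the minimum).

            Pivoting on the median of the group-of-5 medians guarantees
            that both sides of the partition shrink enough to give
            worst-case linear time.
            """
            if len(a) <= 5:
                return sorted(a)[k]
            medians = [sorted(group)[len(group) // 2]
                       for group in (a[i:i + 5] for i in range(0, len(a), 5))]
            pivot = select(medians, len(medians) // 2)
            low = [x for x in a if x < pivot]
            high = [x for x in a if x > pivot]
            if k < len(low):
                return select(low, k)
            if k >= len(a) - len(high):
                return select(high, k - (len(a) - len(high)))
            return pivot   # k falls among the elements equal to the pivot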

  8. [5/24/01] We finished the linear-time selection problem and began to look at the maximum subvector problem discussed in Chapter 8 of Bentley. We derived a very naive cubic time algorithm, a somewhat less naive but still slow quadratic time algorithm, and a divide-and-conquer algorithm that takes time O(n lg(n)).

  9. [5/25/01] We looked more at the maximum subvector problem. Scan algorithms were introduced, and we derived a linear time scan algorithm for the maximum subvector problem. Then we turned to elementary computational geometry, and saw how to determine whether a point lies above or below a line, and how to tell whether a sequence of three points represents a right turn, a left turn, or no turn. We introduced the 2-dimensional convex hull problem.
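
    The linear-time scan is essentially the one in Bentley. A Python sketch (illustration only; the names are mine):

        def max_subvector(a):
            """Largest sum of a contiguous subvector, in one linear scan.

            best_here is the best sum of a subvector ending at the current
            position; a negative running sum can never help, so it resets
            to 0.  The empty subvector counts, with sum 0.
            """
            best = best_here = 0
            for x in a:
                best_here = max(best_here + x, 0)
                best = max(best, best_here)
            return best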

  10. [5/28/01] Exam 1.

  11. [5/29/01] We went over the exam and then began to look at the 2-dimensional convex hull problem. We covered the Jarvis March algorithm. This material is in Section 35.3 of Cormen/Leiserson/Rivest.

  12. [5/30/01] We looked more at the convex hull problem. We finished the divide-and-conquer algorithm, and looked at Graham's scan algorithm.
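
    For reference, here is a Python sketch of a convex hull scan in the spirit of Graham's algorithm (this is the monotone-chain variant, which sorts by coordinates rather than by angle; written for these notes, with points as (x, y) tuples):

        def turn(p, q, r):
            """Cross product test: > 0 left turn, < 0 right turn, 0 collinear."""
            return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

        def convex_hull(points):
            """Counterclockwise hull of a list of (x, y) tuples, O(n lg n)."""
            pts = sorted(set(points))
            if len(pts) <= 2:
                return pts

            def half_hull(seq):
                hull = []
                for p in seq:
                    # Pop points that would make a non-left turn.
                    while len(hull) >= 2 and turn(hull[-2], hull[-1], p) <= 0:
                        hull.pop()
                    hull.append(p)
                return hull

            lower = half_hull(pts)
            upper = half_hull(reversed(pts))
            return lower[:-1] + upper[:-1]   # endpoints appear in both chains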

  13. [5/31/01] We discussed the concept of amortized time. An algorithm that performs many steps might spend much longer on a few of those steps than on most of them. The amortized time per step is the total time averaged over a long sequence of steps.

    We began to look at the string matching problem. We saw the naive algorithm and the ideas behind the Knuth/Morris/Pratt algorithm.

  14. [6/1/01] Second try for exam 1.

  15. [6/4/01] We discussed the exam and then looked in more detail at the Knuth/Morris/Pratt algorithm.
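
    Here is a Python sketch of the complete algorithm, using the prefix-function formulation, which is equivalent to the pattern matching machine finished in the next lecture (illustration only; names are mine):

        def build_prefix(pattern):
            """pi[i] = length of the longest proper prefix of pattern[:i+1]
            that is also a suffix of it."""
            pi = [0] * len(pattern)
            k = 0
            for i in range(1, len(pattern)):
                while k > 0 and pattern[k] != pattern[i]:
                    k = pi[k - 1]           # fall back to a shorter border
                if pattern[k] == pattern[i]:
                    k += 1
                pi[i] = k
            return pi

        def kmp_search(text, pattern):
            """Indices of all occurrences of pattern in text, in O(n + m)."""
            pi, k, hits = build_prefix(pattern), 0, []
            for i, c in enumerate(text):
                while k > 0 and pattern[k] != c:
                    k = pi[k - 1]
                if pattern[k] == c:
                    k += 1
                if k == len(pattern):
                    hits.append(i - k + 1)
                    k = pi[k - 1]
            return hits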

  16. [6/5/01] We finished the Knuth/Morris/Pratt string matching algorithm by seeing how to build the pattern matching machine. Then we discussed optimization problems, including three general approaches: brute-force search, greedy approaches and dynamic programming.

    Greedy algorithms rely on the following ideas.

    1. When faced with a decision, make the decision immediately and commit to it.
    2. When making a decision, concentrate on optimizing locally. Only pay attention to the short term.

    Dynamic programming algorithms rely on the following ideas.

    1. When faced with a decision, try all possibilities, and select the one that gives the best results. Do not try to make the decision based on what looks good locally.
    2. Avoid repeating work by memoizing your results. Whenever a result is computed, write it down. Instead of computing new results, look them up when available. (Usually, dynamic programming algorithms are carefully designed so that all needed results have always already been computed and written down.)

  17. [6/6/01] We looked at dynamic programming. We used the problem of computing binomial coefficients to illustrate memoizing. Then we developed a dynamic programming algorithm for the edit distance problem. (This problem is presented as an exercise in Cormen/Leiserson/Rivest. Once you understand dynamic programming, deriving algorithms like this is not very difficult.) We also began looking at the all-pairs shortest path problem. The dynamic programming algorithm for this problem is attributed both to Floyd and to Warshall, who discovered it independently, and it is commonly called the Floyd/Warshall algorithm.
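
    To illustrate memoizing with the binomial coefficient example, a short Python sketch (illustration only; names are mine):

        def binomial(n, k, memo=None):
            """C(n, k) by Pascal's rule: C(n,k) = C(n-1,k-1) + C(n-1,k).

            Without the table the recursion recomputes the same pairs
            exponentially often; with it, each (n, k) is computed once.
            """
            if memo is None:
                memo = {}
            if k == 0 or k == n:
                return 1
            if (n, k) not in memo:
                memo[(n, k)] = (binomial(n - 1, k - 1, memo)
                                + binomial(n - 1, k, memo))
            return memo[(n, k)]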

  18. [6/7/01] We finished the Floyd/Warshall algorithm and also derived a dynamic programming algorithm for the matrix chain product problem.
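
    A Python sketch of the Floyd/Warshall algorithm (illustration only; the input convention, a matrix with float('inf') for missing edges, is my choice):

        def floyd_warshall(dist):
            """All-pairs shortest path lengths from an n x n weight matrix.

            dist[i][j] is the direct edge weight, or float('inf') if there
            is no edge.  After iteration k of the outer loop, d[i][j] is
            the shortest path length using only intermediates 0..k.
            """
            n = len(dist)
            d = [row[:] for row in dist]   # leave the input intact
            for k in range(n):
                for i in range(n):
                    for j in range(n):
                        if d[i][k] + d[k][j] < d[i][j]:
                            d[i][j] = d[i][k] + d[k][j]
            return d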

  19. [6/8/01] Exam 2.

  20. [6/11/01] We looked at greedy algorithms, including the activity selection problem and Prim's minimal spanning tree algorithm. Material on greedy algorithms is in Chapter 17 of the text.
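
    A Python sketch of the greedy algorithm for activity selection (illustration only; representing activities as (start, finish) pairs is a convention I chose here):

        def select_activities(activities):
            """Pick a maximum-size set of non-overlapping activities.

            Greedy rule: sort by finish time, then commit to each activity
            that starts no earlier than the last chosen one finished.
            """
            chosen, last_finish = [], float('-inf')
            for start, finish in sorted(activities, key=lambda a: a[1]):
                if start >= last_finish:
                    chosen.append((start, finish))
                    last_finish = finish
            return chosen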

  21. [6/12/01] We finished greedy algorithms by seeing that Dijkstra's shortest path algorithm is a greedy algorithm that is a very small modification of Prim's minimal spanning tree algorithm. Then we began looking at NP-completeness. We defined the notion of a decision problem, and defined the class P of decision problems solvable in polynomial time. Nondeterministic algorithms were introduced. Material on the theory of NP-completeness is in Chapter 36 of C/L/R.
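
    To make the greedy connection concrete, here is a Python sketch of Dijkstra's algorithm (illustration only; the adjacency-list convention and names are mine):

        import heapq

        def dijkstra(graph, source):
            """Shortest path lengths from source; edge weights nonnegative.

            graph maps each vertex to a list of (neighbor, weight) pairs.
            Like Prim's algorithm, it greedily removes the closest
            unfinished vertex; only the priority key differs.
            """
            dist = {source: 0}
            heap = [(0, source)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float('inf')):
                    continue                    # stale heap entry
                for v, w in graph.get(u, []):
                    if d + w < dist.get(v, float('inf')):
                        dist[v] = d + w
                        heapq.heappush(heap, (dist[v], v))
            return dist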

  22. [6/13/01] We continued to talk about NP-completeness. After looking some more at nondeterministic algorithms, we defined the class NP, the notion of polynomial-time reductions and the class of NP-complete problems.