CSCI 4627
Spring 2005

Last modified: 4/10/05

Announcements

There will be an exam on Friday, April 15 concerning material from chapters 5 and 6 of the text. Practice questions are available.


Syllabus

See the syllabus for a description of the course.


Office hours

MW 10:30-11:45 and MW 2:00-3:15.


Manuals


Assignments

Please see the checklist for writing and turning in programs.

General homework

  1. Exercise 1
  2. Exercise 2
  3. Exercise 3
  4. Exercise 4
  5. Exercise 5
  6. Exercise 6

Compiler project

You will be finishing your compiler by making it generate code for an abstract machine. See project, part 5. See here for a human-readable description of the abstract machine.

The abstract machine has a (binary) machine language called the byte code, a (symbolic) assembly language, and an interpreter that reads and executes byte code. You can get an implementation of this machine from the following links. To use the abstract machine, you will want to get them all.
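
To give a feel for how such an interpreter works, here is a minimal sketch in C. The opcodes and stack layout are made-up illustrations; the actual byte code and machine structure are the ones described in the documents linked above.

    #include <stdio.h>

    /* Hypothetical opcodes, for illustration only. */
    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    static void run(const int *code)
    {
        int stack[256];
        int sp = 0;   /* stack pointer */
        int pc = 0;   /* program counter */

        for (;;) {
            switch (code[pc++]) {
            case OP_PUSH:  stack[sp++] = code[pc++];         break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
            case OP_PRINT: printf("%d\n", stack[--sp]);      break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void)
    {
        /* byte code for "print(2 + 3)" */
        int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
        run(program);
        return 0;
    }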

You can see a sample C- compiler written as an attribute grammar in Astarte. This compiler does not perform much type checking, but does translate to the abstract machine.

  1. Lexical analyzer
  2. Parser
  3. Table manager
  4. Type checker
  5. Translator
  6. Abstract machine


Practice exams


Lecture summaries

  1. [1/7/05] We looked at the anatomy of a compiler.

  2. [1/10/05] We discussed Tee diagrams, cross compilers and bootstrapping a compiler. Then we began looking at chapter 2, on lexical analysis. We looked at a simple finite state machine and translated it into a function that reads a lexeme and returns a token.
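
    As a sketch of that translation (the states follow the machine from class only loosely, and the token names are made up), a C function for a machine that recognizes identifiers and integers might look like this:

        #include <ctype.h>
        #include <stdio.h>

        /* Illustrative token codes; your compiler's will differ. */
        enum token { TOK_ID, TOK_NUM, TOK_EOF, TOK_ERROR };

        enum token get_token(FILE *in, char *lexeme)
        {
            int c = getc(in);
            char *p = lexeme;

            while (isspace(c)) c = getc(in);     /* start state: skip blanks */
            if (c == EOF) return TOK_EOF;

            if (isalpha(c)) {                    /* identifier state */
                do { *p++ = c; c = getc(in); } while (isalnum(c));
                ungetc(c, in); *p = '\0';
                return TOK_ID;
            }
            if (isdigit(c)) {                    /* number state */
                do { *p++ = c; c = getc(in); } while (isdigit(c));
                ungetc(c, in); *p = '\0';
                return TOK_NUM;
            }
            *p++ = c; *p = '\0';                 /* no transition: error token */
            return TOK_ERROR;
        }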

  3. [1/12/05] We continued on chapter 2, lexical analysis. We did more examples of finite state machines, and defined regular expressions, which have the advantage that they can yield smaller descriptions of sets of strings.

  4. [1/14/05] We finished lexical analysis by looking at Flex, a tool that builds lexical analyzers for you.

  5. [1/17/05] Holiday.

  6. [1/19/05] I answered questions about lexical analysis, and then began to look at Chapter 3, on context-free grammars.

  7. [1/21/05] We continued to look at context-free grammars. We defined derivations, including leftmost and rightmost derivations, and defined parse trees. We did an example involving expressions.

  8. [1/24/05] We defined ambiguous and unambiguous context-free grammars, and wrote both an ambiguous and an unambiguous grammar of simple expressions. We defined the parsing problems, and did an example of a predictive parser.
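
    For reference, the two grammars were along these lines (the exact symbols used in class may have differed). The first is ambiguous because id + id * id has two different parse trees; the second forces * to bind tighter than + and makes both operators left-associative.

        Ambiguous:      E -> E + E | E * E | ( E ) | id

        Unambiguous:    E -> E + T | T
                        T -> T * F | F
                        F -> ( E ) | id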

  9. [1/26/05] We are now looking at Chapter 4 on top-down (predictive) parsing. The basis for predictive parsing is a predictive parsing table. Grammars that can be parsed using a predictive parser with one lookahead token are called LL(1).

    This chapter starts by looking at recursive descent, which is a way of implementing predictive parsing in a programming language. Then it talks about LL(1) grammars, and shows how to modify grammars to make them LL(1). Section 4.3 discusses how to derive the predictive parsing table.

    Unfortunately, you cannot implement a recursive descent parser until you have created the parsing table, and you cannot define what an LL(1) grammar is until you have a definition of the parsing table. So I will cover this material in a different order: first the basic mechanism of predictive parsing and how the table fits into it; then how to construct the parsing table and the definition of LL(1) grammars; then how to implement a predictive parser with recursive descent, based on the table; and finally how to modify a grammar that is not LL(1) so that it is LL(1).

    This lecture we defined the First and Follow sets, and gave an algorithm for computing the First sets.

  10. [1/28/05] We gave an algorithm for computing the Follow sets, and for computing the predictive parsing table. We defined an LL(1) grammar, which is a grammar that has at most one entry per slot in the predictive parsing table.
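
    As a small worked example (the grammar is purely illustrative), take

        E  -> T E'
        E' -> + T E' | empty
        T  -> id

    Here First(E) = First(T) = { id } and First(E') = { +, empty }, while Follow(E) = Follow(E') = { $ } and Follow(T) = { +, $ }. The table therefore has M[E, id] = E -> T E', M[T, id] = T -> id, M[E', +] = E' -> + T E', and M[E', $] = E' -> empty. Every slot holds at most one production, so this grammar is LL(1).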

  11. [1/31/05] We looked at using recursive descent as a way of implementing a predictive parser.
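
    A sketch of such a parser in C, for the small LL(1) grammar used above (the token codes and the stub lexer are illustrative assumptions, not part of the course compiler):

        #include <stdio.h>
        #include <stdlib.h>

        /* Grammar:  E -> T E'    E' -> + T E' | empty    T -> id
           One procedure per nonterminal; the lookahead token picks the production. */

        enum token { TOK_ID, TOK_PLUS, TOK_EOF };

        /* Stub lexer: a fixed token stream standing in for "id + id". */
        static enum token input[] = { TOK_ID, TOK_PLUS, TOK_ID, TOK_EOF };
        static int pos = 0;
        static enum token lookahead;
        static enum token next_token(void) { return input[pos++]; }

        static void error(void) { fprintf(stderr, "syntax error\n"); exit(1); }
        static void match(enum token t)
        {
            if (lookahead == t) lookahead = next_token(); else error();
        }

        static void T(void) { match(TOK_ID); }               /* T -> id */
        static void Eprime(void)                             /* E' -> + T E' | empty */
        {
            if (lookahead == TOK_PLUS) { match(TOK_PLUS); T(); Eprime(); }
            /* otherwise take E' -> empty, since $ is in Follow(E') */
        }
        static void E(void) { T(); Eprime(); }               /* E -> T E' */

        int main(void)
        {
            lookahead = next_token();
            E();
            if (lookahead != TOK_EOF) error();   /* nothing may follow the expression */
            printf("accepted\n");
            return 0;
        }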

  12. [2/2/05] We continued to look at recursive descent by doing another example. We did some work on modifying a grammar to make it LL(1).

  13. [2/4/05] We looked at how to modify a grammar to make it LL(1), including removing left recursion.
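
    The standard transformation: a left-recursive pair A -> A alpha | beta is replaced by A -> beta A' and A' -> alpha A' | empty, which generates the same strings but can be parsed top-down. For example, E -> E + T | T becomes E -> T E' with E' -> + T E' | empty.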

  14. [2/7/05] We introduced the idea of a bottom-up parser, and contrasted it with a top-down parser. The bottom-up parser gets more information before it needs to make a decision.

    Bottom-up parsers are the subject of Chapter 5 of the text.

  15. [2/9/05] We looked at shift-reduce parsers, and how one can be controlled by a finite state machine that reads the stack from bottom to top.
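
    As a small illustration (the grammar here is chosen just for the example), with the grammar E -> E + id | id, a shift-reduce parse of id + id proceeds as follows:

        Stack        Input        Action
        $            id + id $    shift
        $ id         + id $       reduce by E -> id
        $ E          + id $       shift
        $ E +        id $         shift
        $ E + id     $            reduce by E -> E + id
        $ E          $            accept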

  16. [2/11/05] We looked at the concepts of an SLR(1) parser, based on LR(0) items. We saw how to compute the closure of a set of LR(0) items, and how to construct a finite state machine that reads the shift-reduce parsing stack and tells the next action to perform, based on the lookahead token.
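
    For example, using the same small grammar augmented with S' -> E, the closure of { S' -> .E } also contains E -> .E + id and E -> .id, because the dot stands in front of the nonterminal E. Following the transition on E from that state gives { S' -> E. , E -> E. + id }, the state in which the parser either accepts (on $) or shifts +.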

  17. [2/14/05] We saw how to construct a parsing table using the SLR(1) finite state machine, and how to use the table to control a shift-reduce parser.

  18. [2/16/05] We did another example of an SLR(1) parser.

  19. [2/18/05] We did yet another example of an SLR(1) parser, and looked at a weakness of the SLR(1) idea, and an idea of making better use of the lookahead.

  20. [2/21/05] We looked at canonical LR(1) parsers and LALR(1) parsers.

  21. [2/23/05] We went over homework and looked at the yacc parser generator. We saw how to design a simple compiler by attaching semantic actions to the productions. Each semantic action is performed when the corresponding reduce action is done during parsing.

  22. [2/25/05] We looked more at use of a parser generator.

  23. [2/28/05-3/11/05] We looked at attribute grammars, and how to use them to specify semantics. We looked at the symbol table, and how to use attributes to make symbol table entries and to do type checking.
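
    As a small sketch of the kind of symbol table involved (the structure and type codes here are assumptions, not the ones used in the project), a single-scope table with insert and lookup could be as simple as a linked list:

        #include <stdlib.h>
        #include <string.h>

        /* Illustrative type codes; the project's type representation differs. */
        enum type { TYPE_INT, TYPE_BOOL, TYPE_ERROR };

        struct entry {
            char         *name;   /* the identifier's lexeme */
            enum type     type;   /* attribute consulted during type checking */
            struct entry *next;
        };

        static struct entry *table = NULL;   /* head of the list */

        /* Called from the semantic action for a declaration. */
        void insert_symbol(const char *name, enum type t)
        {
            struct entry *e = malloc(sizeof *e);
            e->name = malloc(strlen(name) + 1);
            strcpy(e->name, name);
            e->type = t;
            e->next = table;
            table   = e;
        }

        /* Called at a use of an identifier; TYPE_ERROR means "not declared". */
        enum type lookup_symbol(const char *name)
        {
            struct entry *e;
            for (e = table; e != NULL; e = e->next)
                if (strcmp(e->name, name) == 0)
                    return e->type;
            return TYPE_ERROR;
        }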