The final exam will be Wednesday, May 8 from 5:00pm to 6:15pm in Austin 202.
Exam 2 is available. Answers to exam 2 are also available.
Please turn in your final Pascal compiler as assignment 5 using handin. Turn in your Astarte modification by email, with an explanation of what you did. Turn in only the files that you modified.
See the syllabus for a description of the course.
Office hours are MW 2:00-3:15pm and TTh 11-12:15pm or by appointment. My home page is www.cs.ecu.edu/~karl/.
Please adhere to the guidelines in the short checklist for writing programs.
A short tutorial on Solaris is available.
A Bison manual in Postscript form is available. See here for Bison documentation in other forms.
A Flex manual in Postscript form is available. See here for Flex documentation in other forms.
You will be compiling into the byte code for an abstract machine. The following links provide a description of the abstract machine and implementations of an interpreter, assembler and disassembler for the abstract machine.
You can get to the lab computers using ssh and sftp. Machines that allow ssh access are alfalfa.csci.ecu.edu, buckwheat.csci.ecu.edu, darla.csci.ecu.edu, porky.csci.ecu.edu, spanky.csci.ecu.edu and stymie.csci.ecu.edu.
Ssh for Windows is available from ftp://ftp.ssh.com/pub/ssh. Get SSHWinClient-3.1.0-build203.exe. It is a self-extracting executable. Just run it and it will install itself.
[1/7/02] We went over compilers, including T diagrams and the anatomy of a compiler.
[1/9/02] We began looking at the example translator that is described in Chapter 2 of the text. We wrote a grammar for simple expressions, and saw how semantic rules could be attached to the grammar rules to perform the translation. This is called syntax-directed translation. We used two forms of rules. One, a functional approach, computes an attribute for each node in a parse tree. The attribute of the root node is the translation. The second approach, an imperative approach, performs an action at each node in a parse tree. The second approach is generally simpler and smaller when it works, but is less flexible and sometimes harder to understand than the first approach.
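As an illustration of the two styles, here is a hypothetical Python sketch (not the course's C/flex/bison tooling) of a tiny infix-to-postfix translator over single-letter operands. The grammar, function names, and emit callback are all invented for the example.

```python
# Hypothetical sketch of syntax-directed translation (infix -> postfix).
# Grammar: E -> T { + T },  T -> F { * F },  F -> letter
# Style 1 is functional (each parse function returns a synthesized
# attribute); style 2 is imperative (emit output during the parse).

def parse_functional(s):
    """Return the postfix translation as a synthesized attribute."""
    pos = 0
    def factor():
        nonlocal pos
        ch = s[pos]; pos += 1
        return ch
    def term():
        nonlocal pos
        attr = factor()
        while pos < len(s) and s[pos] == '*':
            pos += 1
            attr = attr + factor() + '*'
        return attr
    def expr():
        nonlocal pos
        attr = term()
        while pos < len(s) and s[pos] == '+':
            pos += 1
            attr = attr + term() + '+'
        return attr
    return expr()

def parse_imperative(s, emit):
    """Call emit(token) as each piece of the translation is produced."""
    pos = 0
    def factor():
        nonlocal pos
        emit(s[pos]); pos += 1
    def term():
        nonlocal pos
        factor()
        while pos < len(s) and s[pos] == '*':
            pos += 1
            factor()
            emit('*')
    def expr():
        nonlocal pos
        term()
        while pos < len(s) and s[pos] == '+':
            pos += 1
            term()
            emit('+')
    expr()
```

Both versions turn a+b*c into abc*+; the imperative one never builds the translation as a value, which is why it is smaller but harder to rearrange.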
[1/14/02] We started looking at lexers carefully. A lexer can be built by creating a finite state machine and translating that machine to a program in a fairly mechanical way. Lexers produced this way tend to be efficient, easy to write and easy to modify. This material is in Chapter 3 (sections 3.1, 3.4 and 3.5.)
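The mechanical DFA-to-code translation can be sketched as follows; this is a hypothetical Python illustration with made-up state numbers and token names, not the lexer we will actually build.

```python
# Hypothetical sketch: a lexer driven by an explicit finite state machine.
# States: 0 = start, 1 = in an identifier, 2 = in a number.

def char_class(ch):
    if ch.isalpha():
        return 'letter'
    if ch.isdigit():
        return 'digit'
    return 'other'

# TRANSITION[state][class] -> next state; absent means "no move".
TRANSITION = {
    0: {'letter': 1, 'digit': 2},
    1: {'letter': 1, 'digit': 1},
    2: {'digit': 2},
}
ACCEPT = {1: 'IDENT', 2: 'NUMBER'}

def tokenize(text):
    tokens, i = [], 0
    while i < len(text):
        if text[i] in ' \t\n':
            i += 1
            continue
        state, last, j = 0, None, i
        while j < len(text):
            nxt = TRANSITION.get(state, {}).get(char_class(text[j]))
            if nxt is None:
                break
            state = nxt
            j += 1
            if state in ACCEPT:
                last = (j, ACCEPT[state])     # remember the longest match
        if last is None:
            i += 1                            # no token starts here; skip
        else:
            end, kind = last
            tokens.append((kind, text[i:end]))
            i = end
    return tokens
```

The longest-match rule (keep extending while the machine can move, remember the last accepting state) is the part that makes the translation mechanical: adding a token kind means adding states and transitions, not rewriting the driver.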
[1/16/02] We looked at using flex to create a lexer. We looked at a flex lexer and also part of the lexer produced by flex. We looked at the structure of a boiled-down version of the lexer. Then we began looking at context-free grammars.
[1/23/02] We went over the homework. We looked at grammars and properties of grammars. We saw how ambiguity can sometimes be removed. We defined derivations and sentential forms.
[1/28/02] We looked at table-driven top-down parsers and parsing tables. We defined the FIRST and FOLLOW sets for a grammar, and saw how to compute them.
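The FIRST computation can be sketched as a fixed-point iteration. This is a hypothetical Python sketch with an invented grammar encoding (a dict mapping each nonterminal to a list of rule bodies, with the empty body standing for an epsilon rule); FOLLOW is computed by a similar iteration.

```python
# Hypothetical sketch: computing FIRST sets by fixed-point iteration.
# Nonterminals are the grammar's keys; anything else is a terminal.
# '' (EPS) in a FIRST set means the nonterminal can derive epsilon.

EPS = ''

def first_sets(grammar):
    """grammar: dict nonterminal -> list of bodies (each a list of symbols)."""
    first = {nt: set() for nt in grammar}
    changed = True
    while changed:
        changed = False
        for nt, bodies in grammar.items():
            for body in bodies:
                # Walk the body while each symbol can derive epsilon.
                add_eps = True
                for sym in body:
                    if sym in grammar:                # nonterminal
                        new = first[sym] - {EPS}
                        if not new <= first[nt]:
                            first[nt] |= new
                            changed = True
                        if EPS not in first[sym]:
                            add_eps = False
                            break
                    else:                             # terminal
                        if sym not in first[nt]:
                            first[nt].add(sym)
                            changed = True
                        add_eps = False
                        break
                if add_eps and EPS not in first[nt]:
                    first[nt].add(EPS)
                    changed = True
    return first
```

On the usual expression grammar (E -> T E2, E2 -> + T E2 | eps, T -> F T2, T2 -> * F T2 | eps, F -> ( E ) | id) this yields FIRST(E) = FIRST(T) = FIRST(F) = { (, id }, with + and * plus epsilon for E2 and T2.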
[1/30/02] We turned our attention to bottom-up parsers. We will concentrate on shift-reduce parsers. We did an example of a shift-reduce parse, and saw how it (a) produces a rightmost derivation backwards, and (b) produces the parse tree from the bottom to the top. We did not see how the decisions that a bottom-up parser must make can be made. Bottom-up parsing is introduced in Chapter 4.5 of the text.
[2/4/02] We continued to look at shift-reduce parsing. The text covers precedence parsing, but it is very restrictive in the kinds of grammars that it can handle, and is not used much. We began to look at LR parsers, which rely on using a finite state machine that recognizes viable prefixes. The finite state machine reads the contents of the parser stack, from bottom to top, and tells the parser what action to perform based on the state of the finite state machine when it reaches the top of the stack. To avoid having to do the same computation of the finite state machine over and over, we memoize the states that the machine reaches in the stack itself, as a second "rail" of the stack.
[2/6/02] We looked at LR(0) items and the SLR(1) algorithm for constructing the LR parsing table.
[2/11/02] We did more examples of SLR(1) parsing.
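The whole shift-reduce machinery fits in a short sketch for the grammar E -> E + n | n, with the SLR(1) table built by hand. Everything here (the table encoding, the two-rail stack of (symbol, state) pairs) is a hypothetical Python illustration of the parser driver, not bison's actual implementation.

```python
# Hypothetical sketch: table-driven LR parsing for  E -> E + n | n.
# The hand-built SLR(1) table. '$' is end of input.
ACTION = {
    (0, 'n'): ('shift', 2),
    (1, '+'): ('shift', 3),
    (1, '$'): ('accept', None),
    (2, '+'): ('reduce', ('E', 1)),   # E -> n
    (2, '$'): ('reduce', ('E', 1)),
    (3, 'n'): ('shift', 4),
    (4, '+'): ('reduce', ('E', 3)),   # E -> E + n
    (4, '$'): ('reduce', ('E', 3)),
}
GOTO = {(0, 'E'): 1}

def lr_parse(tokens):
    """Return the sequence of reductions performed (a rightmost
    derivation, backwards)."""
    tokens = tokens + ['$']
    stack = [(None, 0)]               # (symbol, state): the two rails
    reductions, i = [], 0
    while True:
        state = stack[-1][1]
        act = ACTION.get((state, tokens[i]))
        if act is None:
            raise SyntaxError('parse error at token %d' % i)
        kind, arg = act
        if kind == 'shift':
            stack.append((tokens[i], arg))
            i += 1
        elif kind == 'reduce':
            lhs, rhs_len = arg
            popped = [sym for sym, _ in stack[-rhs_len:]]
            del stack[-rhs_len:]
            reductions.append((lhs, popped))
            # The memoized state under the popped handle tells us
            # where the machine was; GOTO moves it over the new nonterminal.
            stack.append((lhs, GOTO[(stack[-1][1], lhs)]))
        else:                         # accept
            return reductions
```

Note how the second rail of states is what lets a reduce avoid re-running the viable-prefix machine over the whole stack: after popping the handle, the state to resume from is sitting right there.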
[2/13/02] We looked at the Bison parser generator. See the documentation that is available from the course web page.
[2/18/02] We did examples of using a Bison parser to perform syntax-directed translation. We redid the conversion from infix notation to postfix notation in two styles. The first, an imperative style, prints out the translation as it goes. The second, a more functional style, produces a syntax tree as an attribute; a separate semantic stage then examines the syntax tree and translates it. The imperative style is shorter when it works, but is less flexible than the syntax tree approach. This material is in Chapter 5 of the text.
[2/20/02] We looked at computing attributes during parsing. Synthesized attributes are usually easy to compute, and there are ways to compute some kinds of inherited attributes. We saw how to convert some inherited attributes to synthesized attributes by changing the grammar. In general, you can get rid of inherited attributes by moving computations to a higher level in the parse tree, and using synthesized attributes to send information up the tree to the place where those computations are done. We saw how to build a syntax tree as an attribute.
[2/25/02] We went over the next assignment, managing types and tables for the compiler, and briefly discussed the assignment of modifying the Astarte compiler.
[2/27/02] Midterm exam 1.
[3/4/02] We looked in more detail at the production of syntax trees in parsers, and are now prepared to look at translation from the syntax trees.
[3/6/02] We began looking at translation of syntax trees into intermediate codes and other codes. The text uses a kind of intermediate code called a three-address code. We will translate into a byte code for a stack-based machine, which does not require register allocation or storage of temporaries, and so is a little easier to manage.
[3/11/02] Spring break.
[3/13/02] Spring break.
[3/18/02] We discussed generation of code for a stack machine. We looked at generating code for expressions. Please select projects to do for modifying the Astarte compiler and let me know what you would like to do by about the end of this week.
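Expression code generation for a stack machine is a simple postorder walk of the syntax tree. This is a hypothetical Python sketch; the instruction names (PUSH, LOAD, ADD, MUL) are invented for illustration and are not the abstract machine's actual byte codes. A tiny checking interpreter is included.

```python
# Hypothetical sketch: stack-machine code generation for expressions.

def gen(tree, code):
    """tree: ('num', k) | ('var', name) | (op, left, right), op in '+', '*'."""
    kind = tree[0]
    if kind == 'num':
        code.append(('PUSH', tree[1]))      # push a constant
    elif kind == 'var':
        code.append(('LOAD', tree[1]))      # push a variable's value
    else:
        gen(tree[1], code)                  # left operand onto the stack
        gen(tree[2], code)                  # right operand onto the stack
        code.append({'+': ('ADD',), '*': ('MUL',)}[kind])

def evaluate(code, env):
    """A tiny interpreter for the generated code, to check it."""
    stack = []
    for instr in code:
        if instr[0] == 'PUSH':
            stack.append(instr[1])
        elif instr[0] == 'LOAD':
            stack.append(env[instr[1]])
        elif instr[0] == 'ADD':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif instr[0] == 'MUL':
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack[-1]
```

Because operands are left on the stack, there is no register allocation and no naming of temporaries: the tree for a + 2*b compiles to LOAD a, PUSH 2, LOAD b, MUL, ADD.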
[3/20/02] We discussed generating code for boolean expressions, where the expressions need to jump to given addresses rather than generating code to put the value of the expression on the stack.
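One standard way to organize this is to compile each boolean construct with a pair of target labels, one for true and one for false; and/or/not then cost no instructions of their own, only label plumbing. A hypothetical Python sketch, with invented instruction and label names:

```python
# Hypothetical sketch: jumping code for boolean expressions.

_counter = [0]
def new_label():
    _counter[0] += 1
    return 'L%d' % _counter[0]

def gen_bool(tree, true_lab, false_lab, code):
    kind = tree[0]
    if kind == 'cmp':                        # ('cmp', op, x, y)
        _, op, x, y = tree
        code.append(('JUMPIF', op, x, y, true_lab))
        code.append(('JUMP', false_lab))
    elif kind == 'not':
        gen_bool(tree[1], false_lab, true_lab, code)   # just swap targets
    elif kind == 'and':
        mid = new_label()
        gen_bool(tree[1], mid, false_lab, code)        # false short-circuits
        code.append(('LABEL', mid))
        gen_bool(tree[2], true_lab, false_lab, code)
    elif kind == 'or':
        mid = new_label()
        gen_bool(tree[1], true_lab, mid, code)         # true short-circuits
        code.append(('LABEL', mid))
        gen_bool(tree[2], true_lab, false_lab, code)
```

For a < b and c < d, the first comparison jumps straight to the false target without ever evaluating the second, which is exactly the short-circuit behavior we want.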
[3/25/02] We finished talking about intermediate code generation and looked more at type checking and type inference.
[3/27/02] We talked about optimization and code generation, including breaking a program into basic blocks, and general strategies for code generation.
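The split into basic blocks follows the usual "leaders" rule: the first instruction, any target of a jump, and any instruction right after a jump each start a block. A hypothetical Python sketch, reusing an invented jump/label instruction encoding:

```python
# Hypothetical sketch: splitting an instruction list into basic blocks.
# Jumps end in their target label; ('LABEL', name) marks a target.

def basic_blocks(code):
    leaders = {0}
    for i, instr in enumerate(code):
        if instr[0] in ('JUMP', 'JUMPIF'):
            if i + 1 < len(code):
                leaders.add(i + 1)            # instruction after a jump
            target = instr[-1]
            for j, other in enumerate(code):  # the jump's target
                if other == ('LABEL', target):
                    leaders.add(j)
    cuts = sorted(leaders) + [len(code)]
    return [code[cuts[k]:cuts[k + 1]] for k in range(len(cuts) - 1)]
```

Each resulting block runs straight through: control enters only at the top and leaves only at the bottom, which is what makes per-block optimization safe.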
[4/1/02] We talked about the translator assignment, and how the translator should work.
[4/3/02] We talked about the anatomy of the Astarte compiler, and what would be involved in modifying it.
[4/8/02] We talked about data structures and mistakes that students had made in their data structure assignments.
[4/10/02] We talked about register allocation, including simple (ad-hoc) methods and register allocation by graph coloring.
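The graph-coloring method can be sketched with the simplify-and-select heuristic. This is a hypothetical Python illustration: the interference graph is a dict from each variable to the set of variables live at the same time, and spilling is reported but not handled.

```python
# Hypothetical sketch: register allocation by graph coloring
# (simplify-and-select, in the style of Chaitin's heuristic).

def color(interference, k):
    """interference: dict var -> set of neighbors; k registers available.
    Returns var -> register number, or None if a spill would be needed."""
    graph = {v: set(ns) for v, ns in interference.items()}
    stack = []
    # Simplify: repeatedly remove a node with fewer than k neighbors.
    # Such a node can always be colored later, whatever its neighbors get.
    while graph:
        node = next((v for v in sorted(graph) if len(graph[v]) < k), None)
        if node is None:
            return None               # every node has >= k neighbors: spill
        for other in graph[node]:
            graph[other].discard(node)
        del graph[node]
        stack.append(node)
    # Select: pop nodes back and give each the lowest register
    # not already used by one of its neighbors.
    assignment = {}
    while stack:
        node = stack.pop()
        used = {assignment[n] for n in interference[node] if n in assignment}
        assignment[node] = min(r for r in range(k) if r not in used)
    return assignment
```

The failure case is informative too: a triangle of three mutually interfering variables cannot be colored with two registers, which is when a real allocator would spill one of them to memory and retry.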