The final exam will be on Thursday, May 8, from 8:00pm to 9:15pm in Austin 304. Note that this is the last day of final exams. If anyone cannot take the exam at that time, please contact me.
Your second midterms will be available for pickup on Thursday, May 1, in my office. I will be in that afternoon.
Announcement on getting the abstract machine to run. Please follow these modified instructions. Set up a base directory (say, basedir). Create a subdirectory called ast. Add a file standard.aso of standard things. Here is a small example of one. Assemble it. (Here is a pre-assembled version.) (The source code used to generate an approximation to standard.s is here. The result required some modification.)
Copy messages.tar into basedir and untar it. You will get a directory called messages. It contains information that the interpreter needs to use. Now you can run your compiled programs. If you store test.s and the assembled version test.aso in basedir, then, with the current directory set to basedir, run
astr -b . test.aso
You should get the character r printed, followed by a newline. That is all this program does.
The assembler is available as source code.
The abstract machine is available now. It is currently implemented only on the Sparcs in the lab, because Brian keeps threatening to remove the dual-boot PCs.
When you write your code, indicate version 42, so your program will have the line
.version 42
To assemble a program called prog.asm, and to write the result into file prog.aso, use command
ast_assemble prog.asm prog.aso
Please use extension .aso on your binary file.
To disassemble prog.aso, and put the result into prog.asm, use command
ast_disassemble prog.aso prog.asm
You must have a standard library file called standard.aso. The package names of standard.aso must be "standard" and "Standardimp". This file is automatically read each time a program is loaded. The assembler version of this should begin
.package standard Standardimp
.version 42
In this package, write any definitions that you want to make available to everybody. Put standard.aso in some directory, which I will call homedir here.
To run prog.aso, use
astr -b homedir prog.aso
where homedir is the directory where you have written file standard.aso. You can create a script with this command inside it to avoid having to write homedir each time you run a program.
You can ask the interpreter to show the instructions that it is executing. Command
astr -b homedir -D prog.aso
will show each instruction that is performed. Other options include -DENV (show the environment at each step) and -DTYPE (show the type stack at each step).
Commands ast_assemble, ast_disassemble and astr are in /usr/local/bin. Do not try to run any of these on the server csci00.csci.ecu.edu. They will not run.
See abstract machine for a description of the abstract machine that your compiler will target.
See t_genglob.c and t_gentype.c for examples of code to generate the preamble of a definition.
See the syllabus for a description of the course.
Please adhere to the guidelines in the short checklist for writing programs.
A short tutorial on Solaris is available.
A Flex manual in Postscript form is available. See here for Flex documentation in other forms.
A Bison manual in Postscript form is available. See here for Bison documentation in other forms.
/export/stu/classes/csci5220/bin/handin csci5220 1 minnie.lex
Ssh for Windows is available from ftp://ftp.ssh.com/pub/ssh. Get SSHSecureShellClient-xxx.exe where xxx is the version. It is a self-extracting executable. Just run it and it will install itself.
[1/7/03] We went over the syllabus and discussed the overall structure of a compiler.
[1/9/03] We began writing a translator from infix expressions to postfix expressions. The approach was to begin with the semantics of the translation, given by a collection of rules of inference. We then introduced a little bit of syntax, rewriting the (semantic) rules of inference into a syntactic form. Our first syntactic form failed to specify precedence rules. We improved on the syntactic form by altering the syntactic rules. That gave a complete description, but yielded a difficult parsing problem. We modified the syntactic rules again to make the parsing problem easy to solve in a predictive fashion, where it is possible to decide what to do next by looking only at the next character. We produced a table that indicates what to do in every possible situation.
[1/14/03] We continued on the translator from infix to postfix. We need to take the syntactic and semantic rules plus the parsing table and produce a program. We will produce two versions. The first will follow the rules exactly, building up the translation as a string. The second will take advantage of a characteristic of the relationship between the syntactic rules and the semantic rules to produce a translator that writes the translation a little at a time instead of building it up as a string.
[1/16/03] We looked at lexical analyzers (lexers). A lexical analyzer can be created by drawing a diagram of a finite state machine, and then writing a direct implementation of the finite state machine. The process is very mechanical -- so mechanical that it can be automated. Lex and Flex are examples of lexer generators, which take a description of a set of lexemes and rules for how to process them, and produce a lexical analyzer. We will use Flex to produce lexical analyzers.
[1/21/03] Today was a Monday according to the University calendar. We did not meet.
[1/23/03] Snow day. No class.
[1/28/03] We talked about parse trees, syntax trees and context-free grammars. We looked at the role of a parser in converting a sequence of tokens into a tree that shows the structure. An unambiguous grammar clearly defines the parsing problem. An ambiguous grammar can still define the parsing problem if you add additional disambiguating rules, such as precedence rules.
We discussed the compiler assignment briefly. See the language description. Get file tokens.h, and build a lexer for this language.
[1/30/03] We began discussing the parsing problem. Two major families of parsing algorithms are the top-down predictive parsers and the bottom-up parsers. We will begin to study them, with more emphasis on bottom-up parsers. We began to look at top-down parsers, and defined the FIRST and FOLLOW sets. We saw how to compute the FIRST and FOLLOW sets. See p. 188 of the Dragon Book.
[2/3/03-2/27/03] We covered the following topics in February.