CSCI 4630, Spring 2001
Last modified: 5/2/01
Announcements
Phil Cogbill has a site for this class,
cs4630.3iem.net, that has
some interesting links.
Syllabus
This is a course on the foundations of operating systems.
Please see the
syllabus for details.
Office hours
MW 12:00-2:00, TTh 5:00-5:30
Assignments
Assignments will be posted here as they become available.
- Homework assignment 1 (due 1/23/01)
- Homework assignment 2 (due 1/30/01)
- Programming assignment 1 (due 2/20/01)
- Homework assignment 3 (due 2/13/01)
- Homework assignment 4 (due 2/20/01)
- Programming assignment 2 (due 3/20/01)
- Homework assignment 5 (due 3/7/01)
- Homework assignment 6 (due 4/10/01)
- Programming assignment 3 (due 4/24/01)
Quizzes
Quizzes and practice quizzes will be posted here as they become
available.
Checking your grade
If you have obtained a password, you can
check your grades
as I have them recorded. You can obtain a password from me.
To obtain one by email, send me an email indicating the password
you want; choose a string of 4 or 5 letters/digits.
Summary of material covered
- [1/9/01] We covered most of chapter 1 of the text, on the
history of operating systems and their overall architecture.
- [1/11/01] We discussed chapter 2 and part of chapter 3 of the
text. This part is an overview of the structure of operating systems.
- [1/16/01] We finished chapter 3 and began looking at chapter
4. We discussed the fork and execv Unix system calls.
- [1/18/01] We continued to look at chapter 4. This is the
beginning of a long look at processes and concurrency. I did
an example of
concurrent processing, using fork, execv and wait.
- [1/23/01] We discussed inter-process communication (IPC), which
is covered in chapter 4 of the text. IPC
mechanisms include:
- pipes and communication through the file system (see
example);
- shared memory;
- message passing (MS Windows, Mach);
- remote procedure call.
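As a rough illustration of the pipe mechanism, this C sketch pushes a message through a pipe and reads it back. For brevity both ends stay in one process; real IPC would fork first, with parent and child each closing the end they do not use (pipe_roundtrip is an invented name):

```c
#include <string.h>
#include <unistd.h>

/* Bytes written to fd[1] come out of fd[0]. Returns the number of
   bytes read back, or -1 if the pipe could not be created. */
ssize_t pipe_roundtrip(const char *msg, char *out, size_t n) {
    int fd[2];
    if (pipe(fd) < 0)
        return -1;
    write(fd[1], msg, strlen(msg) + 1);  /* send, including the NUL */
    close(fd[1]);                        /* done writing */
    ssize_t got = read(fd[0], out, n);   /* receive */
    close(fd[0]);
    return got;
}
```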
- [1/25/01] We finished covering chapter 4 on processes, and
began chapter 5 on threads. A thread is a component of a process that
represents an execution point, or a "thread of control".
Think of a thread as an active worker. In older operating systems,
each process had one thread. In more modern operating systems,
each process can have one or more threads. The threads in a process
share the resources of the process, such as memory and open files,
but each thread has its own register values and its own run-time stack.
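One way to see the sharing is with POSIX threads; in this hypothetical sketch, two threads update a shared global while each keeps a private variable on its own stack:

```c
#include <pthread.h>

/* Threads in one process share memory ('shared' below is visible to
   both) but each has its own stack ('local' is per-thread). */
int shared;
pthread_mutex_t shared_lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    int local = *(int *)arg;            /* lives on this thread's stack */
    pthread_mutex_lock(&shared_lock);
    shared += local;                    /* shared by every thread */
    pthread_mutex_unlock(&shared_lock);
    return NULL;
}

/* Run two workers and return the value they built up together. */
int run_two_workers(void) {
    pthread_t t1, t2;
    int a = 1, b = 2;
    shared = 0;
    pthread_create(&t1, NULL, worker, &a);
    pthread_create(&t2, NULL, worker, &b);
    pthread_join(t1, NULL);             /* wait for both to finish */
    pthread_join(t2, NULL);
    return shared;
}
```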
- [1/30/01] We discussed more material from chapter 5 of the
newer text on threads,
and began looking at the scheduler. There is material on threads
in chapter 4 of the older text.
- [2/1/01] We had the first quiz and finished talking about
scheduling. We are now ready to begin looking at synchronization.
- [2/6/01] Synchronization is communication whose primary
purpose is to coordinate the timing of processes rather than to
transfer information between them.
A fundamental synchronization problem is the critical section
problem. We discussed algorithms for the critical section problem that
use busy-waiting and shared memory.
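One classic busy-waiting, shared-memory algorithm for two processes is Peterson's. A sketch in C, using C11 atomics to stand in for the sequentially consistent memory the classic algorithm assumes:

```c
#include <stdatomic.h>

/* Peterson's two-process solution to the critical section problem. */
atomic_int flag[2];   /* flag[i] == 1: process i wants to enter */
atomic_int turn;      /* which process should defer */

void peterson_lock(int i) {
    int other = 1 - i;
    atomic_store(&flag[i], 1);     /* announce interest */
    atomic_store(&turn, other);    /* give the other process priority */
    /* busy-wait while the other process is interested and has priority */
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;
}

void peterson_unlock(int i) {
    atomic_store(&flag[i], 0);     /* leave the critical section */
}
```

The "turn" variable is what breaks ties: if both processes announce interest at once, whichever wrote turn last is the one that waits.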
- [2/8/01] We discussed more approaches to the critical section
problem. Stronger memory primitives, such as test-and-set and
atomic swap, make solutions to the critical section problem
easier to write. Semaphores make things easier still. If semaphores are
implemented using process queues, they can both be made fair and
avoid busy waiting: instead of spinning, a process waits in a queue
associated with the semaphore.
Semaphores require that one process wait for another to complete
a job. They are a kind of lock, allowing a process to lock a
resource (a critical section, a data structure, or anything else
associated with the semaphore).
We also talked briefly about wait-free synchronization. A wait-free
algorithm never requires one process to wait for another one.
I asked you to implement a wait-free counter.
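One possible approach to such a counter, sketched here under the assumption that the hardware provides an atomic fetch-and-add (as C11's atomic_fetch_add does):

```c
#include <stdatomic.h>

/* A wait-free counter: atomic fetch-and-add completes in a bounded
   number of steps regardless of what other threads are doing, so no
   thread ever waits for another. */
atomic_long counter;

long counter_increment(void) {
    return atomic_fetch_add(&counter, 1);   /* returns the old value */
}

long counter_read(void) {
    return atomic_load(&counter);
}
```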
- [2/13/01] We continued to discuss synchronization. We looked
at wait-free synchronization and at synchronization primitives in
languages, such as monitors. Java classes behave very similarly to
monitors, if you use synchronized methods.
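The analogy can be sketched in C with pthreads: a struct whose every operation first acquires the object's lock behaves much like a Java class with only synchronized methods (the counter_monitor type is invented for illustration):

```c
#include <pthread.h>

/* A tiny "monitor": every operation enters by taking the object's
   lock, as if every method were declared synchronized. */
typedef struct {
    pthread_mutex_t lock;
    int value;
} counter_monitor;

void counter_init(counter_monitor *c) {
    pthread_mutex_init(&c->lock, NULL);
    c->value = 0;
}

void counter_add(counter_monitor *c, int n) {
    pthread_mutex_lock(&c->lock);    /* enter the monitor */
    c->value += n;
    pthread_mutex_unlock(&c->lock);  /* leave the monitor */
}

int counter_get(counter_monitor *c) {
    pthread_mutex_lock(&c->lock);
    int v = c->value;
    pthread_mutex_unlock(&c->lock);
    return v;
}
```

A full monitor also provides condition variables for waiting inside the monitor; pthread_cond_wait plays that role in the pthreads world.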
- [2/15/01] We looked at deadlock and resource allocation.
Although deadlock can be a serious problem, and there are some
methods available for preventing it, most operating systems
do not attempt to prevent deadlock.
- [2/20/01] We looked at the use of daemons to control resources,
and thus to reduce the possibility of deadlock. We began looking at
memory management.
- [2/22/01] We began to study memory management. We looked at
contiguous allocation and fragmentation problems, and a few basic
memory management strategies. We began to look at paging.
- [2/27/01] We looked at paging and how paging can be managed.
The problem to be overcome is that the obvious paging algorithms are
both too slow and use too much memory. Two-level page tables
decrease the memory requirements, but slow the algorithm down.
Using a level 1 cache to avoid address translation, and a translation
lookaside buffer to speed up address translation when it is required,
greatly increases the speed and compensates for the slower translation
algorithm.
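A two-level lookup can be sketched as follows. The 10/10/12-bit split is one illustrative layout for 32-bit addresses with 4 KB pages, not any particular machine's:

```c
#include <stdint.h>

#define OFFSET_BITS 12   /* 4 KB pages */
#define INDEX_BITS  10   /* 10-bit index at each level */

typedef struct {
    uint32_t frame;      /* physical frame number */
    int valid;           /* is the page in memory? */
} pte;

/* Walk a two-level page table. Returns the physical address, or
   0xFFFFFFFF to signal a page fault. Note the cost: two extra memory
   accesses per translation, which is what the TLB exists to avoid. */
uint32_t translate(pte **level1, uint32_t vaddr) {
    uint32_t i1  = vaddr >> (OFFSET_BITS + INDEX_BITS);
    uint32_t i2  = (vaddr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
    uint32_t off = vaddr & ((1u << OFFSET_BITS) - 1);

    pte *level2 = level1[i1];            /* first memory access */
    if (level2 == 0 || !level2[i2].valid)
        return 0xFFFFFFFFu;              /* page fault */
    return (level2[i2].frame << OFFSET_BITS) | off;  /* second access */
}
```

The memory saving comes from the valid bits: level-2 tables for unused regions of the address space are simply never allocated.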
- [3/1/01] We looked at virtual memory, and began looking at
page replacement algorithms. Virtual memory relies on keeping some
pages on disk instead of in main memory. A page replacement algorithm
must decide which page in physical memory to replace when a page
needs to be brought in from disk. The optimal algorithm is off-line: it
presumes that you know the sequence of page accesses in advance.
The least-recently-used algorithm is more realistic, since it is
on-line: it does not need knowledge of the future, only of the past.
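The LRU policy can be sketched by timestamping each frame on use and evicting the frame with the oldest timestamp (an illustration only; real kernels approximate LRU with reference bits rather than exact timestamps):

```c
/* Each frame remembers the "time" of its last use. */
typedef struct {
    int  page;        /* which page occupies this frame */
    long last_used;   /* logical time of the most recent access */
} frame;

/* LRU victim selection: the frame whose last use is furthest in the
   past is the one replaced on a page fault. */
int choose_victim(frame frames[], int n) {
    int victim = 0;
    for (int i = 1; i < n; i++)
        if (frames[i].last_used < frames[victim].last_used)
            victim = i;
    return victim;
}
```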
- [3/6/01] We continued looking at virtual memory, including
more on page replacement algorithms and ways of allocating page frames
to processes, including the working-set approach and the page-fault-frequency
approach.
- [3/8/01] We finished looking at virtual memory and
memory management.
- [3/20/01] We began to look at file systems. We covered
fundamental concepts, including the need to separate the file system
from low level device management and directory structures.
- [3/22/01] We continued to look at file systems, including
security information and how files are stored on a disk.
- [3/27/01] We looked in more detail at how files are stored
on a disk.
- [3/29/01] We had quiz 3.
- [4/3/01] We discussed free space management and
file allocation strategies in BSD Unix and
Windows NT.
- [4/5/01] We looked at a few more file system issues, including
RAID, disk scheduling algorithms and bad block management.
- [4/10/01] We began looking at networking. We discussed
network protocols and network topologies.
- [4/12/01] We looked at communication via sockets.
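As a small illustration, socketpair() creates a pair of already-connected sockets in one call, which is convenient for local IPC; communication over a network would use socket(), bind(), and connect() instead (socket_roundtrip is an invented name):

```c
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>

/* Send a message into one end of a connected socket pair and read it
   out of the other. Returns the number of bytes received, or -1. */
ssize_t socket_roundtrip(const char *msg, char *out, size_t n) {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
        return -1;
    write(sv[0], msg, strlen(msg) + 1);  /* send, including the NUL */
    ssize_t got = read(sv[1], out, n);   /* receive on the other end */
    close(sv[0]);
    close(sv[1]);
    return got;
}
```

Unlike a pipe, each end of the pair is bidirectional: either side can both read and write.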
Remaining lectures concerned networks and security.