CSCI.4210 Operating Systems Fall, 2009, Class 1
Introduction, History of Operating Systems

What is an operating system
The term operating system arose because it was originally a software system designed to replace the human computer operator.

History of Operating Systems

The history of computing and the history of operating systems are intimately intertwined.

One of the best ways to understand why an operating system is important is to get an understanding of what computers looked like before modern operating system concepts were developed.

The earliest computers, such as the ENIAC of the 1940s, had no programming languages, not even assemblers, and did not even have stored programs. They were programmed by setting switches or plugboards similar to early telephone switchboards for each run.


The ENIAC (Electronic Numerical Integrator and Computer). Until the ENIAC, the word computer referred to the human, not the machine.

A major advance in the early 1950s was the use of punched cards, which permitted users to write programs on cards rather than by setting switches or wiring plugboards. Shortly thereafter, the first successful high level programming language, FORTRAN (now spelled Fortran), was developed by John Backus at IBM.

The earliest computers used vacuum tubes, which had a tendency to blow out. The invention of the transistor was a major advance.

However, throughout the 1950s, computers did not have operating systems as we know them. Users would submit programs by handing a box of punch cards to the operator, who would then feed the programs in consecutively. Program operations were determined by Job Control Language (JCL), and the output would be sent to a line printer. If the program was written in FORTRAN, the operator first had to load the FORTRAN compiler, often on another set of punched cards, or perhaps on a tape. Accounting was done by the operator looking at the clock on the wall.

For a brief period, computing was dominated by Univac, but IBM quickly surpassed it. IBM had been in the punched-card calculating business. It introduced its first commercial computer, the IBM 701, in 1953, and IBM researchers developed much of the hardware that made modern computing possible, such as the magnetic drum (a forerunner of the disk).

The overhead associated with a human operator loading each program became unacceptable, and so the operating system was born. Suppose you could have a system that could load a job, including whatever libraries were needed, run it, print the output, clean up, do the accounting, and then load the next job.

Spooling (Simultaneous Peripheral Operation On Line) - jobs are read onto a tape or drum and loaded sequentially, so the overhead is reduced (although jobs were still batch jobs).

A number of early computers (1950s) had primitive operating systems.

The IBM 360 - the first modern commercial operating system

A major breakthrough in operating systems was made by IBM with the introduction of the IBM 360 in 1964. This was a whole family of computers which ran more or less the same operating system. The operating system was more than a million lines of assembler code, and this was the largest piece of software ever written up to that time.

The IBM 360 series was the system that brought computing into the mainstream of the corporate world. Until its introduction, computing was a specialty niche; by the late 1960s, essentially all corporations had computer operations in place for inventory, payroll, accounting and myriad other purposes, and the vast majority of these functions were being done on IBM mainframes.


A typical IBM mainframe of the late 1960s

The IBM 360 introduced many of the important concepts of modern operating systems.

One of the most important was multiprogramming, in which several jobs could be in memory at the same time. Compare this to the batch computers prevalent up to that time, in which a single job was loaded and run to completion, followed by another batch job. One of the problems with batch loading is that if the program had to wait for some sort of I/O, the CPU sat idle until the I/O operation was completed, which was very wasteful. With multiprogramming in the IBM 360 series, several jobs could be loaded into memory at the same time, and while one job was waiting for an I/O operation, another job could be running, thus making much more efficient use of the CPU.

Aside: Why is multiprogramming so important? I/O operations are thousands of times slower than CPU operations. A very common activity of real world computer programs is to read some data from a file, perform some calculation on this data, and then write the data back to the file. Here are some typical times.
Read a record from a file      15 μs
Execute 100 instructions        1 μs
Write a record to a file       15 μs
TOTAL                          31 μs
Percent CPU utilization        ~3%
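
The 3% figure is just the ratio of the time the CPU is actually busy to the total elapsed time for one read-compute-write cycle, using the representative timings above:

    CPU utilization = 1 μs / (15 μs + 1 μs + 15 μs) = 1/31 ≈ 3.2%, or roughly 3%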

It is easy to see that if the CPU could stop running this job and run some other job during the read and the write, CPU efficiency could be dramatically improved. Multiprogramming accomplishes this.

However, multiprogramming introduced a new set of problems which had to be solved. Memory was divided into a number of partitions, each of which could hold a separate job. The operating system needed some kind of protection mechanism so that a job could not accidentally (or purposefully) access the memory of another job. Multiprogramming also made job scheduling (deciding which job to run next) much more complicated.

The System/360 was also the first commercial operating system to allow timesharing, or multitasking, in which the CPU executes multiple jobs seemingly at the same time by switching among them. This became particularly important as computers became more interactive. By the late 1970s, with the widespread use of terminals, many users could connect to a computer simultaneously and submit jobs directly rather than walking a box of cards to the computer room. The first terminals were teletype machines, where the user typed onto a piece of paper and the reply was printed on the same paper.


A teletype machine

These were replaced by terminals similar to what we use today except that the interaction was line oriented. Even on today's systems, an I/O port which reads and writes on a character basis is called a tty, which is short for teletype.

The Personal Computer Revolution

Throughout the 1960s and 1970s the term computer referred for the most part to mainframes: huge computers, often water cooled, usually built by IBM, which sat in a computer center. This period also saw the development of minicomputers such as the Digital Equipment PDP-11 and Vax, and the development in the research community of the Unix operating system, but Unix did not have much of a commercial impact until nearly two decades after its initial development.

In the late 1970s and early 1980s, the personal computer, also known as the microcomputer, came on the scene, and revolutionized what computing was all about.

Ted Hoff, the developer of the first microprocessor, worked for Intel and was a Rensselaer graduate. The Intel 4004, released in 1971, was an entire CPU on a single chip with a 4-bit word. It was followed a year later by the Intel 8008, and by the 8080 a few years later (1975). The Altair 8800, manufactured by MITS, was the first microcomputer. It had no keyboard and no monitor, so by itself it could not do much, but it could be hooked up to a teletype, and Bill Gates and Paul Allen wrote a BASIC interpreter for it.


Bill Gates and Paul Allen at Lakeside High, using a teletype to connect to a mainframe.

There were many makers: Atari, Commodore, Radio Shack, and later Apple (the Apple II, released in 1977, was a big hit). These machines were used for word processing, but the first killer app was the spreadsheet (VisiCalc).

The first PC operating system to become more or less standard was CP/M, written by Gary Kildall of Digital Research.

IBM initially dismissed the PC as a toy, but in the late 1970s realized how important it would be.

When IBM decided to enter the PC market, they needed an operating system. They first went to see Gary Kildall, but apparently he refused to sign the extensive non-disclosure agreements that IBM required. The IBM people went to their second choice, Bill Gates, who ran a small software development company in Seattle which developed compilers. According to popular legend, IBM did this because a top IBM executive was a friend of Bill's mother. Not knowing any better, Bill Gates signed the non-disclosure agreements without really reading them, and after some negotiations, he agreed to develop the operating system for the new IBM personal computer. The key clause in the contract stated that he was free to sell his operating system to other companies as well as IBM. There was only one small problem. Bill Gates had never written an operating system. He knew of a company called Seattle Computer Products which had written an operating system similar to CP/M called QDOS (Quick and Dirty Operating System). It consisted of about 4,000 lines of code. Bill Gates bought this OS for $50,000, made some modifications, and presented it to IBM as MS-DOS (Microsoft Disk Operating System).


The IBM PC

The IBM PC was a phenomenal success. IBM planned to sell about 200,000 of their new PCs in the first year; they sold that many in the first month. The IBM PC drove almost all of the competition except Apple out of business. In 1984 Compaq came out with the first IBM clone, a computer which could run all of the IBM software seamlessly while undercutting IBM on price fairly substantially. This was why Microsoft's ability to sell MS-DOS to other manufacturers was so important. Many other companies soon developed their own clones, and by the late 1980s IBM was a minor player in the PC market. But every clone ran the MS-DOS operating system. Even today, some people still refer to a computer with an Intel processor running a Microsoft operating system as an IBM compatible.

I have always been puzzled by the fact that IBM, with thousands of programmers and vast experience with operating systems, had to go to Microsoft to buy an OS that a couple of kids had written in a few months. The closest thing to an explanation that I have seen is that IBM knew that it was too bureaucratic to get anything done in such a short period of time.

The only serious competitor to the IBM PC and PC compatibles was the Apple Macintosh (1984).

Networking

Some computers have been networked since the 1970s; in the 1990s, with the widespread use of the Internet, essentially all computers became networked, at least part of the time. This has fundamentally changed the way that computers are used and thus has required changes in operating system design. Networking features are now built into essentially all operating systems, and increasingly, features of the operating system itself are distributed. One example of this is a networked file system, in which files are distributed across several different computers, but this is transparent to the user.

Sun Microsystems had a slogan, "The network is the computer," meaning that we should think of the entire network as a computer rather than just the box that happens to be on your desk. The big thing now is "cloud computing," in which the network is literally the computer. Rather than each person having application software such as a word processor and a spreadsheet on their own computer, they access such software through the Web. Storage can also be networked; instead of storing your files on your own computer, they are stored somewhere out on the web in an undisclosed location. Google Documents is a good example of this.

Even operating systems can be distributed. We will discuss this in some detail later in the course.

Computer Hardware

A prerequisite for this course is a course on computer hardware, such as CSCI.2500 Computer Organization or ECSE.2610 Computer Components and Operations. However, many students can probably use a quick review of computer hardware concepts.

Here is a very much simplified view of a computer system.

A computer system has four main components: one or more processors (CPUs), main memory, I/O devices, and a bus that connects them.

CPUs can usually run in one of two modes: kernel mode, also called privileged mode or supervisor mode, and user mode. The term kernel refers to the core of the operating system; the kernel provides low level services such as memory management, process scheduling, and basic hardware interaction. Needless to say, the kernel runs in kernel mode. A job running in kernel mode has access to all machine instructions and all kernel data structures. This is why kernel mode is sometimes called privileged mode. In contrast, application programs run in user mode. A job running in user mode cannot execute certain protected instructions and cannot access kernel data structures or the hardware directly.

There is usually a hardware flag which is set to indicate which mode the processor is running in. Recall that the CPU itself does not know anything about the operating system or which job it is running; it just executes the next instruction. When an application program, running in user mode, needs to access a kernel service such as performing an I/O operation, it executes a system call, which is a hook into the kernel to perform the requested service. One of the first things that a system call does is change the flag so that the processor runs in privileged mode.

Intel CPUs actually support four different levels of privilege, but in practice, only the two levels described above are used.
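
To make this concrete, here is a minimal sketch in C for a Unix-like system (write() is just one convenient example of a system call). The application runs in user mode; the write() wrapper traps into the kernel, which performs the I/O while running in kernel mode and then returns control to the program in user mode.

    /* Minimal sketch: a user-mode program requesting a kernel service
       through a system call. */
    #include <string.h>    /* strlen() */
    #include <unistd.h>    /* write()  */

    int main(void)
    {
        const char *msg = "hello from user mode\n";
        /* File descriptor 1 is standard output.  The kernel, running in
           privileged mode, performs the device I/O on our behalf and then
           returns control to this program in user mode. */
        write(1, msg, strlen(msg));
        return 0;
    }

On a Linux system, running this program under strace shows the write system call crossing the user/kernel boundary.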

Processors (CPUs) repeatedly execute the fetch-decode-execute cycle:

  1. Fetch an instruction from memory
  2. Decode it to determine its type and operands
  3. Execute it
  4. Check for interrupts
  5. Go to step 1
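
The cycle can be mimicked in software. Here is a minimal sketch in C (the instruction set and memory layout are invented purely for illustration) of a fetch-decode-execute loop:

    /* Minimal sketch of the fetch-decode-execute cycle for a made-up,
       three-instruction machine. */
    #include <stdio.h>

    enum { OP_HALT = 0, OP_PRINT = 1, OP_ADD = 2 };

    int main(void)
    {
        /* "Memory" holding a tiny program: ADD 2 3, PRINT, HALT */
        int memory[] = { OP_ADD, 2, 3, OP_PRINT, OP_HALT };
        int pc = 0;                           /* program counter */
        int acc = 0;                          /* accumulator     */

        for (;;) {
            int opcode = memory[pc++];        /* 1. fetch        */
            switch (opcode) {                 /* 2. decode       */
            case OP_ADD:                      /* 3. execute      */
                acc = memory[pc] + memory[pc + 1];
                pc += 2;
                break;
            case OP_PRINT:
                printf("acc = %d\n", acc);
                break;
            case OP_HALT:
                return 0;
            }
            /* 4. a real CPU would check for pending interrupts here */
        }
    }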

All modern processors achieve speedup by pipelining, i.e., executing steps in parallel so that multiple instructions are being executed at the same time. This is done by having separate fetch, decode, and execution units on the CPU.

Modern CPUs can achieve even more speedup by having multiple execution units, and the latest development in processor design is the multicore chip; that is, two or more complete CPUs on a single chip.
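
One way software takes advantage of a multicore chip is by dividing its work among several threads, which the operating system can schedule on different cores at the same time. A minimal sketch (POSIX threads assumed; the four-way split and the busy-work loop are arbitrary):

    /* Minimal sketch: four CPU-bound threads that a multicore system
       can run on four different cores simultaneously. */
    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        long id = (long) arg;
        long sum = 0;
        for (long i = 0; i < 100000000L; i++)   /* CPU-bound busy work */
            sum += i % 7;
        printf("thread %ld done (sum = %ld)\n", id, sum);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[4];
        for (long i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, worker, (void *) i);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        return 0;
    }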

CPUs have a number of special purpose registers, including the program counter, which holds the address of the next instruction to fetch; the stack pointer, which points to the top of the current stack in memory; and the program status word (PSW), which contains condition codes and the bit that indicates whether the processor is in kernel mode or user mode.

Interrupts

Modern operating systems are interrupt driven, so it is important to understand what an interrupt is and how it works. An interrupt is a hardware mechanism that allows other modules to interrupt the normal flow of the processor. After completing each instruction, the CPU checks a flag to see if any interrupts have occurred; if this flag is set, instead of executing the next instruction in the current process, it switches to an interrupt handler. Interrupts can be caused by a number of different types of events. The most obvious ones are I/O events, such as the user hitting a key on the keyboard, moving the mouse, or a read from the disk completing; other events which can trigger interrupts include division by zero or a memory access violation.
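
Although real interrupt handlers live inside the kernel, Unix signals provide a user-space analog that is easy to experiment with: the kernel interrupts the normal flow of the process and transfers control to a registered handler function. A minimal sketch, assuming a Unix-like system:

    /* Minimal sketch: a signal handler as a user-space analog of an
       interrupt handler.  Pressing Ctrl-C "interrupts" the program. */
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_signal = 0;

    static void handler(int signum)
    {
        (void) signum;
        got_signal = 1;              /* set a flag, like an interrupt flag */
    }

    int main(void)
    {
        signal(SIGINT, handler);     /* register the "interrupt handler"   */
        while (!got_signal)
            pause();                 /* wait until interrupted (Ctrl-C)    */
        printf("interrupt handled\n");
        return 0;
    }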

Hardware interrupts are generally controlled by a special chip called the interrupt controller, such as the Intel 8259. This is a part of the chip set provided on the motherboard. Input/output devices are connected to this interrupt controller.

Here is a list of events which might cause an interrupt:

  1. I/O events, such as a keystroke, a mouse movement, or a disk read completing
  2. Timer (clock) interrupts, which the operating system uses for scheduling and timekeeping
  3. Program exceptions, such as division by zero or a memory access violation
  4. Software interrupts, such as the trap generated by a system call

The code for interrupt handlers is a part of the operating system. All interrupt handlers first have to save the values of all of the registers. They may have to switch from user mode to kernel mode. Then they handle the interrupt. In most cases, when the handler completes its work, it restores the registers to their saved values and returns control to the program. In some cases, such as a memory exception error or other fatal exception, control is not returned to the running program but rather is switched to some other process.

One issue which arises in writing interrupt handlers is that the handler itself may be interrupted. There are two approaches to solving this problem.

Many systems allow an interrupt handler to mask interrupts so that the interrupt handler routine can run to completion without danger of being interrupted. In order to do this, the operating system needs a mechanism for queuing interrupts. When one interrupt handler completes, it checks to see if other interrupts have occurred and if they have, it switches to the new interrupt handler rather than switching control back to the formerly running process.
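
Unix signals again provide a user-space analog of masking (a minimal sketch, assuming a Unix-like system): sigprocmask() blocks a signal for the duration of a critical section, much as an interrupt handler masks further interrupts, and a signal that arrives while blocked is held pending and delivered only after the mask is removed.

    /* Minimal sketch: masking an "interrupt" (SIGINT) around a critical
       section.  A Ctrl-C pressed while masked is held pending. */
    #include <signal.h>
    #include <stdio.h>

    int main(void)
    {
        sigset_t block, old;
        sigemptyset(&block);
        sigaddset(&block, SIGINT);                /* the signal to mask    */

        sigprocmask(SIG_BLOCK, &block, &old);     /* mask: Ctrl-C now held */
        printf("critical section: SIGINT is masked\n");
        /* ... work that must not be interrupted ... */
        sigprocmask(SIG_SETMASK, &old, NULL);     /* unmask: any pending   */
                                                  /* SIGINT is delivered   */
        return 0;
    }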

A second approach is to allow a higher priority interrupt routine to interrupt a lower priority interrupt routine. This means that the interrupt handler code may have to be reentrant. A routine is reentrant if the same code can be shared by several different users. This means that each instance of the routine must have its own local value of all variables, although the code itself is shared by all of the instances.
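
For example (a minimal sketch; the formatting routines are invented purely to illustrate the idea), a routine that keeps its state in a shared static buffer is not reentrant, while one that keeps all of its state in caller-supplied storage is:

    /* Minimal sketch: non-reentrant versus reentrant versions of the
       same (made-up) formatting routine. */
    #include <stdio.h>

    static char shared_buf[64];                /* shared by every caller */

    /* NOT reentrant: two interleaved calls would clobber shared_buf. */
    char *format_not_reentrant(const char *name)
    {
        snprintf(shared_buf, sizeof shared_buf, "user: %s", name);
        return shared_buf;
    }

    /* Reentrant: each caller supplies its own buffer, so the code can be
       shared safely by several simultaneous instances. */
    char *format_reentrant(const char *name, char *buf, size_t len)
    {
        snprintf(buf, len, "user: %s", name);
        return buf;
    }

    int main(void)
    {
        char buf[64];
        puts(format_not_reentrant("alice"));
        puts(format_reentrant("bob", buf, sizeof buf));
        return 0;
    }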

The interrupt handlers are an important component of the kernel, and typically run in kernel mode. They must be fast, because an operating system may process hundreds of interrupts each second. Essentially all switching from one state to another in the kernel is done with interrupts.

Pentium chips have 17 predefined interrupts. These tend to be serious exception conditions such as invalid opcode, general protection errors, floating point errors, or interrupts for debugging. There is room for 224 user defined interrupts, also called maskable interrupts. The operating system defines these in any way that it wishes. There is an INT (interrupt) instruction which allows a program to call any of the predefined or user defined interrupts.

The interrupt handlers are accessed through the Interrupt Descriptor Table (IDT). Each interrupt, whether predefined or defined by the operating system, has a number, which is used as an index into the table to locate the appropriate handler routine. There is an IRET (Return from Interrupt) instruction, which restores the registers for the process which was executing before the interrupt occurred, sets the mode flag if necessary, and returns control to the formerly running process.
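
To connect this to the system-call mechanism described earlier: on 32-bit x86 Linux, the kernel traditionally installed its system-call entry point at interrupt vector 0x80, so a program could enter the kernel with the INT instruction. A minimal sketch (this assumes a 32-bit x86 Linux build, e.g. gcc -m32, where system call number 4 is write):

    /* Minimal sketch: entering the kernel through a software interrupt. */
    #include <string.h>

    int main(void)
    {
        const char msg[] = "hello via int 0x80\n";
        long ret;
        __asm__ volatile (
            "int $0x80"                 /* trap through IDT entry 0x80    */
            : "=a" (ret)                /* return value comes back in eax */
            : "a" (4),                  /* eax = 4: the write system call */
              "b" (1),                  /* ebx = 1: standard output       */
              "c" (msg),                /* ecx = buffer address           */
              "d" (strlen(msg))         /* edx = byte count               */
            : "memory");
        return 0;
    }

Modern x86-64 systems use the faster SYSCALL instruction instead, but the idea is the same: a controlled, numbered entry point into kernel mode.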

Buses

A bus is a subsystem that transfers data, addresses, or control signals between components of a computer system. The simple schematic above had just one bus, but most modern systems have several different buses.

This system has eight buses.