CSCI.4210 Operating Systems Fall, 2009 Class 25
Operating System Design

Here is a description of the SSL vulnerability that several students have mentioned in class

A Vision of Computing From Microsoft's Future Thinker
CNN (12/03/09) Voigt, Kevin

Over the next 10 years, how people interact with computers will evolve drastically, with hand gesture controls becoming as common as keyboards, and file selection being determined by eye scans instead of mouse movements, predicts Microsoft chief research and strategy officer Craig Mundie. "Today, most people's interaction is through a screen--whether they touch it, type it, point or click, it's still just graphical user interface," Mundie says. "While that's very powerful and has a lot of applicability, I think it will be supplemented in dramatic ways by what we call a natural user interface." He says computers will soon be able to emulate the human senses of sight, hearing, speech, touch, and gesture, and combine them in multiple ways for people to interact with machines. The interactivity revolution will be fueled by new multiprocessor computers, which are expected to be widely available by 2012. Mundie says these new processors should provide a major performance gain, with performance in some cases increasing by a factor of 100. One of the first major commercial applications of the new interface technology is expected to be released next year when Microsoft launches its new line of Xbox gaming consoles, which will completely eliminate the need for handheld controllers. Mundie says the new gaming interface enables players to move and use gesture controls, with the system calculating in real time the angular position of the 22 major joints in the body. Mundie envisions a day when users will simply be able to talk to their computers about solving problems. "You should be able to describe the problem or the policy you want and the computer should be able to somehow implement that," he says.

Read the full article here

The Nature of the Operating System Design Problem

Designing operating systems presents some unique obstacles not encountered by designers of many other kinds of software systems.

Designing an Operating System

Most of Chapter 13 in the text discusses the general problem of designing large systems. This chapter is sort of a jumble of more or less disjoint topics, and I found it to be less informative than most of the rest of the book.

First, it is full of platitudes. A statement like
Everything should be as simple as possible, but no simpler
may be an amusing quip, but it is not very helpful to people who are actually designing operating systems or user interfaces.

One exception to my comment is Tanenbaum's first law of software
Adding more code adds more bugs
(and I would add that adding more code adds more security holes). Many commercial software developers seem to think that there is really no harm in adding lots of features that most people will never use, or which do not really provide any new functionality. However, each new feature has the potential of introducing new bugs or security holes, and so developers should only add features which actually improve functionality.

Second, it is easy for an academic to talk about general principles of good design, as if operating system designers were starting from scratch. In fact, it is unlikely that anyone in this course, or indeed anyone anywhere, will be designing a brand new operating system. All of the operating systems in use today have been around for a while, and evolved from earlier, simpler operating systems. The longer a system has been around, the more features have been added and the messier the design is. This evolutionary process will almost certainly continue, at least for commercial operating systems; an academic may design a new system from scratch to demonstrate some principles, but such a system is unlikely to get much attention from the non-academic world.

System structure

There are a number of different overall operating system structures.

There are several concepts that Tanenbaum discusses which all computer scientists should understand. Here are two.

Binding Time

The concept of binding time is crucial to understanding computer science. The basic question is when decisions get made about memory addresses, screen locations, and so on. There is a trend toward later binding times, both in language design and in operating systems.

The concept of binding time originated in programming language design. When a program declares a global variable, such as int x or char buffer[100], its size in memory is determined at compile time. Its actual memory location, expressed as an offset from the start of the memory segment, is also determined at compile time. This is called early binding.

However, local variables, also known as automatic variables in C, i.e. those which are local to a particular function, are only instantiated when the function is called. Their memory location is on the run time stack, and so it is not determined until run time. This is called late binding.

Here is another very clear and simple comparison of early vs late binding from the programming language realm. There are two ways that a programmer can declare an array of 100 chars.

char buffer[100];

or

     int n;
     char *buffer;

     n = 100;
     buffer = (char *)malloc(n); /* or buffer = new char[n] in C++ */

The former method is an example of early binding because the size of the buffer is determined when the program is compiled. The latter is an example of late binding because the size of the buffer is not determined until the program is actually running. The former is simpler, but the latter provides more flexibility.

An example of late binding in the operating system arena is placement of windows on the terminal. This decision is typically made dynamically by the operating system or window manager and is not determined until it is actually time to display the window on the screen.

A variant of this is called lazy creation. If an object is expensive to create, we want to avoid creating it until it is actually needed. If it is never needed, we avoid the overhead of creating it. For example, a program which uses a lot of windows may not actually create the windows until they are needed. The alternative is to create all of the windows when the program is loaded. With lazy creation, the program starts faster, but there may be a small performance penalty whenever a new window is created.
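
To make the idea concrete, here is a minimal sketch of lazy creation in C. The window type, its fields, and the sizes are made up, and a real window system would do far more work in the initialization step; the point is only that the expensive object is built on first use rather than at program start.

     #include <stdio.h>
     #include <stdlib.h>

     /* Hypothetical window type; a real one would have far more state. */
     struct window {
         int width, height;
     };

     static struct window *help_window = NULL;   /* not created at startup */

     /* Lazy creation: the window is built the first time it is needed. */
     struct window *get_help_window(void)
     {
         if (help_window == NULL) {               /* first use? */
             help_window = malloc(sizeof *help_window);
             help_window->width  = 640;
             help_window->height = 480;
             /* ... expensive initialization would go here ... */
         }
         return help_window;
     }

     int main(void)
     {
         /* The program starts quickly; the window is created only if and
            when the user actually asks for it. */
         struct window *w = get_help_window();
         printf("help window is %dx%d\n", w->width, w->height);
         return 0;
     }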

Yet another example involves the issue of static vs dynamic data structures. A static structure is of fixed size, and this size is set when the OS is compiled. Static structures are examples of early binding. Dynamic data structures grow (and sometimes shrink) as the need grows and shrinks. An example is the run time stack for a particular program or for the kernel. Another example would be to store kernel data in a linked list as opposed to a fixed sized array.
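
As a sketch of the difference, here are both approaches applied to a table of processes; the constant MAX_PROCS and the struct fields are made up. The static table is simpler, but its size is fixed at compile time, while the linked list holds exactly as many entries as there are processes.

     #include <stdio.h>
     #include <stdlib.h>

     /* Early binding: a table whose size is fixed when the kernel (or
        program) is compiled.  Wasteful if too large, a hard limit if
        too small. */
     #define MAX_PROCS 256
     struct proc { int pid; };
     struct proc proc_table[MAX_PROCS];

     /* Late binding: a linked list that grows at run time as processes
        are created (and shrinks as they exit). */
     struct proc_node {
         struct proc p;
         struct proc_node *next;
     };
     struct proc_node *proc_list = NULL;

     void add_proc(int pid)
     {
         struct proc_node *n = malloc(sizeof *n);
         n->p.pid = pid;
         n->next = proc_list;          /* push onto the front of the list */
         proc_list = n;
     }

     int main(void)
     {
         add_proc(1);
         add_proc(2);
         printf("most recently created pid: %d\n", proc_list->p.pid);
         return 0;
     }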

Reentrant functions

A function is reentrant if more than one instance of it can be running at the same time. For example, a thread function must be reentrant. In order to be reentrant, each instance of the call must keep its state in its own stack area (its parameters and local variables), and the function must not rely on shared global or static variables.

Many kernel functions must be reentrant. For example, on any system in which an interrupt can be interrupted by another interrupt, each interrupt handler must be reentrant.
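
Here is a small illustration in C; the function names are made up. The first version keeps its result in a static buffer, so two instances running at the same time (two threads, or a call interrupted by a handler that calls it again) clobber each other. The second uses only its parameters and local variables, so any number of instances can run concurrently. The standard strtok versus the POSIX strtok_r is a real-world example of the same distinction.

     #include <stdio.h>

     /* NOT reentrant: every caller shares this one static buffer. */
     char *format_pid_bad(int pid)
     {
         static char buf[32];
         snprintf(buf, sizeof buf, "pid=%d", pid);
         return buf;                     /* a second call overwrites buf */
     }

     /* Reentrant: the caller supplies the buffer, and the function uses
        only its parameters and local (stack) variables. */
     void format_pid_good(int pid, char *buf, size_t len)
     {
         snprintf(buf, len, "pid=%d", pid);
     }

     int main(void)
     {
         char a[32], b[32];
         format_pid_good(1, a, sizeof a);
         format_pid_good(2, b, sizeof b);
         printf("%s %s\n", a, b);        /* each call kept its own result */
         return 0;
     }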

Future Operating Systems

The entire history of computing is fraught with surprise developments, which makes it a little dicey to predict the future of operating systems, but I'll try anyway.

Internet Scale Operating Systems

It is a fairly safe bet that computing will continue to become more distributed. An article in the March 2002 issue of Scientific American (The Worldwide Computer by David P. Anderson and John Kubiatowicz) describes an Internet Scale Operating System in which many PCs all over the world in essence form a single OS. Whenever a particular PC is not being used by its owner (which is most of the time), it can be used as a compute server to do someone else's computing. Unused disk space can be used to store encrypted fragments of files belonging to other people. When the user sits down at the computer to work locally, the remote jobs are quietly migrated to some other unused computer.

A company called G.ho.st is developing a full Internet Scale Operating System. Users have accounts, a desktop, a remote distributed file system, and a suite of applications (word processor, spreadsheet, email, etc.). All you need is a browser.

This company has gotten a lot of publicity, not because of its product, but because it is a collaboration between Israelis and Palestinians.

Another example, not yet public, is Chrome OS by Google. Like G.ho.st, Chrome OS will offer a complete set of services (word processing, spreadsheets, a file system, audio and video players, email, messaging, and so on).

These are based on the observation that most people only use their computer to browse the Internet. The idea is that people can buy a cheap netbook (perhaps one with no hard drive). All the netbook needs is a browser, which is used to connect to the web OS. Nothing is stored on the local computer.

Microsoft is also getting into cloud computing with Windows Azure, a cloud services operating system that helps web developers, corporate developers, and systems integrators build new applications in the cloud. The Azure Services Platform is hosted in Microsoft data centers and provides an operating system and a set of developer services. It has an open architecture, so it gives developers the choice of building web applications, applications running on connected devices, servers, or PCs, or hybrid solutions offering the best of both online and on-premises software.

gOS is yet another entry in the Internet OS arena.

Here are some of the advantages of a web based virtual OS.

Here are some concerns about cloud operating systems.

Distributed Files

Distributed compute services are just one example of an ISOS; a second is distributed files. Fragments of large files, such as videos, could be stored on many different remote sites. This has the advantage that there can be a high degree of redundancy. Each fragment can be stored in a number of places so that if any one disk crashes, no data is lost, and there is no need to do systematic backups; the redundancy is built into the file system.

BitTorrent is a good example of this. BitTorrent is a Peer to Peer (P2P) Protocol for sharing very large files such as movies. P2P protocols are a major new development in the web. Traditionally, if someone wanted to get a document from the web, they would connect to a known web site and download the document using http. In P2P every host is both a client and a server. A particular host may be a client at one point in time, connecting with another host to download information, and a moment later it may be a server, dispensing information (perhaps the same information that it just obtained) to another host.

BitTorrent is a protocol developed by Bram Cohen to facilitate the sharing of very large files. Suppose someone has a video to share, and ten peers would like to download it. With a traditional client server protocol, the server would be a major bottleneck: it would send the entire file first to one peer, then to another, and this would be very slow. To solve this problem, the BitTorrent node breaks the file down into many small pieces (typically each piece is about 250KB). Peer 1 might get chunk 32, Peer 2 might get chunk 20, Peer 3 might get chunk 14, Peer 4 might get chunk 41, and so on. After the first round, each peer becomes both a downloader and an uploader. Peer 1 could get chunk 41 directly from Peer 4, Peer 2 could get chunk 32 from Peer 1, etc. At the same time, the server would continue to distribute new pieces to the peers. This is a dramatically faster way of getting large files out to many peers.

This process continues until each peer has the complete file.
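
The bookkeeping behind this can be sketched in a few lines of C: each peer records which pieces it already has, and when it talks to another peer it looks for a piece it lacks that the other peer can supply. This is only the idea, not the real protocol; actual clients exchange a packed bitfield message and use a rarest-first selection strategy, and NUM_PIECES here is made up.

     #include <stdio.h>

     #define NUM_PIECES 64     /* hypothetical; real torrents have many more */

     /* Return the index of a piece we lack that the other peer has,
        or -1 if this peer has nothing useful for us. */
     int pick_piece(const char have[], const char peer_has[])
     {
         for (int i = 0; i < NUM_PIECES; i++)
             if (!have[i] && peer_has[i])
                 return i;
         return -1;
     }

     int main(void)
     {
         char have[NUM_PIECES] = {0}, peer_has[NUM_PIECES] = {0};
         have[0] = 1;                        /* we already hold piece 0    */
         peer_has[0] = peer_has[5] = 1;      /* the peer holds pieces 0, 5 */
         printf("request piece %d\n", pick_piece(have, peer_has));  /* 5 */
         return 0;
     }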

The set of peers that are downloading (and uploading) the pieces of the file more or less simultaneously is called a swarm, and the original distributor of the file is called the seed.

Paradoxically, the more popular a file is, the faster each peer can download it because there are more peers in the swarm, and each has some pieces of the file to share.

BitTorrent is open source, and there are a number of implementations. There are also several BitTorrent search engines on the web.

Next Generation User Interfaces

The ideal OS is one that we never have to think about, an obedient Victorian-era servant who knows our wishes better than we do, takes care of all the petty details, never demands anything of us, and recedes transparently into the background except when called upon. In recent years, we've made great progress toward that goal. Wizards, platform-independent file handling, network access, intelligent agents, and multimedia capabilities will bring the OS of the future closer to that ideal.

The switch from a command-line, text-based interface to a WIMP GUI (Windows, Icons, Menus, Pointer graphical user interface), which occurred about 25 years ago, was a major breakthrough in operating systems. However, some people think that the current desktop metaphor is nearing the end of its useful life. The problem, as with a traditional desktop, is clutter; there is only so much information that a monitor can display.

People are beginning to look at three-dimensional representations of the desktop, on the assumption that a 3-D image can convey more information in the same number of pixels than a 2-D image can.

Some of the work with Next Generation User Interfaces involves artificial intelligence, i.e. trying to guess what users want before they finish specifying what they want. Microsoft software incorporates some of this, with word completion and the annoying (and since abandoned) Clippy, which popped up and said "It looks like you're writing a letter".

IBM is carrying this to the next level with a project called BlueEyes which senses what the user is about to do by using eye movements and other physical features. The assumption is that as computers become more invisible, pervasive, and location/context aware, a user interface paradigm shift from explicit user control to implicit user control will occur.

Augmented Reality

The next conceptual leap in user interface design is augmented reality, in which computer-generated content is intermixed with what you are actually seeing and sensing.

As you walk or drive around, information about your surroundings is sent to you through a device. If the device has a GPS and a camera, it can show information about restaurants or other sites in the area, projecting this onto the device's screen. If you are looking for a job, you can point the device to a building, and it can tell you which companies in that building are hiring.

The actual device might be some kind of goggles or other head-mounted device, a cell phone, or a more specialized device.


Augmented reality has lots of military and scientific uses as well.

Once face recognition software gets a little better, as you walk around campus, the names of students who are approaching you (and perhaps other information as well) can be displayed on your device.

From this, it is a relatively short step to storing all of this information. It is predicted that in a few years, everything that you say and do will be recorded and stored, and it will be searchable and retrievable.

Advances in CPU architecture

There are two recent advances in CPU architecture which have the potential for major changes in operating system design, but which have not yet been fully exploited. One is the use of multiple cores (i.e. multiple CPUs) on a single chip. We discussed some of the OS implications of this earlier, but to date, traditional operating systems such as Windows and Linux do not make full use of this because implementation is so complex.

Processors are moving from 32 bit to 64 bit word sizes, and memory addressing either has already moved or will soon be moving to 64 bit address spaces. This introduces some new possibilities. For example, it is possible to keep the entire file system in memory (at least in virtual memory).
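
One way to picture this is with the POSIX mmap call, which maps a file into a process's virtual address space so that pages are read from disk only when they are touched. The sketch below (the file name bigfile.dat is made up) maps an entire file and reads one byte; with a 64-bit address space, even enormous files can be mapped this way, which is the mechanism behind the "file system in memory" idea.

     #include <fcntl.h>
     #include <stdio.h>
     #include <sys/mman.h>
     #include <sys/stat.h>
     #include <unistd.h>

     int main(void)
     {
         int fd = open("bigfile.dat", O_RDONLY);   /* hypothetical file */
         if (fd < 0) return 1;

         struct stat st;
         fstat(fd, &st);

         /* Map the whole file into the (64-bit) virtual address space.
            No data is read from disk until a page is actually touched. */
         char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
         if (data == MAP_FAILED) return 1;

         printf("first byte: %d\n", data[0]);      /* page fault loads it */

         munmap(data, st.st_size);
         close(fd);
         return 0;
     }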

Operating systems for different platforms

This course has only talked about operating systems for more-or-less traditional computers, but in the future there will be computers in a much wider variety of appliances. In a few years it will not be far-fetched to talk about the operating system in your car or even in your refrigerator.

The best example of this is smart phones. In olden days, telephones were used to make phone calls, but modern cell phones have a complete operating system inside.

One example is the Symbian OS. (See Chapter 12 of the text.) This runs on the ARM architecture, a 32-bit RISC family of processors designed by ARM Holdings. In some ways it is similar to the operating systems that we have discussed in this course: it has a kernel, programs run in user space, it has device drivers and a file system modeled after the Microsoft FAT-32 file system, and it supports multiple concurrent processes and threads with a preemptive process/thread scheduler.

Symbian does not support true virtual memory. There is an MMU (Memory Management Unit for those of you who have forgotten week 5) which maps logical pages to physical pages, but processes are loaded into contiguous memory, and must be completely loaded before the process runs. This limits the size of a process, but it also ensures that there will never be page faults.

Symbian uses a microkernel architecture. A microkernel is a minimal operating system kernel which provides no operating-system services itself, only the mechanisms needed to implement such services, such as low-level address space management, thread management, and inter-process communication (IPC). The microkernel is the only part of the system executing in kernel mode. The actual operating-system services are provided by "user-mode" servers; these include device drivers, protocol stacks, file systems, and user-interface code.
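
The structure can be sketched as a user-mode server whose whole life is a receive/handle/reply loop over IPC. The message format and the ipc_receive/ipc_reply calls below are hypothetical stand-ins (with stub implementations so the sketch runs), not Symbian's actual API; the point is that a service such as a file server is an ordinary user-mode process rather than code inside the kernel.

     #include <stdio.h>

     enum msg_type { MSG_OPEN, MSG_READ, MSG_CLOSE, MSG_NONE };

     struct message { enum msg_type type; int client; };

     /* Stub IPC primitives so the sketch runs by itself; in a microkernel
        system these would be the kernel's message-passing mechanism. */
     static struct message queue[] =
         { {MSG_OPEN, 1}, {MSG_READ, 1}, {MSG_CLOSE, 1}, {MSG_NONE, 0} };
     static int next_msg = 0;

     struct message ipc_receive(void) { return queue[next_msg++]; }
     void ipc_reply(int client, int status)
     {
         printf("reply to client %d: status %d\n", client, status);
     }

     int main(void)              /* a user-mode "file server" */
     {
         for (;;) {
             struct message m = ipc_receive();      /* wait for a request */
             if (m.type == MSG_NONE) break;         /* end of stub queue  */
             switch (m.type) {
             case MSG_OPEN:  /* ... open the file ...            */ break;
             case MSG_READ:  /* ... copy data back to client ... */ break;
             case MSG_CLOSE: /* ... release it ...               */ break;
             default: break;
             }
             ipc_reply(m.client, 0);                /* send the result */
         }
         return 0;
     }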

The Symbian OS is completely object oriented. For example, instead of calling open to return a file descriptor, Symbian creates a file object and calls its open method.

Security is an important issue with smartphones. These are single user devices that do not require login. Installing applications requires authorization but not authentication: the system asks the user for permission to install any new app (a defense against viruses and worms).

Each software developer is now responsible for verifying their own software through a process called signing. The developer obtains a certificate from a trusted third party. Once the software is complete, it has to be submitted to a trusted third party to confirm that it does not do anything malicious.

Android

Google recently introduced Android, an open source operating system for mobile devices. The kernel is Linux based (version 2.6). It provides core system services such as security, memory management, process management, the network stack, and device drivers, but otherwise Android does not look much like Linux.

Here is the Android Architecture.

The bottom layer is the Linux kernel.

The next layer is the native libraries. The components include the standard C library, SQLite, WebKit, OpenGL ES, and the media framework.

All apps have to be written in Java, but instead of being compiled into Java bytecode for the JVM (Java Virtual Machine), they are compiled into bytecode (.dex files) for a virtual machine called Dalvik, which is similar in concept but optimized for a minimal memory footprint and for conserving battery life.

Every app runs in its own process (which supports multiple threads), with its own instance of the Dalvik VM.

The next layer is the Application Framework.