The previous chapter gave you a pretty good idea about where Windows NT came from as well as some of its design objectives, such as scalability, portability, reliability, and compatibility. To meet these objectives, Microsoft had to first design a robust core that could handle not only today's needs, but could also be easily extended to support future needs.
This chapter discusses NT's architecture and how it is leveraged to achieve these goals.
An Overview of Windows NT's Architecture
The two major buzzwords to remember when talking about Windows NT's architecture are modular and client/server.
Modular means that the core internals are broken down into small, discrete units that serve clear and well-defined purposes, as shown in Figure 2.1. Modularity is a very desirable goal in all aspects of computer programming, and operating systems are no exception. Modular code is much easier to maintain because each module has a clearly understood purpose, and entire modules can be replaced without affecting the routines that rely on them for services.
The modular design concept contrasts with the monolithic design methods used more often by earlier operating systems. In the monolithic design, the operating system runs in a privileged processor mode (discussed shortly) and blocks of code often provide many functions with little clear delineation, as shown in Figure 2.2. This allows for smaller and tighter code, but also makes the system less adaptable.
When people hear that Windows NT is a client/server operating system, they often assume this refers to NT's capability to be used in client/server database or network systems. While NT is an excellent choice for these applications, this is not what is meant when referring to NT's architecture. What is meant is that the internal pieces of NT communicate based on a client/server paradigm. More specifically, client/server refers to the organizational layout of NT's modular components, as shown in Figure 2.3. When a piece of code needs something, it is considered the client. The piece of code that fulfills the request is the server. For example, a user program that needs to draw a picture on the screen is a client. It uses a clearly defined message to ask another piece of code (in this case probably the Win32 subsystem) to draw the picture. The Win32 subsystem in this case is the server. Thus, client/server.
One thing that NT shares with most advanced operating systems today is the division of operating system tasks into multiple categories, which are correlated with actual modes supported by the microprocessor. Most microprocessors today support multiple modes, sometimes called rings, where programs can run. These modes provide the programs running inside them with different levels of privilege to access hardware and other programs running on the system. Windows NT uses a privileged mode and an unprivileged mode, usually referred to as kernel mode and user mode, respectively. Components that run in kernel mode have direct access to hardware and software resources on the system. Under Windows NT, only key pieces of the operating system are permitted to run in kernel mode. This is done to ensure the security and stability of the system. The NT Executive, which includes the microkernel, the hardware abstraction layer, and device drivers, is the only piece of Windows NT that runs in a processor's privileged kernel mode.
Kernel-mode applications are protected from accidental or intentional tampering by the actual design of the microprocessor, while user-mode applications are protected from each other by the design of the operating system.
All programs not running in kernel mode run in user mode. The majority of code on Windows NT runs in user mode, including the environment subsystems, such as the Win32 subsystem and POSIX subsystem, and all user applications. These programs only have access to their own 32-bit memory addresses and interface with the rest of the system through client/server messaging, which will be described later.
With Windows NT, the creators tried to run as much of the operating system as possible in user mode. This helped to ensure the stability and security of the system, while at the same time simplified their job when they had to make modifications to underlying components.
Windows NT 4.0 brings a major architectural change to the NT operating system. Microsoft has moved two of the major code sections, USER and GDI, into the NT Executive, which runs in kernel mode. While this was done to increase performance and lower overhead, some people argue that the penalty will be reduced reliability. More on this design change is discussed later in this chapter.
NT Architecture Components
In order to understand how and why Windows NT works, it is important to take a look at the different pieces of the operating system and how they interact. Now that we understand a little about the premises behind NT, let's delve a little deeper. Figure 2.4 shows the major layers of Windows NT and their logical relationships.
The four major pieces of the NT architecture follow:
Hardware Abstraction Layer (HAL)
Microkernel
NT Executive Services
Protected Environment Subsystems
Each piece in this model plays a well-defined role in making Windows NT work.
Hardware Abstraction Layer
The Hardware Abstraction Layer (HAL) is a software interface between the hardware and the rest of the operating system. The HAL is implemented as a dynamic-link library (DLL) and is responsible for shielding the rest of NT from hardware specifics such as interrupt controllers and I/O interfaces. This abstraction makes NT more portable because the rest of the operating system does not care what physical platform it is running on. Each hardware platform that NT runs on requires a specialized HAL. The design intent is that when NT is ported to a new processor architecture, the HAL gets rewritten for the new processor, but the rest of NT can simply be recompiled, thus making NT extremely portable.
The HAL also provides the interface for symmetric multiprocessing (SMP). NT Server ships with two HALs for each processor architecture (Intel, MIPS, PowerPC, and Alpha). The first HAL is used for supporting a single processor, while the second HAL supports up to four processors. Additional HALs are available from hardware vendors and can provide support for up to 32 processors on NT Server.
For each physical processor that resides in the computer, the HAL presents a virtualized processor to the microkernel. The intent is that this virtualized processor hides all the special characteristics of the processor itself from the operating system. For example, if you had two multiprocessor systems, one running with Intel Pentium processors and the other running with DEC Alpha AXP processors, the HALs on each system would be different, but the virtualized processor that the HAL presents to the microkernel would be identical in both cases. On an SMP system, for each physical processor in the system, the HAL presents one virtualized processor to the microkernel, as shown in Figure 2.5, which represents a three-processor Intel Pentium system.
Although the intent of the HAL is to reduce the amount of hardware dependencies and make NT more portable, in reality, it's not always quite so simple, but by minimizing the dependencies on physical hardware characteristics the designers of NT have reduced the time and effort needed to move the operating system to a new platform.
During the initial development phase for Windows NT, all initial coding was done on hardware powered by Intel's i860 RISC chip. However, because of dwindling support for the chip by Intel, as well as design problems encountered during development, the chip was abandoned and the development effort was moved to the MIPS chipset with minimal problems. This is a perfect example of the portability of the NT operating system.
The HAL can only be accessed by components of the NT Executive, and is never called directly by user-mode programs. Also, the HAL is intended to be the only piece of software on an NT system that is permitted to talk directly to the hardware. The advantage is that rogue programs cannot purposefully or accidentally write information to the hardware and cause a system crash. Also, preventing programs from reading information directly from the hardware helps to support NT's security model.
Although the goal in Windows NT is to have all hardware-related calls go through the HAL, the reality is that a small number of device driver and kernel calls bypass the HAL and directly interact with the hardware.
The downside of the HAL model is that it is the biggest single cause of incompatibility with older DOS and Windows programs, which were in the habit of reading and writing directly to hardware. However, this incompatibility is a small price to pay for the protection and portability afforded by the HAL.
The Microkernel
The kernel in Windows NT is like the President of the United States. The kernel is ultimately responsible for all actions on the system and almost all functions on the system pass through the kernel. Windows NT uses a microkernel, which essentially means that the kernel was pared down to the basics necessary to function.
Do not confuse the kernel, or microkernel, with kernel mode. While they are related, they are not the same thing. The kernel is a discrete piece of code that makes up the core of the operating system. Kernel mode is a privileged state of operations supported by the microprocessor. In Windows NT, the microkernel runs in kernel mode, which means that it runs in a privileged processor mode, where the microprocessor is responsible for protecting the kernel from harm.
This microkernel design in Windows NT assigns many of the functions normally assigned to the kernel in traditional operating systems to a group of programs called the NT Executive. The NT Executive, of which the NT microkernel is a part, runs in the processor's privileged kernel mode. The NT microkernel communicates with the NT Executive through a set of low-level operating system primitives.
Threads and processes are defined later in this chapter in a section called Process Manager.
The major role of the kernel in Windows NT is to dispatch and schedule threads. A thread is a code segment belonging to a particular process. Each thread is assigned a priority number from 0 to 31. The kernel dispatches threads to run on available processors based on their priority numbers. The kernel then allows each thread to execute for a particular amount of time before preempting it and allowing another thread to run.
Sometimes you see it written that the kernel schedules processes. While this is not technically correct, it is commonly stated this way for ease of explanation. The kernel does not actually schedule processes, it only schedules threads in the context of a process. For more on the distinction between processes and threads, see the section Process Manager, later in this chapter.
It is this procedure that makes preemptive multitasking possible. Because it is the kernel that schedules the execution of all code on the system, it cannot be preempted. It also cannot be paged to disk for any reason.
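To make the scheduling idea concrete, here is a small sketch, in Python, of a dispatcher that always selects the highest-priority ready thread. The thread names and structures are purely illustrative; the real NT kernel adds priority classes, dynamic priority boosts, and per-processor state on top of this basic rule.

```python
# Toy model: each thread has a priority from 0 (lowest) to 31 (highest),
# and the dispatcher always runs the highest-priority ready thread.

def dispatch(ready_threads):
    """Return the name of the highest-priority ready thread, or None."""
    if not ready_threads:
        return None
    # max() returns the first maximal element, so ties go to the thread
    # that appears earliest in the ready list.
    return max(ready_threads, key=lambda t: t["priority"])["name"]

threads = [
    {"name": "spooler",   "priority": 8},
    {"name": "ui_thread", "priority": 13},
    {"name": "idle_task", "priority": 0},
]
print(dispatch(threads))  # ui_thread is dispatched first
```

When the running thread's quantum expires, the kernel simply repeats this selection over the threads that are now ready.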
On a multiprocessor system, a copy of the kernel actually runs on each processor. These kernel copies are used to maintain coherency of shared system resources that need to be accessed by threads running on all processors.
The kernel is also responsible for handling system interrupts from physical devices such as I/O devices, processor clocks, or timers. Normally, when there is a system interrupt, the kernel will preempt a running thread to process the interrupt.
Additionally, the kernel handles processor exceptions. These exceptions occur when the processor is made to do something it doesn't permit, such as writing to a locked portion of memory or dividing by zero.
The final use of the kernel in Windows NT is to provide support for power failure recovery. If the NT system is equipped with an intelligent uninterruptible power supply (UPS), the kernel is notified when a power failure is detected. The kernel then coordinates an orderly shutdown of the system, which includes notifying I/O devices of the power failure and allowing them to reset accordingly.
Because the kernel is involved in almost every action taken on an NT system, critical portions of the kernel are written in assembly language. This ensures that it can run as fast and efficiently as possible. For this reason, kernel optimization is a critical factor of performance when NT is ported to different architectures.
The NT Executive
Continuing the analogy that the NT kernel is like the President of the United States, the NT Executive is like his direct staff. (The President is the head of the Executive branch, much like the kernel is the head of the NT Executive.) The NT Executive takes care of the important tasks that are vital to the entire system, but the kernel is too busy to address directly.
A clear, concise definition is that the NT Executive provides the fundamental operating system services that are made available to all other applications running on the system. This includes services such as object management, virtual memory management, I/O management, and process management.
Remember, the NT kernel is actually part of the NT Executive.
The NT Executive runs exclusively in kernel mode and is called by the protected environment subsystems when they need services. Because of the hierarchy of Windows NT, user applications do not call pieces of the NT Executive directly, but rather request services from the environment subsystems, such as the Win32 and POSIX subsystems, which then in turn call the NT Executive components.
There are functions inside the NT Executive that are not exposed by existing API sets. This is because the designers of Windows NT tried to include hooks, or placeholders, in the operating system to provide room for future growth.
Aside from the kernel itself, the major pieces of the NT Executive are as follows:
Object Manager
Process Manager
Virtual Memory Manager
Local Procedure Call Facility
Security Reference Monitor
I/O Manager
Let's take a few moments to look at these other pieces of the NT Executive and see what they do and how they interact.
Object Manager
The Object Manager piece of the NT Executive is used to create, modify, and delete objects used by all the systems that make up the NT Executive. Objects are abstract data types that are used to represent operating system resources. The Object Manager also provides information on the status of objects to the rest of the operating system.
Objects can be concrete, such as a device port, or they can be more abstract, such as a thread. When an object is created, it is given a name by which other programs can access the object. When another process wants to access the object, it requests an object handle from the Object Manager. The object handle provides a pointer that is used to locate the actual object, as well as access control information that tells how the object can be accessed. This access control information is provided by the NT security subsystem.
The Object Manager also makes sure an object does not consume too many resources (usually system memory) by maintaining quotas for different object types.
In addition, the Object Manager is responsible for cleaning up orphaned objects that seem to have no owner. This is known as garbage collection. Lack of a similar facility in Windows 3.x was a major cause of trouble. In Windows 3.x, if a program crashed, or if it didn't handle system resources properly, the system resources it consumed would not be properly returned to the available system pool, resulting in an error message about the lack of system resources. In effect, this was a memory leak.
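To illustrate the bookkeeping described above, here is a small Python sketch of an object table with per-type quotas and reclamation of objects once their last handle is closed. All names and structures here are invented for illustration; they are not NT's actual internal data structures.

```python
# Toy object manager: named objects, handles that reference them,
# a per-type quota, and garbage collection when the last handle closes.

class ObjectManager:
    def __init__(self, quota_per_type=3):
        self.objects = {}            # name -> {"type": ..., "refs": int}
        self.quota = quota_per_type
    def create(self, name, obj_type):
        in_use = sum(1 for o in self.objects.values() if o["type"] == obj_type)
        if in_use >= self.quota:     # enforce the resource quota
            raise RuntimeError("quota exceeded for type " + obj_type)
        self.objects[name] = {"type": obj_type, "refs": 0}
    def open_handle(self, name):
        self.objects[name]["refs"] += 1
        return name                  # the "handle" is just the name here
    def close_handle(self, handle):
        obj = self.objects[handle]
        obj["refs"] -= 1
        if obj["refs"] == 0:         # last handle gone: garbage-collect
            del self.objects[handle]

om = ObjectManager()
om.create("LPT1", "device")
h = om.open_handle("LPT1")
om.close_handle(h)
print("LPT1" in om.objects)  # False: the object was reclaimed
```

The key point is the reference count: as long as some process holds a handle, the object survives, and when the count drops to zero the resources go back to the system pool, which is exactly the facility Windows 3.x lacked.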
Process Manager
The Process Manager is responsible for creating, removing, and modifying the states of all processes and threads. It also provides information on the status of processes and threads to the rest of the system.
Note: A process, by definition, includes a virtual address space, one or more threads, a piece of executable program code, and a set of system resources. A thread is an executable object that belongs to a single process and contains a program counter, which points to its current position in the process's executable code segment, two stacks, and a set of register values.
The Process Manager, like all members of the NT Executive, plays a vital role in the operation of the entire system. When an application is started, it is created as a process, which requires a call to the Process Manager. Because every process must have at least one thread, the Process Manager is invoked again to create a thread, as represented by the flow diagram in Figure 2.5.
Flow diagram showing the calls to the Process Manager when an application is started.
The Process Manager is used to manage threads, but it does not have its own set of policies about how and when processes and threads should be scheduled. These policies are determined by the microkernel itself.
Virtual Memory Manager
The Virtual Memory Manager (VMM) provides management of the system's virtual memory pool. Virtual memory is a scheme that allows disk storage to supplement physical system memory, moving pages out to disk when they are not in use and retrieving them when they are needed. This is an integral piece of Windows NT, which allocates a 32-bit address space to each process regardless of the actual amount of physical memory in the system.
Each process is allocated a 4GB virtual memory space. Of this space, the upper 2GB is reserved for system use, while the lower 2GB is available for the process's use. The process addresses memory as if it were the only process on the system. The Virtual Memory Manager is responsible for translating the process's memory addresses into actual system memory addresses. If the process's memory address refers to a piece of memory that has been paged to disk, the VMM retrieves the page from disk.
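The address-space split can be sketched in a few lines of Python. The constants below follow the 2GB/2GB layout just described and assume the common 4KB page size; this is an illustration of the arithmetic only, not NT's actual page-table code.

```python
# The 32-bit address space: addresses below 0x80000000 (2GB) belong to
# the process; addresses at or above it belong to the system.

PAGE_SIZE = 4096             # assumed 4KB pages
KERNEL_BASE = 0x80000000     # the 2GB boundary

def is_user_address(va):
    return 0 <= va < KERNEL_BASE

def page_and_offset(va):
    """Split a virtual address into its page number and page offset."""
    return va // PAGE_SIZE, va % PAGE_SIZE

print(is_user_address(0x00401000))  # True: a typical user-mode address
print(is_user_address(0xC0000000))  # False: system space
page, offset = page_and_offset(0x00401234)
print(hex(page), hex(offset))       # 0x401 0x234
```

The VMM's job is then to map each (page, offset) pair to a physical page, or to fetch the page from disk if it has been paged out.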
Local Procedure Call Facility
The Local Procedure Call (LPC) Facility is integral to the client/server design of Windows NT. It is the interface between all client and server processes running on a local Windows NT system.
The LPC structure is very similar to remote procedure calls (RPCs), except that it is optimized for, and only supports, communication between client and server processes on a local machine. More specifically, the LPC is a mechanism that enables two threads in different processes to exchange information.
Remember we said that the Win32 subsystem is a user-mode application that runs in its own memory space. When a program wants to communicate with the Win32 subsystem to request services, it calls a stub function from the appropriate DLL file. This stub function then uses the LPC facility to pass the request to the Win32 subsystem process, which performs the requested action and returns any completion message through the LPC facility.
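The stub-and-message flow just described can be modeled with a toy example in Python. The message fields and the DrawText handler here are invented for illustration; NT's real LPC ports and message formats are internal to the operating system.

```python
# Toy model of the client stub / server pattern: the stub packages the
# call as a message, the "server" dispatches it and returns a
# completion message.

def win32_stub(api_name, *args):
    """Client-side stub: package the call and 'send' it to the server."""
    request = {"api": api_name, "args": args}
    return subsystem_server(request)      # stand-in for the LPC send

def subsystem_server(request):
    """Server side: look up a handler and return a completion message."""
    handlers = {"DrawText": lambda text: len(text)}  # illustrative handler
    result = handlers[request["api"]](*request["args"])
    return {"status": "ok", "result": result}

reply = win32_stub("DrawText", "hello")
print(reply)  # {'status': 'ok', 'result': 5}
```

The calling program never talks to the server directly; it sees only the stub, which is why the subsystems can live in separate address spaces.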
Security Reference Monitor
The Security Reference Monitor (SRM) is the bedrock of all security on a Windows NT system and is responsible for enforcing all security policies on the local computer.
It does this by working together with the logon process and local security authority subsystems. When a user logs onto the Windows NT system and his or her credentials are verified, the logon process subsystem requests a security access token (SAT) for the user. The SAT contains a list of the user's privileges and group memberships, and serves as a key for that user during this logon session. Whenever the user wants to do something, the SAT is presented and used to determine whether the user can perform that action.
This is where the SRM works closely with the Object Manager. Each time a user tries to access an object, the Object Manager creates a handle for accessing the object and calls the SRM to determine the level of access to be granted by the handle. The SRM uses the information contained in the user's access token and compares it to the access control list on the object to see if the user should be granted the requested level of access to the object. In this way, the SRM has control over the security of all object access in Windows NT.
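A simplified Python sketch of this access check follows. The token and ACL structures are invented stand-ins; NT's real security descriptors use SIDs, access masks, and ordered access control entries, including explicit deny entries, so treat this only as the shape of the comparison.

```python
# Toy access check: grant access if any group in the user's token
# appears in an ACL entry that allows the requested access.

def access_check(token, acl, requested):
    for group, allowed in acl:        # each entry: (group, allowed set)
        if group in token["groups"] and requested in allowed:
            return True
    return False

token = {"user": "pat", "groups": {"Users", "Operators"}}
acl = [("Administrators", {"read", "write", "delete"}),
       ("Users", {"read"})]

print(access_check(token, acl, "read"))    # True: Users may read
print(access_check(token, acl, "delete"))  # False: no matching entry
```

Because the check happens once, when the handle is created, subsequent operations through that handle do not need to repeat the full comparison.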
I/O Manager
The I/O Manager is responsible for coordinating and processing all system input and output. It oversees the device drivers, installable file systems, network redirectors, and the system cache.
The I/O Manager takes care of the black magic that is often necessary to make various devices talk to each other and live together in peace. It replaces the traditional monolithic method of designing I/O drivers with a layered approach that supports mixing and matching of components as necessary.
Protected Environment Subsystems
Two of the design goals of Windows NT were personality and compatibility. These are both achieved through the protected environment subsystems.
Personality essentially means that Windows NT exposes multiple sets of application programming interfaces (APIs) and can effectively act as if it were a different operating system. Windows NT comes with a POSIX and OS/2 personality in addition to its Win32, Win16, and DOS personalities.
Although multiple personalities are considered a bad thing in people, in operating systems they provide an effective way for the system to maintain compatibility. Windows NT would not have been such a success if it had been completely unable to run any existing DOS and Windows software.
In Windows NT, there are three protected environment subsystems:
Win32
OS/2
POSIX
Although you might see the Win16 and DOS personalities included in a list of protected environment subsystems, they are actually both part of the Win32 subsystem.
The protected environment subsystems act as mediators between the user-level applications and the NT Executive.
Remember we said that the NT Executive and all its components live in kernel mode, while essentially everything else lives in user mode. This includes all environment subsystems, which function completely in user mode. When an application makes a call to an environment subsystem, it is passed through a system services layer to the NT Executive.
Each environment subsystem keeps track of its own processes and works independently of the other subsystems. Each application can run only in the subsystem for which it was designed. When you launch an application in Windows NT, it looks at the image header for the file and determines which subsystem to run the application in.
Let's take a look at how each of these subsystems work.
The Win32 Subsystem
Win32 is the native and primary subsystem for Windows NT. The basis for this subsystem is the Win32 set of APIs, which were written during the development of the NT product. Many of these APIs are direct extensions of their Win16 counterparts.
Win32 is the name of both the API and the NT subsystem that services Win32 API-related calls.
For the first year and a half of its design, the OS/2 Presentation Manager was scheduled to be the default and primary subsystem for Windows NT. However, with the success of Windows 3.x, Microsoft decided to use the Windows interface and related APIs as the primary personality.
In the client/server model we discussed previously, the Win32 subsystem acts as a server for all the other environment subsystems supported on Windows NT. The other environment subsystems act as clients and translate their API calls into the appropriate Win32 APIs, which get serviced by the Win32 subsystem.
The Win32 subsystem is responsible for all user input and output. It owns the display, keyboard, and mouse. When other subsystems, such as OS/2 or POSIX, need to take advantage of these devices, they request the services from the Win32 subsystem.
When originally designing the Win32 subsystem, NT's creators tried to make its overall functioning as close as possible to Windows 3.x. This resulted in a design with five major pieces: the window manager (often called USER), the graphics device interface (GDI), the console, operating system functions, and Win32 graphics device drivers, as shown in Figure 2.6.
However, in Windows NT 4.0 the organization of the Win32 subsystem has changed. You'll notice in the preceding figure that the graphics device interface (GDI) and window manager (USER) are included inside the Win32 subsystem.
Because we've already identified that all other subsystems use the Win32 API for user input and output, the NT team removed the GDI and USER sections from the Win32 subsystem and moved them into the NT Executive.
This helped to reduce the overhead for all processes on the system that took advantage of any services requiring these code servers.
There has been a lot of speculation about what this move has done to the stability of Windows NT. Many people criticize this move arguing that allowing more code to run in kernel mode increases the likelihood of system crashes, because kernel-mode processes have access to system resources.
I don't think this is true. With the original model, where the GDI and USER were in the Win32 subsystem, all programs that relied on GDI or USER services would fail to respond if there were problems, which resulted in the entire user interface locking up.
However, by moving these two pieces entirely into the NT Executive, the kernel can keep an eye on them. If they fail to respond, rather than having the system lock up, the kernel can issue a bug check (the NT blue screen of death), and allow the system to reboot.
In most instances, this is a more desirable result because allowing the NT kernel to reboot the system when an essential service fails is considerably better than having the system lock up and be completely unusable.
There are people who argue the pros and cons for both methods. However, the ultimate decision about the stability of this new model will be decided by time and testing of Windows NT 4.0.
MS-DOS and Win16
One of the keys to success for Windows NT was the capability to run most Windows 3.x and DOS applications. During the initial development period for Windows NT, there were some mixed feelings between the design team and Microsoft management as to whether or not Windows NT should be able to run these programs.
Microsoft management recognized that if Windows NT was not backwards compatible, users would have to make a tremendous investment to upgrade their current software. This alone would make Windows NT prohibitively expensive. So the decision was made to support the 16-bit Windows programs as well as DOS applications.
The decision to support these personalities was easily accommodated by NT's robust design.
Some of the goals were as follows:
to enable DOS programs to run without modification
to provide the capability to run the majority of 16-bit Windows applications without modification
to protect the system and other 32-bit applications from interference from the 16-bit and DOS programs
to enable the RISC platforms to run 16-bit Windows and DOS programs
to provide a mechanism for sharing of data between 32-bit and 16-bit Windows programs
Many people think of Windows 3.x as an operating system. Technically, it is not a true operating system, but rather a user interface that sits on top of DOS, the true operating system.
So, the first step in providing compatibility was to create a DOS environment. The DOS environment in Windows NT is called the virtual DOS machine (VDM), also referred to as the NTVDM. The VDM is a full 32-bit user-mode application that requests services from the Win32 subsystem and occasionally directly from the NT system services layer. It is based on DOS 5.0 and provides compatibility as such.
Windows NT enables you to run as many DOS applications as you want, and each application runs in its own VDM. Because the VDMs are nothing more than normal processes under Windows NT, they are preemptively multitasked along with other processes on the system. Therefore, it can be said that Windows NT allows for the preemptive multitasking of DOS programs.
One of the additional features of the VDM is that it gives the user over 620KB of free "conventional" memory. The miraculous thing about this is that it also gives the DOS application full mouse, network, and CD-ROM support. This is more free memory than you could ever hope to get on an equivalent DOS system with the same services loaded.
Much as Windows 3.x relies on the services provided by DOS, the Win16 subsystem on Windows NT relies on the Windows NT VDM. The 16-bit Windows emulator that runs in a VDM is called WOW, which stands for Windows on Win32. Because it lives inside the VDM, it requests most of its services from the VDM the same way that standard Windows 3.1 requests services from DOS. The VDM then converts most of these calls directly into calls that are sent to the Win32 subsystem.
When a 16-bit Windows program makes a Win16 API call, the WOW subsystem uses a process called thunking to convert this call to an equivalent Win32 API call, which is then passed through to the Win32 subsystem. Likewise, when data from a Win32 call needs to be returned to a Win16 application, it must also be thunked.
Thunking is necessary because there must be a standard set of rules for converting from 16-bit data formats to 32-bit formats and vice versa. Going from 16-bit to 32-bit is easy because you simply pad the extra 16 bits with zeros. However, going from 32-bit to 16-bit by simply dropping 16 bits would surely result in data loss. The thunking process is actually performed inside the Win32 subsystem, as shown in Figure 2.7.
Thunking converts 16-bit API calls to 32-bit API calls, and vice-versa.
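The width conversion at the heart of thunking can be shown in a few lines of Python. Real thunks also translate calling conventions and pointer formats, so this is only the data-width half of the story.

```python
# Widening a 16-bit value to 32 bits is lossless zero-extension;
# narrowing a 32-bit value back to 16 bits silently drops the high bits.

def widen_16_to_32(value16):
    return value16 & 0xFFFF              # high 16 bits are simply zero

def narrow_32_to_16(value32):
    return value32 & 0xFFFF              # anything above bit 15 is lost

print(hex(widen_16_to_32(0x1234)))       # 0x1234: no information lost
print(hex(narrow_32_to_16(0x00051234)))  # 0x1234: the 0x0005 is gone
```

This is why the 32-to-16 direction needs explicit rules: the thunk layer must guarantee that the values it hands back to a Win16 program actually fit in 16 bits.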
Because the Windows 3.x environment used a shared memory model, many programs were written to expect, and even require, this shared address space. To help maintain compatibility with these applications, all 16-bit Windows programs are run in a single, shared VDM.
The WOW subsystem is not multithreaded, and 16-bit Windows applications are cooperatively multitasked against each other, just like they would be on a real Windows system.
Okay. I lied. Each time you launch a 16-bit Windows application, a new thread is created, so effectively the WOW is multithreaded. However, the microkernel schedules these threads in a different way than the rest of the threads on the system.
Normally, the NT kernel schedules threads based on priority. When a running thread is preempted, the kernel passes control to the next thread based on priority. The kernel treats the WOW threads differently. When it preempts the currently running WOW thread, it runs other threads just as it normally would. However, it will not schedule execution for any other WOW thread until this WOW thread gives up control.
So while the WOW actually has more than one thread, they are not actually taken advantage of. Remember, this is not a design flaw, but was done very carefully to maintain compatibility with existing 16-bit software which would break if it were preemptively multitasked against its fellow 16-bit applications.
Also, because all the applications in the WOW run in a single, shared memory space, if one 16-bit Windows application fails, it can cause all the 16-bit Windows applications running in that WOW subsystem to fail. However, it would not in any way affect the system itself, or any 32-bit applications running on it.
Remember the difference: all DOS programs are run in separate VDMs and are preemptively multitasked. Sixteen-bit Windows applications, on the other hand, are run in a single VDM and are cooperatively multitasked against each other, but preemptively multitasked against the rest of the system.
Because the DOS and Windows 3.x applications make heavy use of Intel assembly language code, it was tricky to get these programs to work unmodified on RISC systems supported by Windows NT. However, this was accomplished with a little ingenuity. The VDM breaks down all calls into an instruction execution unit (IEU). On the Intel architecture, these calls can be directly executed by the processor. On RISC systems, they are converted by an Intel emulation routine written by Insignia Solutions, Ltd. (These are the same people who write SoftWindows for the Macintosh.)
In versions of Windows NT prior to 4.0, the emulation code was based on a 286-class processor. As a result, 16-bit Windows programs that required 386 enhanced mode, such as Microsoft Office, could not run under the WOW emulation on RISC processors.
One of the major advancements of Windows NT 4.0 is that this emulation routine was upgraded to include full compatibility with the 486 instruction set.
The POSIX Subsystem

As I have noted elsewhere, Microsoft paid close attention to various open systems standards when developing Windows NT. They recognized that supporting open systems would help their new advanced operating system gain acceptance in the market.
One of the most frequently cited standards supported by Windows NT is POSIX. POSIX stands for Portable Operating System Interface and was developed by the IEEE as a method of providing application portability on UNIX platforms. However, POSIX has since been integrated into many non-UNIX systems.
There are many levels of POSIX compliance ranging from POSIX.0 to POSIX.12. These levels represent an evolving set of proposals, not all of which have been ratified as standards.
The POSIX subsystem in Windows NT is POSIX.1 compliant. POSIX.1 compliance requires a bare minimum of services, which are provided by Windows NT. When a POSIX application runs on Windows NT, the POSIX subsystem is loaded and it translates the C language API calls (those required for POSIX.1 support) into Win32 API calls, which are then serviced by the Win32 subsystem.
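The translation idea can be sketched as a simple dispatch table. The Win32 function names in the table are real, but the mapping shown is a loose illustration and the stand-in "subsystem" functions are invented for the example; the actual thunking layer is far more involved.

```python
# Sketch of the translation idea: the POSIX subsystem receives a
# POSIX.1-style call and re-expresses it as a Win32 request that the
# Win32 subsystem can service.  The mapping and the stand-in service
# function below are illustrative only.

POSIX_TO_WIN32 = {
    "open":  "CreateFile",
    "read":  "ReadFile",
    "write": "WriteFile",
    "close": "CloseHandle",
}

def posix_subsystem(call, *args):
    """Translate a POSIX API call name into its Win32 counterpart."""
    win32_call = POSIX_TO_WIN32[call]
    return win32_subsystem(win32_call, *args)

def win32_subsystem(call, *args):
    # Stand-in for the Win32 server actually performing the work.
    return f"{call}{args}"

assert posix_subsystem("open", "/tmp/f", "r") == "CreateFile('/tmp/f', 'r')"
```

The key architectural point survives even in this cartoon: the POSIX subsystem owns no file or screen machinery of its own; it is a client of the Win32 subsystem.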
Because of the limited nature of POSIX.1, the POSIX subsystem on Windows NT does not provide any support for networking or system security.
Some of the features required by POSIX.1 follow:
Case-sensitive filenames, which are supported by the NTFS file system
Multiple filename support (hard links), which is also supported by NTFS
File system traverse checking, which is controlled by the Bypass traverse checking user right. To enable POSIX compliance for a particular user, you must remove this right from that user.
Each POSIX application runs in its own memory address space and is preemptively multitasked.
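The hard-link feature in the list above is easy to demonstrate. The portable Python sketch below uses `os.link` on whatever filesystem it runs on (the filenames are invented); NTFS provides the same semantics to the POSIX subsystem, where two directory entries name one underlying file.

```python
# Demonstrating multiple filenames for one file (hard links).
# Requires a filesystem that supports hard links, as NTFS does
# for the POSIX subsystem.

import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    original = os.path.join(d, "report.txt")
    alias = os.path.join(d, "also_report.txt")

    with open(original, "w") as f:
        f.write("quarterly numbers")

    os.link(original, alias)            # second filename, same file

    with open(alias) as f:              # reading via the new name
        data = f.read()
    nlink = os.stat(original).st_nlink  # both names count as links

assert data == "quarterly numbers"
assert nlink == 2
```

Deleting one name simply drops the link count by one; the file's data disappears only when the last name is removed.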
Different people have different opinions on why the POSIX subsystem was included with Windows NT. Some people think that it was done to increase application availability for end users. Others think it was done in the noble pursuit of conforming to open systems standards. Still others think it was done to demonstrate the interoperability of NT with UNIX platforms.
My take on this is different. While I accept that the POSIX subsystem is well implemented, I personally think the entire reason for its inclusion was to meet government buying criteria. For those of you not familiar with the process, it is not uncommon for government agencies to require that a particular system meet criteria based on various open systems standards. One such example is the Federal Information Processing Standards (FIPS). By including the POSIX subsystem, Windows NT can be sold into large markets from which it might otherwise have been excluded for lack of support for this standard.
So you see, it's really a marketing ploy to increase NT's market penetration and to prevent people from using sometimes irrelevant purchasing criteria to exclude Windows NT from their environment.
NT is a great operating system. While the ease with which the POSIX subsystem was included with NT is indicative of NT's power and flexibility, don't be fooled into making too much of the POSIX subsystem.
The OS/2 Subsystem

Windows NT was originally slated to be the next generation of OS/2. As such, it would have included the OS/2 Presentation Manager interface as its primary interface and would have run all standard OS/2 applications, including character-based and GUI-based programs.
However, when the decision was made to give NT the Windows interface and to build it as the successor for the Windows platform, the emphasis on OS/2 support was diminished.
The result was an OS/2 subsystem capable of running standard OS/2 1.x character-mode applications. It cannot run OS/2 2.x graphical applications.
The OS/2 subsystem only works on Intel-based systems, not on RISC platforms.
The OS/2 subsystem is implemented as a protected environment subsystem, much like the POSIX subsystem. It translates OS/2 API calls into Win32 API calls that are serviced by the Win32 subsystem.
The OS/2 subsystem and each OS/2 application run in their own 32-bit protected memory spaces and are preemptively multitasked with respect to each other and to the other applications running on the system.
In addition to a core set of OS/2 APIs, the NT OS/2 subsystem implements many LAN Manager APIs, including NetBIOS, mailslots, and named pipes. In this way it differs from the POSIX subsystem, which exposes no API support for networking.
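The named-pipe style of IPC that the OS/2 subsystem exposes follows a simple client/server pattern: a server creates a pipe under a well-known name, and a client opens that name and exchanges messages. The sketch below illustrates the pattern using a POSIX FIFO as a stand-in; the actual OS/2 and LAN Manager APIs (DosCreateNPipe and friends) are different, and the pipe name and message are invented.

```python
# Illustrating the named-pipe request/response pattern with a POSIX
# FIFO standing in for a LAN Manager named pipe.  The real OS/2 API
# is different; only the pattern is the point here.

import os
import tempfile
import threading

with tempfile.TemporaryDirectory() as d:
    pipe_path = os.path.join(d, "pipe")   # the pipe's well-known name
    os.mkfifo(pipe_path)

    def server():
        # The server opens its end and answers with a greeting.
        with open(pipe_path, "w") as pipe:
            pipe.write("hello from the server\n")

    t = threading.Thread(target=server)
    t.start()

    with open(pipe_path) as pipe:         # blocks until the writer opens
        reply = pipe.readline().strip()
    t.join()

assert reply == "hello from the server"
```

Because the pipe has a name, client and server need share nothing but that name, which is what makes named pipes suitable for networked IPC.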
Summary

Windows NT is a modular and well-planned operating system that provides a robust structure for today's most demanding applications. In addition, it provides room for growth while adhering to its original design goals. This chapter presented you with an introduction to the architecture of Windows NT, and an insight into what makes Windows NT the best enterprise platform for both small business needs and mission-critical client/server applications.
The chapter began with an overview of the components of the NT architecture, as well as an introduction to some concepts required for discussing it, such as the difference between privileged kernel mode and unprivileged user mode. It continued with a more in-depth look at the NT components, including the Hardware Abstraction Layer (HAL), the NT kernel, the NT Executive services, and the protected environment subsystems.
In each of these sections, you saw the sub-components and how they work together to provide a rich client/server environment. For example, you saw the interoperation of the members of the NT Executive, including the Object Manager, Process Manager, Virtual Memory Manager, Local Procedure Call Facility, Security Reference Monitor, and I/O Manager, and how they provide services to NT's microkernel, as well as to the protected environment subsystems.
The chapter ended with a look at each of the protected environment subsystems: Win32, POSIX, and OS/2. When looking at the Win32 subsystem, you saw its key relationship to the rest of the NT environment, and how it supports not only Win32 applications, but also 16-bit Windows applications, DOS applications, and I/O services for the POSIX and OS/2 subsystems.
This architecture is what sets Windows NT apart from Windows 3.x and Windows 95, which, for compatibility reasons, could not build a new architecture from the ground up, but rather had to build on the crippled and antiquated foundation of DOS. It is also from this architecture that NT gets the performance and reliability that make Windows NT Server such a desirable server and network operating system.