Chapter 27

Windows NT Server as a Database Server

No one I know of has ever accused Microsoft of lacking ambition. The Windows NT Server product is certainly no exception to this rule. Not only did the design team intentionally take on industry-leader Novell in the file server and print server markets, but they targeted the client/server database server market as well. About the only computer market they aren't in with NT Server is the "dumb" terminal market of the mainframes (although you can get Telnet servers that enable NT Servers to do some of this type of work). This chapter addresses the use of NT Servers to operate database management systems that are accessed by remote clients.

This is somewhat of a challenge. There are entire guides written about the various database management systems you can purchase for Windows NT. These guides cover topics such as database administration, application development, and even specialized topics such as database tuning. Obviously, I cannot cover all these topics in one chapter. Instead, I focus on the interaction between the more common database management systems (Oracle, Microsoft SQL Server, and Microsoft Access's Jet) and the Windows NT system. The chapter is divided into the following parts:

  • First impressions of databases under Windows NT
  • Types of database management systems
  • Interactions between the operating system and database management system
  • Oracle under Windows NT
  • Microsoft SQL Server under Windows NT
  • Microsoft Access's Jet database engine under Windows NT
  • A quick discussion of ODBC and OLE

First Impressions of Databases Under Windows NT

My first exposure to Windows NT was when I was asked to serve as database administrator for an Oracle Workgroup Server database. The goal of this project was to store a fair amount of data within this database and to use some of the more advanced (and demanding) features of the database, such as storing audio in the database. I was more than a bit skeptical, coming from the large UNIX server environment. I was concerned that even if Microsoft had done a good job on the operating system, the limited processing and data transfer capabilities of a PC-based server would quickly overload the system and cause miserable response times.

The good news is that I was actually quite surprised. Even though the Windows NT product had not been out on the market very long, and its main focus at the time was combating NetWare as a file and print server, I was able to get more than adequate response times in my development environment. Don't get me wrong: I wouldn't try to put a multi-terabyte database on a Pentium PC server (although it might be interesting to see a DEC Alpha running NT trying this out). However, most databases are just not that big. Also, with distributed database and database replication technologies coming into their own, you can often design a series of smaller databases that accomplish the same functions as a single large database in traditional environments.

When you work on an NT Server for such functions as word processing, you will notice that it seems very slow compared with the NT Workstation product on the same computer (or even a smaller computer with less memory), even if you are the only one in the building and there are no other jobs running on the computer. This is actually an intentional design feature that distinguishes the server version of NT from the workstation version. The server version is tuned to respond to background server processes (services) that are designed to serve other users who might want some of your server's computing capacity. The person at the server's keyboard gets the leftover CPU cycles. This is a key feature that enables NT Server to function as a database server. Almost all the big, multi-user database management systems use a series of background processes to store and retrieve information from the database and perform all the necessary housekeeping functions. Therefore, when running a database management system, you need a system that gives these background processes the time they need to keep information flowing between the database and the users. Keep this in mind if your boss ever proposes that you use the server as your personal PC to save money.

Types of Database Management Systems

I bet that title scared some of you. Fear of a lengthy discussion about the relative merits of the hierarchical model versus the relational model flashed through your head. What this section is actually devoted to is a discussion of the alternative architectures from the operating system's point of view. Database management systems are among the most challenging applications I have run across when it comes to using operating system resources. They always place heavy demands on memory and, depending on the type of processing, they often stress the input/output subsystems as well.

All too often, when a database server is performing poorly, the system administrator blames the database, and the database administrator blames the server and operating system. Of course, everybody blames the programmers who wrote the applications. With a little knowledge in the hands of the system administrators, many of these problems could be avoided. This section starts this discussion with a presentation of the various architectural alternatives for database management systems on NT servers that you are likely to come across. These include the following:

  • Client/server versus host/terminal processing
  • File access versus background processes for data access
  • Distributed databases
  • Database replication

The first industry buzzword I throw at you in this section is client/server computing. Quite simply, this involves applications that divide their processing between two or more computers. This is a simplified definition, but it is good enough for what I am working with here. The contrast to client/server computing is host/terminal computing. Figure 27.1 illustrates these two architectures.

Figure 27.1

Client/server versus host/terminal computing.

It is often easiest to start with the host/terminal computing architecture. It was the first computer user access architecture to evolve after the days of punched cards. It involves hooking a controller to the big central computer that can take input from a number of terminals. These terminals are usually referred to as "dumb" terminals because they have very limited memory and almost no capacity to process information. They merely display information in the format the central computer has given them.

The alternative, client/server computing, splits the work between several computers, each of which has some degree of "smarts." This can be thought of as a divide and conquer approach for computer systems. The user sits at a workstation that performs most of the display functions and possibly some of the data analysis. The server is assigned the duties of data access and transmission.

Once you have grasped the basic concepts of client/server computing, I have another wrinkle to throw at you. There are actually multiple flavors of client/server computing. The differences lie primarily in how the labor is divided between the client and the server. Although the client is typically assigned the display functions and the server the database access, the question comes up as to where the business processing functions are performed. Although there are many minor variations between different vendor products, the key concepts to grasp are the two-tier and three-tier client/server architectures. There are guides devoted to the intricate details of client/server architectures if you are interested; however, this is a Windows NT guide, so you can get away with just understanding the basics. The two-tier and three-tier architectures are illustrated in Figure 27.2.

Figure 27.2

Two-tier and three-tier client/server architectures.

The key point to pick up here is that the two-tier architecture splits business processing between the user's client workstations and the database server. The machines might become overloaded as you try to implement a large number of users running complex business processes. Remember, your poor NT Server might be asked to support dozens or hundreds of users at a given time. Also, many of your users might still have older client workstations (such as 80386-based PCs). You might also need a large degree of coordination at a central site between users of certain applications. This is where another server whose sole function is to implement business processing logic comes into play. It interfaces with clients and the database to coordinate the business activities and off-load the other processors.

Now to tie these concepts to the Windows NT environment (yes, this is still an NT guide). Most of the Windows NT installations I have seen operate in a client/server mode. Although you can set up terminal functions under NT, it is really designed to use the built-in networking functions (NetBEUI, TCP/IP, and IPX/SPX, which are described in more detail in Chapter 7) to enable free communication between client processes and server processes. You might have an NT workstation running your front-end database applications (the ones that the users actually see) and an NT Server running the database management system. The business logic can be split between the client and server as the developers see fit or even off-loaded to an applications server (which could also be running NT). In a sense, Microsoft designed NT around the same framework that Sun Microsystems made so popular in the UNIX world[md]that of the computer actually being a collection of computing devices located across a network that come together to serve the user's needs. This is actually fairly easy once you have a network of devices that can communicate easily with one another, which I have found in the NT, Windows 95, and UNIX environments. I like to imagine this environment something like that presented in Figure 27.3.

Figure 27.3

Networked computers forming a virtual computer.
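To make the division of labor concrete, here is a minimal sketch in Python of the idea: one process owns the data and the lookup (the server), and another process only asks for a record and formats what comes back (the client). The port number, the employee table, and the one-request protocol are all invented for illustration; no real database product works this simply.

    import socket
    import threading
    import time

    EMPLOYEES = {"1001": "Pat Jones, extension 4321",
                 "1002": "Lee Smith, extension 1234"}   # the server's "database"

    def server(port=50007):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        conn, _ = srv.accept()
        key = conn.recv(64).decode().strip()            # the server does the data access
        conn.sendall(EMPLOYEES.get(key, "not found").encode())
        conn.close()
        srv.close()

    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.5)                                     # give the server time to start listening

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", 50007))
    client.sendall(b"1001")                             # the client asks for one record...
    print("Directory lookup:", client.recv(256).decode())  # ...and handles the display
    client.close()

In a two-tier system the client would also carry the business rules; in a three-tier system a middle server would sit between these two processes and coordinate them.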

Now that you have the basics out of the way, I will introduce a few of the concepts that are creeping into real systems today. The basic theory presented earlier works well (divide and conquer to get the job done using client/server architectures). However, there are two circumstances that present problems when you actually try to implement these systems (see Figure 27.4):

  • How do you provide adequate communications capabilities for remote locations in your organization?
  • How do you link together the various departmental databases to enable your organization to obtain an overall picture of what's going on?

Figure 27.4

Problems with client/server implementations.

If you have not done a lot of work with computer networks, the question of adequate communications for remote locations might seem strange. However, the reality of the current market is that it is relatively inexpensive to transmit data signals over a range of a few hundred yards, but it gets expensive when you have to transmit the same volume of data over hundreds or even thousands of miles. The transmission lines running to other cities, or even within your own city, are controlled by communications companies that are out to make a profit and have access to a limited commodity (the transmission system). It can cost hundreds or thousands of dollars per month to maintain communications circuits. You might have had communications circuits for years when you used mainframe links; however, the communications companies charge not only by the distance, but also by the transmission capacity. You might be impressed with the vast amount of data that is at your fingertips with your new client/server applications. However, this high volume of data, multiplied by the large number of users who now have access to your system, requires you to install a very robust communications system, and that can be expensive.

Now, permit me a slight diversion[md]the Internet. Almost everyone in the computer industry is talking about it (try to get Bill Gates or Larry Ellison to talk about anything else). The Internet can be thought of as a collection of networks someone else has already installed that you can connect to with just a local communications circuit (see Figure 27.5). The Internet could be the low cost transmission system everyone has been dreaming about. Windows NT is very well integrated with TCP/IP (and therefore Internet) communications, and NT Server actually powers a fair number of the services you find on the Internet. However, a few issues need to be worked out before the Internet can be used as the solution to everyone's problems. First, many organizations are gun-shy about the idea of transmitting their critical business data over a public network. Anyone along the way could intercept the packets, and your competitors might even be operating key links to various parts of the Internet. Many people are working on security for the Internet, and they will probably solve this problem soon. The second problem is the transmission capacity of the system. Much of the current Internet transmission capacity is funded in the United States with government funds. The pressures of cost-cutting, desires to limit transmission of pornography and, of course, a few suggestions from telephone company lobbyists might limit expansion of the current Internet transmission capacity. If a large number of companies start doing business on the already busy Internet, it might become overloaded, and performance might degrade. Anyway, it is something to consider if you are really strapped for communications capabilities to remote sites.

Figure 27.5

An overly simplified picture of the Internet.

Now back to the main discussion. The second problem that creeps into many client/server systems is that of increased demand to integrate data from various departments. When everyone had to live with nightly mainframe printouts, they had enough trouble wading through their existing printouts and relied on weekly reports from other departments to integrate business functions. Once they realize how easy it is to get at their business data using computer networks, they usually get demanding and want access to all the business data that affects them so they can make the business function better as a whole. This puts pressure on the information systems department to be able to link together a large number of databases that, taken together, are too large to fit on any single computer.

Seeing this trend in the market, the database and computer systems vendors have started to produce products to support the concept of distributed databases. This works out well for Windows NT because most NT servers are relatively small (such as high-end PC servers) and can be linked together to form a powerful computing environment. The key components of a distributed database include the following:

  • Good interfaces to operating system data transmission facilities (such as TCP/IP).
  • Client and server tools that can communicate with multiple databases at the same time.
  • Gateway products that link computers and database management systems from different vendors together in a relatively seamless manner from the user's point of view.
  • Database management systems that can deal with the time delays that might be involved with writing data to remote systems.
  • Efficient data transmission systems between the various computers and user workstations.

Windows NT supports this type of environment in a variety of ways. First, it is very network-friendly. Because networking was part of its base design (as opposed to being a product added decades after the first release of the operating system, as is the case in many mainframes), it supports efficient links between applications (including database management systems) and its networking utilities. Also, with its multitasking features, it is capable of supporting numerous background processes working together behind the scenes to take care of the communications functions while the user interacts with the front-end application.

Before I leave this topic, I want to extend the concept of distributed databases just a bit. This is the direction that database vendors (Oracle, and Microsoft with its SQL Server product) seem to be heading. The basic concept is that of replicated databases. Imagine that you wish to have a single directory of employees and their associated personnel information synchronized throughout your entire multinational organization. You could have everybody log into the home office in Cleveland through an extensive corporate network to look up the phone number of the guy two floors up. However, there is another way: keep a master database in Cleveland that contains the verified and updated information as entered by the central personnel department. The computer network to Cleveland is very busy during the day, but it is used lightly at night. Because there is no great penalty for having telephone numbers that are a day old, why not transmit the updated personnel databases to all regional offices every night? This gets them the information they need and saves your limited communications bandwidth for important business processing.

Companies have been implementing applications that transfer selected bits of information between computers for years. They write code that transfers the information to tape or even across the network, where another program loads the information into the local database. Now imagine that you could tell your database management system to keep multiple tables in multiple databases in synch and have it take care of all the details. This is what database replication is all about. The database management systems build the utilities for you to take care of all those annoying details. This is especially attractive to many Windows NT installations, where you are running near capacity during normal business hours and don't have a lot of legacy software written in older development environments to hold you back. It is often worth spending some time investigating these technologies to save yourself weeks (that you might not have) developing complex routines or trying to justify additional network bandwidth to the central office.
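To make the idea concrete, here is a toy sketch in Python of the nightly job described above: copy only the rows that changed since the last run from the master to a regional copy. The "tables" are plain dictionaries and the names and dates are invented; a replication-capable database management system does this (plus conflict handling, scheduling, and recovery) for you.

    from datetime import datetime

    master = {
        101: {"name": "A. Chen",  "phone": "x2001", "modified": datetime(1996, 5, 1)},
        102: {"name": "B. Patel", "phone": "x2002", "modified": datetime(1996, 5, 14)},
    }
    regional = {101: {"name": "A. Chen", "phone": "x2001", "modified": datetime(1996, 5, 1)}}

    def replicate(source, replica, last_sync):
        """Copy rows changed since the last nightly run from source to replica."""
        for key, row in source.items():
            if row["modified"] > last_sync:
                replica[key] = dict(row)                 # insert or overwrite the changed row
        return max(row["modified"] for row in source.values())

    last_sync = datetime(1996, 5, 10)                    # when the previous night's run finished
    last_sync = replicate(master, regional, last_sync)
    print(sorted(regional))                              # row 102 has now reached the regional copy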

OK, so where am I now? The database administrator has laid a bit of database theory and product information on people who are interested in Windows NT, the operating system. This was not just an attempt to get sympathy from operating system proponents for the complexity of managing modern databases. Instead, it lays the foundation for the next section, which talks about the types of interactions between applications, database management systems, and the operating system, specifically Windows NT.

Interactions Between the Operating System and Database Management System

Perhaps you are in the position of having worked for a while using Windows NT as a file and print server. Someone tells you that they need to use your server to host a database for a critical corporate function (such as coordinating arrangements for use of the executive condominium in Barbados). It is important to understand the demands placed on your operating system and server computer. You can then compare these requirements with your current capabilities and load to ensure that you can support this new requirement.

For purposes of this discussion, I divide the interactions between the operating system and database management system into the following categories:

  • Shared server memory areas
  • Server background processes
  • Server disk storage capacity
  • Server input/output processing capacity
  • Network communications processing capacity
  • Network transmission capacity

I start with a discussion of shared server memory areas. This is the key to understanding the differences between your common print and file sharing functions under NT and database management systems. Most server database management systems use large areas of memory to store database and processing information. Why? The answer is simple: speed. Memory is several orders of magnitude faster to access than disk drives (even the good ones). Although it is OK to scan a small file of a few hundred kilobytes to find the information you need, you don't want to scan large database files of megabytes or even gigabytes.

With everyone wanting to get more for less, database vendors are in a big competition to see who can get more performance out of less hardware. They want to process more transactions per second or handle larger data warehouses. To do this, they have gotten extremely creative about devising ways to keep frequently accessed information readily available in memory and about using memory to cache information retrieved through efficient transfers from the disk storage files, as shown in Figure 27.6. This figure shows how Oracle uses memory areas. User processes interact with various memory areas while background processes transfer information from the disk drives to the memory areas. When the database management system developers do their jobs well, you will find that the data you want is usually located in the rapid-access memory areas and you therefore do not have to wait for the data to be transferred from the slower disk drives. The bottom line is that the big database management systems such as Oracle and SQL Server ask for a lot more memory than PC people are used to. You can easily see requirements for 8MB to 32MB just for the database itself (in addition to the operating system and all the other services running on the system).

Figure 27.6

Database shared memory areas.
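Here is a minimal sketch in Python of why those shared memory areas pay off: if recently used disk blocks are kept in a small least-recently-used cache, repeated requests are answered from memory instead of going back to the (much slower) data files. The block numbers and cache size are invented, and real database caches are far more sophisticated than this.

    from collections import OrderedDict

    class BlockCache:
        def __init__(self, capacity_blocks):
            self.capacity = capacity_blocks
            self.blocks = OrderedDict()                  # block id -> data, oldest first
            self.hits = self.disk_reads = 0

        def read(self, block_id):
            if block_id in self.blocks:
                self.blocks.move_to_end(block_id)        # recently used, so keep it around
                self.hits += 1
                return self.blocks[block_id]
            self.disk_reads += 1                         # this would be a real file read
            data = f"<contents of block {block_id}>"
            self.blocks[block_id] = data
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)          # evict the least recently used block
            return data

    cache = BlockCache(capacity_blocks=100)
    for block in (7, 8, 7, 7, 9, 8):                     # typical workloads reread "hot" blocks
        cache.read(block)
    print(f"cache hits: {cache.hits}, disk reads: {cache.disk_reads}")   # 3 hits, 3 disk reads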

This can be critical if you are running NT on Intel-based platforms. Although NT supports a large amount of memory (4GB), you rarely find an Intel-based computer that can actually hold that amount. This is especially true of computers that were designed several years ago, when very few applications in the PC world dreamed of needing this kind of memory. There are two types of limitation on memory. The first is the capacity of your motherboard to recognize the memory (many have limitations of 128MB or less). The other is the number of slots you have to hold memory (often 4 slots). This can be a problem, because RAM chips are among the most expensive components in a PC and you always get some on the motherboard when you buy the computer. If you want to upgrade a computer to 64MB of RAM and you have 32MB already, all you have to do is buy 32MB more, right? Maybe. You might also find that you have four 8MB memory chips in your machine that are filling all four of your available slots. Then your only solution is to remove your existing four memory chips and install four 16MB memory chips. Perhaps you can reallocate your other memory chips to users who have recently upgraded their workstations from Windows 3.1 to Windows NT Workstation or Windows 95 (they probably need as much memory as they can get), or sometimes you can get a trade-in deal from vendors who are hungry for your business.

One final note is on the banking of memory slots on the motherboard. It might seem like an annoying hardware detail, but it can come around to bite you. Imagine you have four memory slots, two of which are filled with 16MB memory chips. Your database administrator works with you to determine the memory requirements for a new database, and it turns out that you need an additional 16MB of memory. You just order another one of those 16MB chips and you'll still have room for future expansion, right? Once again, maybe. Computer motherboards often link two slots together in their logical designs and require that these two slots be filled with memory chips of the same size and type. Therefore, you might not be able to mix sizes or leave one slot in these banks empty.
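The arithmetic behind that "maybe" is easy to get wrong at order time, so here is a small Python sketch of the constraint, assuming paired slots that must hold matching modules. The slot counts, bank layout, and module sizes are illustrative only; check your server's documentation for the real rules.

    def additions_needed(existing_banks, target_mb, slots_per_bank=2, sizes=(8, 16, 32)):
        """List single-bank purchases (module size in MB) that reach target_mb,
        given the banks already filled and the rule that a bank holds a matched pair."""
        have = sum(size * slots_per_bank for size in existing_banks)
        return [size for size in sizes if have + size * slots_per_bank >= target_mb]

    # One bank already holds two 16MB modules (32MB total); the DBA wants 48MB.
    print(additions_needed(existing_banks=[16], target_mb=48))
    # -> [8, 16, 32]: a single 16MB module is not a legal purchase; the smallest
    #    upgrade that works is a matched pair of 8MB modules for the empty bank.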

The key to managing the database management system demands on your memory is to understand both the database's memory requirements and the capabilities of your system. The database administrator or vendor should be able to give you an idea of the required memory size. One thing I have always noted about databases is that they tend to grow rapidly, so it is good to allow yourself a little room to spare. You also need to understand the other needs and capabilities of your operating system. You might have to consult a hardware expert or crawl under the hood yourself to determine the amount of memory that is currently installed and the maximum amount of memory that can be installed in your computer. The Performance Monitor in the NT administrator utilities enables you to measure the usage of several resources, including memory, over a reasonable amount of time to understand the memory requirements of your operating system and other services. This tool is discussed in more detail in Chapter 19.


CAUTION:

When you run out of memory, your system starts swapping applications and data to the hard drive, and performance degrades rapidly. Avoid swapping whenever possible. Trust me, it hurts.


Now that I have solved all the memory problems you might run into, it's time to discuss the next load a database management system places on your system: server background processes. You might have wondered how those memory areas that give the database management system all that speed get maintained. What writes the changes from the memory areas to the data files on disk (which stick around after you turn the power off) and gets the anticipated data from disk to memory before the user needs it? The answer for most database management systems is a series of background processes (services and threads under NT) that perform specialized functions when needed (see Figure 27.7). Some of these processes also service user requests. One of the keys to modern database management systems is that the users tell the system what they want and the system figures out the most efficient way to get it. This works well because teams of people work on efficient search algorithms that you would never have time to implement if you had to do it yourself. The end result is a complex series of processes that need to have computer processing capacity to get their jobs done.

Figure 27.7

Background database services.

So how do these background processes impact the NT administrator? I would typically consider two key areas:

  • You have several server processes and threads competing for CPU time. Although NT is mostly a self-tuning operating system, you might have to get involved on larger servers that are running a number of applications.
  • You have to consider the interaction of the processes with one another. Your total throughput in a database management system is determined by the weakest link in the chain of processes that are used to complete the processing tasks. Many of these systems enable you to tune their background processes by allocating threads or creating additional server processes. It is important to understand your database management system and see where the processing bottlenecks are to determine where additional processing resources are required.

The next key resource to discuss is the disk storage system (see Figure 27.8). Databases with large volumes of information need to have a permanent home for the user data; this typically takes the form of an array of fixed disk drives. Databases often take up much more space than would ever be needed by print servers or file sharing. Therefore, it is important to get disk storage sizing estimates before starting a new database project.

Figure 27.8

A disk storage system.

It is also important to lay out the physical configuration of the disk drives. Many PC servers have a limited number of internal disk drive bays, and some of these bays are larger than others. You also have to consider the limit on the number of drives on a given disk drive controller (typically seven for SCSI disk drives and two for IDE disk drives). It turns into a bit of a puzzle to ensure that you have enough bays, cables, and controllers. Due to physical space limitations or drive bay limitations, you might have to remove older, smaller disk drives and replace them with larger capacity drives, especially on Intel-based servers.

Following are a few things you might not think of that might require some disk space (a rough sizing sketch follows the list):

  • The database management system software. (It can consume several hundred megabytes.)
  • Space for database management system log and dump files. (They can provide a lot of useful information. Don't forget to implement routines to clean them out every now and then.)
  • Space for any server-based applications you might be writing (for example, business processing logic for an application server in the three-tier client/server architecture).
  • Additional paging file space for the memory you might have added to accommodate your database.
  • A disk drive to provide you with a place to copy files when you are rearranging other disk drives. (It is sometimes hard to justify but can make your job so much easier. Try selling it as a hot spare for reliability and quick recovery.)
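Here is the rough sizing sketch, in Python, simply totaling the items above with made-up numbers and leaving some room for growth. The figures are placeholders; get the real estimates from the database administrator and the vendor.

    requirements_mb = {
        "user data files (DBA estimate)":        4096,
        "DBMS software":                          300,
        "log and dump files":                     500,
        "server-based application code":          100,
        "additional paging file space":           128,
        "scratch/spare drive for rearranging":   2048,
    }

    total = sum(requirements_mb.values())
    growth_factor = 1.5                          # databases tend to grow; leave room to spare
    print(f"raw total: {total} MB; plan for roughly {int(total * growth_factor)} MB")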

Another key system resource that you have to be concerned with, especially in some Intel-based NT systems, is the disk input/output capacity. Basically, this is how quickly you can transfer information from memory to the disk drives. Although most UNIX systems are built with high-speed SCSI disk drives and controllers, many Intel-based servers have relatively slower data transfer subsystems. In many database servers (even large UNIX boxes or mainframes), the data transfer speeds can be the limiting factor in the overall performance of the database. Therefore, you need to know a few tricks to ensure that you are doing the best you can to provide adequate data transfer capabilities (see Figure 27.9):

Figure 27.9

Keys to data transfer capacity.

  • Understand the transfer capacity and configuration of the controllers and disk drives. Not all controllers are created equal. SCSI-2 is twice as fast as regular SCSI, and it is faster than the IDE controllers you typically find on PCs (although enhanced IDE is worth considering). Also, these controllers are connected to different buses on your computer (PCI is much faster than the ISA bus for Intel-based computers). Anyway, it might pay off to talk with vendors or your favorite hardware guru to see what you can do to arrange things to put the most demanding files on the most capable controllers and drives.
  • You need to balance the transfer needs of the database against those of other operating system services. See where the hot operating system files (such as the Windows NT directory or your print server spool files) are and try to avoid putting your hot database files on the same drive. This can evolve over time, so it can be useful to routinely look at input/output loading to see if some disks are getting a little too warm.
  • Keep an eye on the fragmentation of your disk. NT 4.0 does not come with a built-in defragmentation utility for FAT partitions, unlike many other operating systems (such as Windows 95). As files are added, resized, and deleted from the disk, the operating system might have to use multiple small chunks of disk to store the data (see Figure 27.10). This causes problems for the disk drives and controllers because they have to read one section, reposition the heads on the disk drive, and then read the next section. Disk drives read data much more quickly than they move their heads, so this can really slow down performance (a rough read-cost estimate follows this list). It might be useful to pick up a third-party defragmentation utility or use the NTFS file system for disk drives that are likely to become fragmented.

    Figure 27.10

    Disk fragmentation.

  • Finally, even if you have a nicely defragmented disk drive storing your database files, you might have fragmentation within those data files. The database files on most database management systems contain many tables, which are stored in chunks. As you add rows to a table, it might have to get another chunk of data storage space within the data file that is not contiguous with the previous extent. Therefore, if you are having input/output loading problems, it is worthwhile to ask the database administrator whether the database files have been defragmented recently.
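Here is the rough read-cost estimate promised above, as a Python sketch. The point it illustrates is that the bytes take the same time to transfer either way; the extra cost of fragmentation is head movement. The seek time and transfer rate are invented round numbers, not measurements of any particular drive.

    def read_time_ms(file_mb, fragments, seek_ms=12.0, transfer_mb_per_s=5.0):
        """Estimate the time to read a file that is split into `fragments` pieces."""
        seek_cost = fragments * seek_ms                        # one head move per piece
        transfer_cost = file_mb / transfer_mb_per_s * 1000.0   # same bytes either way
        return seek_cost + transfer_cost

    for fragments in (1, 50, 500):
        print(f"{fragments:4d} fragments: {read_time_ms(100, fragments):7.0f} ms")
    # With these numbers the 100MB transfer itself takes 20 seconds in every case,
    # but 500 fragments add roughly 6 more seconds of pure head movement.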

Next on the list of interactions between the database management system and the operating system is the network communications processing load it places on your server. Typically, NT handles tuning the communications processes you use to connect your database servers, application servers, and workstations together. There are, however, two things for you to consider when implementing the database client/server networking on your computers:

  • First, not all network cards are created equal. Some cards can process several times the data traffic of other cards, even on the same type of network (for example, 10 million bit per second Ethernet). Therefore, it might serve you to get a higher capacity card for your busy servers to ensure that they can keep up with all the workstations out there.
  • Second, some protocols are better for client/server communications than others. I have not had much luck using NetBEUI for Oracle databases. NetBEUI was designed to be a file and print server network protocol. It was not designed with the larger network features that you find in TCP/IP. Therefore, if you have a choice, try using TCP/IP, especially for larger client/server networks.

Last on the list of resources is something that is not in your NT computers and that the operating system lacks any control over. I'm talking about the transmission capacity of the network itself. Why mention it in a guide on NT? Typically, when performance is poor, the users call the database administrator, system administrator, or developers. Eventually, the finger usually falls on the servers and the operating system. You should check out all the things mentioned earlier that you do have control over, but also check out the network as a possible problem. Remember, it takes all the components of a client/server system working efficiently together to achieve overall system performance.

To really see if the network itself is causing the problem, you have to be familiar with the overall network topology. I have included Figure 27.11 as an example. You might have to fight like heck to get one of these drawings out of a network guru, but it is usually worth your effort. There are a few key items to look for when a problem occurs:

Figure 27.11

Network transmission capacity limitations.

  • If the problem exists only for a certain group that is physically separated from the other users, is there a bottleneck leading to that particular group, or is their network segment overloaded itself? Your network consultant should have or be able to get some network monitoring equipment that can investigate this issue. Also, you might be able to see the problem by visual inspection. For example, if you have 100 users located in a remote office that has a 56 kilobit per second line, and all your other offices have 10 users and communications lines several times faster, the problem might be the wide area network line to that first remote location. Remember, unless you do something special, your NT database server treats all connection requests equally.
  • Next, look to see if there is a pattern to the problems. One classic example is when network performance becomes poor at 8 am, 1 pm, and 4 pm. This typically corresponds with everyone logging in to check their e-mail first thing in the morning, right after lunch, and right before they go home. You might also notice a problem that occurs every day at 2 pm that corresponds with the time the engineers are transferring massive data files between computer systems. The solution to this type of problem is usually to try to juggle jobs (for example, moving the engineers' data transfer to 12 pm when everyone else is at lunch) to level the workload. Whatever the solution, this type of analysis combined with NT Performance Monitor data can save you many hours of work trying to figure out why your server is slowing down at certain times of the day.
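If you log counters with Performance Monitor and export the log to a CSV file, a few lines of Python can summarize a counter by hour of day and make the 8 a.m./1 p.m./4 p.m. bulges obvious. The file name, column names, and timestamp format below are assumptions about the export; adjust them to match your own log.

    import csv
    from collections import defaultdict
    from datetime import datetime

    def hourly_average(csv_path, time_col="Time", value_col="Bytes Total/sec"):
        """Average one counter column by hour of day from an exported log."""
        sums, counts = defaultdict(float), defaultdict(int)
        with open(csv_path, newline="") as log:
            for row in csv.DictReader(log):
                stamp = datetime.strptime(row[time_col], "%m/%d/%Y %H:%M:%S")
                sums[stamp.hour] += float(row[value_col])
                counts[stamp.hour] += 1
        return {hour: sums[hour] / counts[hour] for hour in sorted(sums)}

    for hour, average in hourly_average("perflog.csv").items():
        print(f"{hour:02d}:00  {average:12.0f}")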

Did you ever feel like a database administrator has locked you in a room and hit you with more database information than you ever wanted to know? There has been a method to my madness. One of the biggest problems I have run across in computer environments is a lack of understanding of the needs of other specialty groups. On a database server, the database is the reason the server exists. If you know a bit about what the database is doing to your operating system and server, you are in a better position to react to those demands and keep things running smoothly. I hate sitting in a room where one techie says it's not his subsystem that's causing the problem, it's the other subsystem that's to blame, and vice versa. When asked why, they usually resort to "it's obvious" or "I've tested my system thoroughly." If nothing else, you are better armed to determine exactly where the problem is and explain why the other people have to take corrective actions and how you have already taken your corrective actions.

Oracle Under Windows NT

If every database management system were alike, I would be finished with this chapter. However, there are many vendors in the highly competitive computer world. Each of them tries to do some things just a little differently to achieve better performance or cost less or fill some special niche, such as decision support systems. Therefore, I have chosen to examine three of the most common types of database management systems you are likely to run across so that you can get a feel for these differences.

The first system I have chosen is the one with the largest share of the overall database market (depending on what you measure). Oracle runs on computers ranging from mainframes to personal computers running Windows 3.1. They also have a wide product family, ranging from the database management system itself to development tools to full-fledged applications such as financial and manufacturing systems (see Figure 27.12).

Figure 27.12

The Oracle product family.

What would the average NT system administrator need to know about Oracle database management systems running on their server? I concentrate on the database because that is probably the most demanding component for most administrators. The basic architecture of the database management system consists of several memory areas (designated for different purposes and tunable by the DBA to be larger or smaller), a series of background processes (combined into one service per database instance under NT), and the database data files (see Figure 27.13).

Figure 27.13

The basics of the Oracle architecture.

To facilitate communications with your Oracle database administrator, a few definitions are in order:

  • Oracle Instance: This is a collection of background processes (in NT terms, it is a single service you are running) that performs all the interactions between the database's memory areas and its data files.
  • Oracle Database: This is the collection of data files and supporting files that contain the data the users want to get at.
  • Alert Log: This is a file that contains a record of activity (startups, shutdowns, problems, and so forth) of your Oracle Instance. You should know where it is to help you when problems come up. The database administrator sets this up in the Oracle initialization file.
  • Trace Files: When Oracle encounters a serious problem, it is almost always able to write a trace file that describes what the system was doing and what the problem was that caused it to crash.
  • SQL*Net: This is the Oracle utility that communicates with the underlying network software (such as TCP/IP). On the client, it sits between the application and the client networking software; on the server, it sits between the host networking software and the database. The versions have to match, as do the communication protocol options. SQL*Net version 2 has some network configuration files (TNSNAMES.ORA, SQLNET.ORA, and LISTENER.ORA) that must be created by the DBA and loaded on all systems that connect via SQL*Net.
  • SQL*Net Listener: This is the NT service that connects the networking transport services (such as TCP/IP) and the database instance processes. It has to be started with the database services in order to start the Oracle Instance.
  • SQL*DBA: This is an older tool that is still used in Oracle version 7.1. It is a command line with a pull-down menu interface to perform database administrative functions.
  • Instance Manager: This is a GUI-based tool that enables you to check the status of your Oracle instances and start or stop them. I recommend having this as an icon on your desktop to at least enable you to determine if the Oracle Instances are running (a small service-check sketch follows this list). You need to have a special database password to start or stop the Oracle Instances (work this out with your database administrator).
  • SQL*Plus: This is a simple command-line interface that lets you get at the Oracle database and its data. It is quick for those who are familiar with Structured Query Language (SQL), but not of much use to others.
  • Oracle Navigator: This is a newer Oracle tool that first came out under Windows 95. It can be used across the network to query several databases and produce nice, tabular displays of the data in Oracle tables. This is good if you need to check on data (or verify that you can access the database) and are not a SQL guru.
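Because the instance and the SQL*Net listener show up as ordinary NT services, a quick first check when users report trouble is simply whether those services are running. Here is the small service-check sketch mentioned above, in Python, parsing the output of the NT `net start` command. The service names shown follow typical Oracle-for-NT naming (the instance service carries the SID, as in OracleServiceORCL) but are assumptions; confirm the exact names in the Control Panel Services applet.

    import subprocess

    def running_services():
        """Return the started NT services as listed by the `net start` command."""
        output = subprocess.run(["net", "start"], capture_output=True, text=True).stdout
        return [line.strip() for line in output.splitlines() if line.startswith("   ")]

    started = running_services()
    for wanted in ("OracleServiceORCL", "OracleTNSListener"):   # assumed names; check yours
        state = "running" if any(wanted.lower() in name.lower() for name in started) else "NOT running"
        print(f"{wanted}: {state}")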

Now that I have the definitions out of the way, here are a few tips on running Oracle under Windows NT:

  • If you have the Windows NT Resource Kit and Oracle RDBMS version 7.1, look for the SRVANY utility and read its online documentation. This enables you to start the Oracle database automatically when you start your Windows NT system. For whatever reason, Oracle enables you to start the database services (Instance, in Oracle terms) and the SQL*Net services, but it does not open up the database for general use. To do this, you have to use Instance Manager, SQL*DBA, or one of the other Oracle tools. However, you might be busy and forget to start the Instance manually, or your system might be recovering from a power outage and you don't have an uninterruptible power supply (please say it isn't so). Anyway, with the SRVANY utility you can turn a normal batch file into a service you can set for automatic startup. This batch file should run a SQL script using a utility such as SQL*DBA (setting up this file is covered in the Oracle documentation and might vary between releases, so it is best to look it up in the source for your version of Oracle). The SQL script connects to the Oracle database and issues the database start-up command. Anyway, this little utility can save you a lot of pain when you are hurrying to do maintenance and get the system back into operation. Note that this is not needed in Oracle RDBMS version 7.2, which provides its own startup process.
  • I have had very little luck trying to use SQL*Net NetBEUI across an Ethernet connection. When I switched to SQL*Net TCP/IP, I got significantly faster and more reliable connections.
  • Many of the Oracle memory areas are allocated only when needed. Therefore, if you have a system that works fine with 32MB of memory in your development environment, it might take significantly more memory to make it run for a larger number of users in production. Always err on the conservative side, because NT and Oracle performance degrades severely when you start paging.
  • The Oracle tools that run under version 7.1 of Oracle are somewhat strange by my standards. A few of the tools, such as SQL*DBA and Instance Manager, are 32-bit tools and you can run them without setting up SQL*Net on your machine. However, if you want to run most of the other tools, such as SQL*Plus or the Import and Export utilities, you need to install SQL*Net for both the Windows and Windows NT systems on your NT Server. This is because most of these Oracle applications are 16-bit applications. The Oracle database is 32-bit, and Oracle uses SQL*Net to communicate between the 16- and 32-bit worlds. Also remember to load the network configuration files into both the Windows and Windows NT network administration directories to enable these two sides of the world to communicate with one another. However, the version 7.2 Oracle tools eliminate these problems.

This section was actually quite a challenge. You could write an entire guide on Oracle database administration (I know, I did it). It can be a complex beast: powerful and a challenge to discipline. However, it is popular and works fairly well in the NT environment. In fact, Oracle has recently announced that it will add NT to the list of operating systems included in the initial wave of releases when it upgrades the product. This family of operating systems includes the more popular UNIX systems, and the move is a sign that Oracle sees NT as a viable (and hence profitable) platform for its database products. Anyway, this section should provide you with the basics you need to communicate with your database administrator and, combined with previous sections, help you understand what the Oracle database management system is doing to your operating system and server.

Microsoft SQL Server Under Windows NT

Microsoft has its own database management system that it would be glad to sell to you with your Windows NT operating system. It is called Microsoft SQL Server, to differentiate it from Sybase SQL Server, which runs under NT in addition to several flavors of UNIX. Microsoft originally teamed with Sybase to develop this product under Windows NT, but they have recently agreed to go their separate ways on development. Anyway, the Microsoft SQL Server product is something you might have to support on your NT servers and workstations for the following reasons:

  • It is the database that is bundled with and is a prerequisite for several of the other Microsoft BackOffice products, which are discussed in more detail in Chapter 26 (for example, Microsoft Systems Management Server).
  • It is a cost-competitive database for the NT Server environment. For many organizations, it doesn't matter that it runs only under NT; they have no need to select a single database management system that runs on a wide variety of platforms such as UNIX and VMS.
  • Because it is part of Microsoft BackOffice, several third-party developers support it because it comes with their Microsoft Developer Network Level 3 subscriptions.

So what is Microsoft SQL Server like? Although the salespeople might stress the differences between the various products and underlying technologies (such as scalability), I tend to see the similarities. The Microsoft SQL Server architecture is somewhat similar to the Oracle architecture, at least from the operating system administrator's point of view (see Figure 27.14). It has the three key components found in most server database management systems: shared memory areas, background processes, and disk storage files.

Figure 27.14

The basics of the Microsoft SQL Server architecture.

A few of the terms you might want to be familiar with for dealing with Microsoft SQL Server databases include the following:

  • Database: A logical set of data, tables, and other related objects (such as views and indexes) that should share a common function and are stored together within a database device (see device).
  • Device: A Windows NT disk file that stores databases. A database can take up more than one device. SQL Server defines two types of devices: database devices, which store databases, and dump devices, which store backups of databases.
  • SQL Server Engine: The main service providing access to the SQL Server database and devices. This service can be tuned for the number of threads it utilizes.
  • SQL Executive: A service that provides access to replication, task management, event management, and notification services to the users.
  • SQL Enterprise Manager: A tool used to manage your servers and databases. It provides a graphical interface to perform a wide variety of management tasks with SQL Server, including database management, database object management (tables), and event scheduling.
  • ISQL/w: A query tool (the Windows counterpart of the command-line isql utility) used to submit SQL statements interactively to SQL Server (it is somewhat analogous to Oracle SQL*Plus).
  • SQL Performance Monitor: A package that integrates with Windows NT Performance Monitor to enable you to monitor SQL Server similar to the way you monitor other NT resources. This is discussed in more detail in Chapter 19.

Now for a few final notes on dealing with Microsoft SQL Server as your database management system:

  • Allow about an hour for installation of the product. It performs several configuration tasks while setting up your initial databases, and it requires a reboot before you can use SQL Server. This usually needs to be scheduled if you are using a production server.
  • Take advantage of the performance monitoring capabilities that are integrated with those of Windows NT. It is always useful to be comfortable with how to find performance information before you have a performance problem and have a thousand managers breathing down your throat.
  • Try to organize your database devices (files) in a common set of directories (such as \dbdata\production) on all disk drives when you use multiple disk drives for your data. This can make trying to work with problems or expand the system much easier.

Again, Microsoft SQL Server can fill an entire guide by itself. Actually, it has an enormous documentation set stored online that comes with the product. (Some people are still not comfortable working with online documentation, but at least you never leave your copy at home.) The key components of SQL Server were discussed earlier, and you can apply the principles discussed earlier in the guide for dealing with database interactions with the operating system to plan and manage your system more effectively. One side note: I have found that the system administrator is more likely to have to administer a SQL Server database than an Oracle database. That is because Microsoft often integrates its BackOffice system products (such as Systems Management Server) with SQL Server. It is not a formally maintained application such as general ledger, where you have a development group and a support group that contains a database administrator.

Microsoft Access's Jet Database Engine Under Windows NT

A final topic that I want to discuss briefly is the concept of using a simple database file stored on a server to meet modest database needs. Back in the old days (which were not always good), people often stored dBASE files on servers that could be accessed from several PCs on the network. The locking mechanisms were primitive (locking mechanisms prevent one user from overwriting the changes of another user who is accessing the row at the same time), but they met many needs. I wanted to mention that you might consider using something like the Microsoft Access Jet database engine (access to it comes bundled in Visual C++, Visual Basic, and Microsoft Query) to set up one of these simple databases.

If you use this approach, you have to provide a storage area on a shared directory that contains a database file (in Microsoft Access's case, a file with the .mdb extension). People use applications or access tools (such as MS Query) that use the Open Database Connectivity (ODBC) tool set to access this data file. When you create the ODBC data source, you use a standard browse window to select the .mdb file you want to access. This approach does not provide complex transaction logging, recovery, or locking services, but it is relatively simple to implement and might meet many modest user requirements.
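Any ODBC-capable tool or language can then reach the shared file through that data source. As an illustration, here is a minimal Python sketch using the pyodbc module; the data source name, table, and columns are invented, and the sketch assumes the data source has already been set up to point at the .mdb file on the shared directory.

    import pyodbc

    # "CondoSchedule" is an assumed ODBC data source pointing at the shared .mdb file.
    connection = pyodbc.connect("DSN=CondoSchedule")
    cursor = connection.cursor()
    cursor.execute("SELECT guest, week FROM bookings ORDER BY week")
    for guest, week in cursor.fetchall():
        print(guest, week)
    connection.close()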

A Quick Discussion of ODBC and OLE

One of the concepts you might have to deal with when supporting client/server databases is middleware. Middleware can be defined as the software products that are needed to make the applications talk to the operating system networking utilities. The complicating factor is that each database can support several different sets of middleware, and these products can come from several vendors. The middleware stacks are generally as depicted in Figure 27.15.

Figure 27.15

Middleware and database access.

I typically divide middleware into two components that I refer to as upper middleware and lower middleware. The upper middleware is responsible for interfacing with the database or applications. The lower middleware component is responsible for interfacing with the network drivers. In Oracle's case, you have SQL*Net. For Oracle tools (such as SQL*Forms), SQL*Net provides both upper and lower middleware services. However, if you are writing Visual C++ applications, you need to use something like the Open Database Connectivity (ODBC) drivers to link your applications to SQL*Net. Confused? There are also ODBC drivers for Oracle (such as Openlink) that integrate the two layers of middleware into one package.

Just when you get comfortable with the concepts behind ODBC, Microsoft comes out with the OLE 2 (Object Linking and Embedding) utilities that provide an alternative to ODBC. If you were one of the few people to get comfortable with developing for OLE, Microsoft has another database access technology (Data Access Objects, or DAO) that is optimized toward certain types of transactions to the Jet database engine.

Why am I scaring you with all these terms, technologies, and options? What I wanted you to get from this brief discussion of access technologies is an understanding that client/server database environments consist of applications and databases that have been designed to work in client/server environments, along with a set of communications software (middleware) that links everything together. Vendors provide many options for middleware, and you have to take the time to figure out how to connect the various products that are standard in your organization before you field production client/server databases. This can take a fair amount of time and is always best done in a prototype environment before the pressure is on to deliver that system to the field. You might want to try several vendors' products to see which ones work best for the type of database you are deploying. Finally, be prepared for the fact that many middleware errors translate into "something, somewhere is causing some kind of a problem" (in other words, it can take you a bit of time to figure out and correct errors). However, once you get the hang of things, middleware works well and provides a level of service to database users that cannot be beat.

Summary

This chapter was devoted to the use of Windows NT Server to support databases. I covered the ways a database can impact the operating system and how to deal with these effects. Several of the more common database management systems were presented, not to make you an expert database administrator, but to give you a feel for a few of the implementations and their basic terminologies. I have found that database management systems can be a challenge even to the capacities of large UNIX, VMS, and mainframe computers. They use a lot of memory and try to use some complex tricks to increase their performance, and this can cause a few headaches for the system administrator. I hope you now understand the concepts behind these effects and know where to look to see what is happening and how to deal with it.
