

15

Performance Tuning and Optimizing

Give a user an inch, and he wants a mile. If you change a database query so that it runs in one minute instead of five, the user will want it to work in 30 seconds. No matter how fast a database runs, there is always the need to make it go faster. Ultimately, this task falls to the DBA. A DBA really has two levels of responsibility: actual and perceived.

Actual responsibility means the tasks for which a DBA is genuinely responsible: keeping the database available for day-to-day business needs, creating new user accounts, monitoring the overall health of the database, and so on. Perceived responsibility means the responsibility incurred when there is any problem with the database—or even a conflict in the corporate IS structure. A DBA is often asked why the database is down when a link has broken in the WAN, or why the database is performing slowly when a poorly written application is deployed into a production environment.

Because all database problems are perceived to be the responsibility of the DBA, it falls to him—whether he likes it or not—to validate the claims or dispel the rumors. The DBA must have a solid foundation of knowledge to base his decisions on. In many larger IS departments, the DBA may not be responsible for performance tuning. In others, the DBA may be responsible only for database—but not application—performance tuning. At some sites, the DBA is responsible for all performance tuning functions of the database.

This chapter deals with the art of performance tuning.


For more information about performance considerations while designing a database, see Chapter 17, "Designing a Database."

General Concepts in Database Tuning

When you are called on to optimize or tune a system, it is of paramount importance that you distinguish between the two levels of performance tuning: applications tuning and database tuning. They are distinct areas of expertise and are often handled by different people. The DBA should have at least an overview of the importance and functions of each type of tuning.

At the base of everything is the operating system, which drives the physical functionality—such as how to access the physical disk devices. On top of this level rests the RDBMS, which interacts with the operating system to store information physically. Applications communicate with the RDBMS to perform business tasks.

Applications Tuning

Applications tuning deals with how the various applications—forms, reports, and so on—are put together to interact with the database. Previous chapters discussed how a database is little more than a series of physical data files. Essentially, an application is nothing more than a program that issues calls to the database, which in turn are interpreted as physical reads and writes from the physical data files. Applications tuning means controlling the frequency and amount of data that the application requests from or sends to the database.

Here are some general guidelines for tuning applications:

  • Generate an EXPLAIN PLAN on all the queries in the application. This helps you determine whether a query has been properly optimized. The EXPLAIN PLAN is discussed later in this chapter.

  • Check the EXPLAIN PLAN of database views. This is important because views are indistinguishable from tables when they are used in queries. Because the SQL for a view is not executed until it is queried, an inefficient view can drastically slow down the performance of an otherwise efficient application. Be especially wary of joining views with other views.

  • If an application that was performing acceptably begins to perform slowly, stop and determine what has changed. In many cases, queries run fine in test environments and in the first few months of production until data accumulates; an index might now be needed to expedite the database searches. In other cases, however, an index that invalidates existing EXPLAIN PLANs might have been added. This is a real danger when too many people can create indexes on production tables. The more indexes that a table has, the longer it takes to load or change data in a database table; it also impacts the speed with which the database returns query results.

  • Match SQL where possible. Applications should use the same SQL statements wherever possible to take advantage of Oracle's Shared SQL Area. The SQL must match exactly to take advantage of this, as illustrated in the example following this list.

  • Be as specific as possible. The more specific a database query is, the faster a query executes. For example, querying a table by a ROWID is far more specific than querying with the LIKE clause. Unless it is necessary to use less specific queries in an application, always write queries that can use the PRIMARY KEY or other indexed information.

  • Be aware of how often queries are made against the database and whether they are necessary. Avoid overly frequent or unnecessary calls, such as a loop that queries the DUAL table for the name of the current user on every iteration. Each time the loop executes, the query is executed again. Other types of queries are even more expensive. Whenever possible, process data in memory and refrain from querying the database.

  • SQL is not a file handler. One of the most common mistakes in SQL programming is made by people who have previous programming experience with file handlers, such as BTRIEVE or ISAM. Software developers should be wary of writing two separate queries for master/detail relationships—that is, one query for the master and another for the details of that master—instead of a single joined query. The extra round trips add processing overhead that can substantially slow down application programs.

  • Tuning does not solve the problems of poor design. This is the most essential truth in applications tuning. It emphasizes what everyone who has ever worked in systems development knows: Spend time proactively, not reactively. No matter how many indexes are created, how much optimization is done to queries, or how many caches and buffers are tweaked and tuned—if the design of a database is faulty, the performance of the overall system suffers.
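
The two points about matching SQL and being specific can be illustrated with a hypothetical EMP table whose EMPNO column is assumed to be an indexed primary key. The first pair of statements differs only in case, so each occupies its own entry in the Shared SQL Area; the second pair contrasts a primary key lookup with a far less specific pattern search:

SELECT ename FROM emp WHERE empno = 7369;

select ename from emp where empno = 7369;

SELECT ename FROM emp WHERE empno = 7369;

SELECT ename FROM emp WHERE ename LIKE '%SON';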

These are only guidelines for applications tuning. Each site has its own specific problems and issues that affect the problems that occur in applications. More often than not, it is the duty of the developers to tune and modify their own programs without the involvement of the DBA. Because of perceived responsibility, however, the DBA must work with the applications development staff to resolve these problems.

Database Tuning

Whereas applications development addresses how a task is accomplished, tuning at the database level is more of a nuts and bolts affair. Performance tuning at the applications level relies on a methodical approach to isolating potential areas to improve. Tuning at the database level, however, is more hit and miss. It concentrates on things such as enlarging database buffers and caches by increasing INIT.ORA parameters or balancing database files to achieve optimum throughput.

Unlike applications tuning, which can be done by an applications group or the DBA depending on the environment, database tuning is the almost exclusive province of the DBA. Only in rare cases where there are multiple DBA groups, one of which specializes in performance tuning, does database tuning fall outside the domain of the DBA.

At the database level, there are three kinds of tuning:

  • Memory tuning

  • I/O tuning

  • Contention tuning

Each kind has a distinct set of areas that the DBA must examine. Memory tuning deals with optimizing the numerous caches, buffers, and shared pools that reside in memory and compose the core memory structures for the Oracle RDBMS. I/O tuning is concerned with maximizing the speed and efficiency with which the RDBMS accesses the physical data files that make up its basic storage units. Contention tuning seeks to resolve problems in which the database fights against itself for database resources.

There are only four basic steps involved in database tuning. They hold true for all three types of tuning:

  1. Gather information.

  2. Determine optimal changes.

  3. Implement changes.

  4. Monitor the database.

As with applications tuning, the more proactively the process is done, the more effective it is. The process is seldom effective when it is done on the fly or without the proper amount of research.

Operating System Tuning

Tuning at the operating system level is beyond the scope of this chapter. This task falls to the system administrator—only in rare cases to the DBA. However, it is often the role of the DBA to offer suggestions. Some issues to consider are

  • Paging and swapping. At the operating system level, paging and swapping are used to transfer information from the system's memory (RAM) to disk and back again. This enables the system to manipulate more information than it could normally handle in real memory. However, excessive paging and swapping can cause system performance to degrade. The DBA and the system administrator should work together to optimize memory use and reduce or eliminate paging and swapping.

  • Striping, mirroring, and RAID. In many cases, the disk drives write a piece of information across several disks (striping), write identical copies of the information to a pair of disks (mirroring), or write all the information across every disk in a fixed-size set (RAID). These disk drive configurations can help make I/O more efficient by distributing reads and writes across many disks. In some cases, they increase the fault tolerance of the system itself. It is important for the DBA to be aware that many of these configurations have an impact on I/O performance. RAID drives, for example, must access each disk in the set for every read and write operation.

  • Shared memory. Used for communication between processes, shared memory settings are usually configured within the operating system. The Oracle Installation and Configuration Guide gives the minimum settings for configuring shared memory for an Oracle RDBMS. These settings are the minimum required for running the RDBMS. In practice, they should generally be set higher.

  • Maximum processes. One of the drawbacks of Oracle (or any other RDBMS) is the amount of overhead it requires from the system on which it runs. One area in which Oracle incurs extra overhead is processes. The database itself consists of several background processes—PMON, SMON, LGWR, DBWR, and so on—plus an additional process for each user who connects to the database. Although this value can be limited at the database level through the PROCESSES parameter in the INIT.ORA parameter file, it is important to make certain that the operating system supports the number of processes. It is also important to ensure that the number of available processes allows for growth.

  • Maximum open files. A particular problem in an Oracle RDBMS is the maximum number of open files that a single process can hold. The number of files is defined at both the operating system and RDBMS levels. The RDBMS sets its limit with the MAXDATAFILES parameter of the create database statement. There is also a limit at the operating system level, which depends on the configuration of the operating system. In both cases, the DBA can change the value; a brief sketch of both settings follows this list.
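
As a rough illustration of where these two limits live, the fragments below use hypothetical names and values. The first is an INIT.ORA entry; the second is an abbreviated create database statement (a real statement would also specify log files and other clauses):

# INIT.ORA -- hypothetical value; leave room for growth and stay within

# the operating system's per-user process limit

processes = 200

CREATE DATABASE sample

    MAXDATAFILES 254

    DATAFILE '/u01/oradata/sample/system01.dbf' SIZE 50M;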


It is not advisable to change the operating system priority of the Oracle background processes. If these values are altered, the database might process information less efficiently. If you must modify them, set all database processes to the same value.

Performance Tools

In tuning a database, the first and most crucial step is gathering statistics on the current database performance. These statistics provide a benchmark of how the database is currently performing and enable the DBA to gauge progress by measuring improvement.

Viewing SGA and Parameter Settings

Use the Oracle Server*Manager to view current parameter settings for an Oracle RDBMS instance. The show sga command shows the current size and makeup of the SGA. You can also display the INIT.ORA parameters with the show parameter command. To display only a particular parameter, add it to the command. For example,

% svrmgrl

SVRMGR> Connect internal

Connected.

SVRMGR> show parameter block

All the database parameters are shown, even ones that have not been explicitly set in the INIT.ORA parameter file. Parameters that the DBA has not set are shown with their default values. By spooling this list to a data file, the DBA can get an accurate snapshot of a database's settings.
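
As a minimal sketch, assuming Server Manager's spool command is available on your platform and using an arbitrary file name, the snapshot might be captured like this:

% svrmgrl

SVRMGR> connect internal

SVRMGR> spool init_params.lst

SVRMGR> show parameter

SVRMGR> spool off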

UTLBSTAT and UTLESTAT

To determine what needs to be fixed in an Oracle RDBMS instance, you must first determine what is broken. In some cases, performance problems occur sporadically; usually, however, they have a specific pattern. Do they occur around lunchtime? At night? Early in the morning? One of the keys to successful performance tuning is being able to identify when the problem occurs.

Oracle provides tools that enable you to examine in detail what the Oracle RDBMS was doing during a specific period of time. They are the begin statistics utility (utlbstat) and the end statistics utility (utlestat). These scripts enable you to take a snapshot of how the instance was performing during an interval of time. They use the Oracle dynamic performance (V$) tables to gather information.


It is important to use the utlbstat and utlestat utilities only against a database instance that has been running for a while. Because an Oracle RDBMS instance reinitializes its dynamic performance tables during database startup, information gathered from a database that has not been running long enough to accumulate activity is inconclusive.

To use utlbstat and utlestat, the database must have been started with the value of TIMED_STATISTICS in the INIT.ORA parameter file set to TRUE. Oracle does not collect some of the information required for the report if this parameter is not set to TRUE. Setting TIMED_STATISTICS to TRUE, however, causes the database instance to incur overhead. The amount is small—only about 4-8 percent in quantitative terms—and it is necessary to take an accurate snapshot of the database performance. Many DBAs set this parameter to TRUE only when they gather statistics.
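
The parameter itself is a single INIT.ORA entry; for example,

# enable timing data before taking the snapshot; many DBAs set this

# back to false after the statistics have been gathered

timed_statistics = true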

Once you have set the required parameters, the database has run for a sufficient period of time, and you have identified the window, you take the snapshot by using utlbstat. To execute either script, you must have the ability to connect internal to the database. Running utlbstat tells the RDBMS instance to begin gathering statistics until told otherwise. It is executed as follows:

% svrmgrl

SVRMGR> @$ORACLE_HOME/rdbms/admin/utlbstat

From the moment when this script is executed, the Oracle RDBMS instance gathers performance statistics. It continues to do so until you run the utlestat script, which stops gathering performance statistics. It is important that the database remain active and not be shut down while utlbstat is running.

% svrmgrl

SVRMGR> @$ORACLE_HOME/rdbms/admin/utlestat

When you run utlestat, the database creates a report called REPORT.TXT in the current directory, which contains the statistical information gathered. Each report contains the following information:

  • Library cache statistics

  • System summary statistics

  • System-wide wait event statistics

  • The average length of the dirty buffer write queue

  • File I/O statistics

  • SGA and cache statistics

  • Latch statistics

  • Rollback segment statistics

  • Current initialization parameter settings

  • Dictionary cache statistics

  • Start and stop time statistics

A sample report called REPORT.TXT is included on the CD-ROM and shows what a report produced by utlestat might look like.

Generating the report is simple; interpreting it is another matter entirely. The rest of this chapter looks at what this information means. The report itself gives some brief hints. When in doubt, always remember to keep hit rates high and wait times low.

EXPLAIN PLAN

Performance tuning does not always have to happen at a global, database-wide level. In theory, most tuning should take place at much lower levels, where the scope is more manageable and the performance impact is more easily measured. A fundamental truth of database tuning and optimization is that performance tuning is not sorcery or magic. Optimizing a database will not make a poorly tuned application run faster; the reverse is also true, though less common. It is important to examine how the database handles processing at the application, or SQL, level.

To do this, Oracle provides a tool in the form of the EXPLAIN PLAN, which enables the DBA to pass a SQL statement through the Oracle optimizer and learn how the statement will be executed by the database—the execution plan. That way, it is possible to learn whether the database is performing as expected—for example, whether it uses an index on a table instead of scanning the entire database table.

Several factors can affect the results returned by an EXPLAIN PLAN. They include

  • Changes in statistics when running the database under the Cost-Based Optimizer

  • The use of HINTS under the Rule-Based Optimizer that cause the query to select a particular execution path

  • The addition or deletion of new indexes on one of the tables in the SQL statement when running the database under the Rule-Based Optimizer

  • Subtle changes in the WHERE or FROM clause of a SQL SELECT statement when running the database under the Rule-Based Optimizer

  • The presence of database objects with the same name as the object being referenced in the schema of the user executing the query

It is important to understand that the results of an EXPLAIN PLAN are, therefore, by no means fixed and finite. The DBA must be aware of changes made to database objects—such as adding new indexes—and how fast the tables are growing.

The Oracle RDBMS implements the EXPLAIN PLAN by storing information about how a query will be executed in a table within the user's schema. The table must exist for the EXPLAIN PLAN to work. To create the table, the user must execute the following script. Of course, he must have the CREATE TABLE privilege and either the RESOURCE role or a quota on his default tablespace.

% svrmgrl

SVRMGR> connect scott/tiger

Connected.

SVRMGR> @$ORACLE_HOME/rdbms/admin/utlxplan.sql

Statement Processed.

Once the table has been created, an EXPLAIN PLAN can be generated from a query by prefacing the query with the command to perform an EXPLAIN PLAN. The following script shows how to format a query for an EXPLAIN PLAN:

CONNECT /

EXPLAIN PLAN

SET STATEMENT_ID = 'QUERY1'

INTO PLAN_TABLE FOR

SELECT O.ORDER_DATE, O.ORDERNO, O.PARTNO, P.PART_DESC, O.QTY

FROM ORDER O, PART P

WHERE O.PARTNO = P.PARTNO;

Note the SET STATEMENT_ID and INTO clauses of the EXPLAIN PLAN. The value of SET STATEMENT_ID is used to uniquely identify the rows that this EXPLAIN PLAN stores in the table; it can be virtually any string up to 30 characters in length. The INTO clause, on the other hand, tells the EXPLAIN PLAN where to place information about the query execution. In the previous example, the execution of the query is identified as QUERY1 and has its information stored in the table PLAN_TABLE.

Now that the EXPLAIN PLAN has loaded the table with information, there is the obvious question of how to retrieve and interpret the information provided. Oracle provides a script in the Oracle7 Server Utilities Guide that displays information in a tree-like fashion. It is

SELECT LPAD(' ', 2*(LEVEL-1))||operation||' '||

       options, object_name "QUERY PLAN"

FROM plan_table

START WITH id = 0 AND statement_id = 'QUERY1'

CONNECT BY PRIOR id = parent_id

/

By running a SQL query through the EXPLAIN PLAN, a pseudo-graph similar to the following is produced:

QUERY PLAN

------------------------------------------------------------------------------

  SORT                            ORDER BY

    NESTED LOOPS

      FILTER

        NESTED LOOPS                    OUTER

          TABLE ACCESS                    FULL                   HEADER

          TABLE ACCESS                    BY ROWID               DETAIL

            INDEX                           RANGE SCAN           DETAIL_PK

            INDEX                           RANGE SCAN           DETAIL_PK

          TABLE ACCESS                    FULL                   HEADER

          TABLE ACCESS                    BY ROWID               DETAIL

            INDEX                           RANGE SCAN           DETAIL_PK

            INDEX                           RANGE SCAN           DETAIL_PK

        NESTED LOOPS                    OUTER

          TABLE ACCESS                    FULL                   HEADER

          TABLE ACCESS                    BY ROWID               DETAIL

            INDEX                           RANGE SCAN           DETAIL_PK

            INDEX                           RANGE SCAN           DETAIL_PK

          TABLE ACCESS                    FULL                   HEADER

          TABLE ACCESS                    BY ROWID               DETAIL

            INDEX                           RANGE SCAN           DETAIL_PK

            INDEX                           RANGE SCAN           DETAIL_PK

      TABLE ACCESS                    BY ROWID                   DETAIL

        INDEX                           RANGE SCAN               DETAIL_PK

        INDEX                           RANGE SCAN               DETAIL_PK

      FILTER

          TABLE ACCESS                    FULL                   HEADER

          TABLE ACCESS                    BY ROWID               DETAIL

            INDEX                           RANGE SCAN           DETAIL_PK

            INDEX                           RANGE SCAN           DETAIL_PK

          TABLE ACCESS                    FULL                   HEADER

          TABLE ACCESS                    BY ROWID               DETAIL

            INDEX                           RANGE SCAN           DETAIL_PK

            INDEX                           RANGE SCAN           DETAIL_PK

        NESTED LOOPS                    OUTER

          TABLE ACCESS                    BY ROWID               DETAIL

            INDEX                           RANGE SCAN           DETAIL_PK

            INDEX                           RANGE SCAN           DETAIL_PK

          TABLE ACCESS                    FULL                   HEADER

          TABLE ACCESS                    BY ROWID               DETAIL

            INDEX                           RANGE SCAN           DETAIL_PK

            INDEX                           RANGE SCAN           DETAIL_PK

When you interpret the output, it is important to understand that all operations, as reported by the EXPLAIN PLAN, are basically operation/option combinations. There is no way to discuss all these combinations or the possible interpretations of all the EXPLAIN PLAN scenarios. As with many aspects of the IS industry—especially relational databases—the only true teacher is experience. However, here are some of the more common operation/option pairs that EXPLAIN PLAN returns:

FILTER

Eliminates rows from a table by conditions specified in the WHERE clause of a SQL statement

INDEX/RANGE SCAN

Accesses information in the table via a non-unique index (specified in the object_name column)

INDEX/UNIQUE

Accesses information in the table via a unique or primary key index (specified in the object_name column)

MERGE/JOIN

Combines two sorted lists of data into a single, sorted list; used on multi-table queries

SORT/GROUP BY

Sorts table data as specified in a GROUP BY clause of the SQL statement

SORT/JOIN

Performs a sort on the data from the tables before a MERGE JOIN operation

SORT/ORDER BY

Sorts table data as specified in an ORDER BY clause of a SQL statement

SORT/UNIQUE

Performs a sort on table data being returned and eliminates duplicate rows

TABLE ACCESS/FULL

Performs a full scan of the database table to locate and return required data

TABLE ACCESS/ROWID

Locates a row in a database table by using its unique ROWID

VIEW

Returns information from a database view

The EXPLAIN PLAN is a powerful tool for software developers because it enables them to ensure that their queries are properly tuned. Of course, changes made to database objects can adversely affect the results of the EXPLAIN PLAN, but they are useful in determining where the performance drains on an application will occur.

SQL*Trace and TKPROF

Oracle SQL*Trace and EXPLAIN PLAN are similar in that both are used for performance tuning at the application level and both show the manner in which the Oracle RDBMS executes a query. Unlike the EXPLAIN PLAN, which simply shows how the database optimizer chooses to execute a query to return specified information, SQL*Trace reveals the quantitative numbers behind the SQL execution. In addition to an execution plan, SQL*Trace reports factors such as CPU and disk resource usage. This is often considered a lower-level view of how a database query is performing, for it shows factors at both the operating system and RDBMS levels.

To use SQL*Trace, you must first set some parameters in the INIT.ORA parameter file:

MAX_DUMP_FILE_SIZE

Denotes the maximum size of an Oracle-generated trace file. This value is specified in operating system blocks (which may differ in size from database blocks).

SQL_TRACE

Causes a trace file to be written for every user who connects to the database when it is set to TRUE. Because of disk space requirements and database overhead, it should be used judiciously.

TIMED_STATISTICS

Causes the database to gather database statistics when this value is set to TRUE. It causes overhead of 4-8 percent.

USER_DUMP_DEST

The directory path where trace files will be written.

Once the INIT.ORA parameters have been set, you can invoke the SQL*Trace utility manually. If the SQL_TRACE parameter is set, it is not necessary to invoke SQL*Trace manually because a trace file is written automatically for every session; however, it is more common to call it manually. To invoke SQL*Trace, use either SQL or PL/SQL.

Use SQL when there is a specific query to be analyzed. For example,

% sqlplus

SQL> ALTER SESSION SET SQL_TRACE = TRUE;

SQL> @/tmp/enter_your_query.sql

SQL> ALTER SESSION SET SQL_TRACE = FALSE;

SQL> EXIT

You can either type in the query at the SQL prompt or source it in from an external file that contains the query.

In many cases, especially through applications such as SQL*Forms, it is necessary to invoke the trace facility by using PL/SQL. This is especially helpful when you are dealing with a third-party application for which the SQL syntax is not readily obvious. To invoke SQL*Trace, use the following PL/SQL statement:

BEGIN

   DBMS_SESSION.SET_SQL_TRACE (TRUE);

   /* PL/SQL code goes here */

   DBMS_SESSION.SET_SQL_TRACE (FALSE);

END;

As with SQL*Plus, the trace gathers information until tracing is deactivated or the session disconnects.

After the trace file has been generated, it must be converted into a readable format. Oracle provides the TKPROF utility to accomplish this task. Using TKPROF, you can convert the raw trace file into a readable report.


Locating a trace file in the dump directory can be quite a task, especially if many other files exist. Two tricks speed this process. The first is to use the UNIX command ls -lt to list the files in date order, with the newest file listed first. The other option is to use a SELECT USERNAME FROM DUAL as part of the trace and issue a grep USERNAME *.trc to find the trace file.
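
A hedged sketch of the second trick follows; the marker string FIND_ME is arbitrary, and the grep is issued in the USER_DUMP_DEST directory:

SQL> ALTER SESSION SET SQL_TRACE = TRUE;

SQL> SELECT 'FIND_ME' FROM DUAL;

Then, in the dump directory:

% ls -lt *.trc | head

% grep FIND_ME *.trc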

Once the trace file has been located, it is necessary to run the TKPROF utility against it to produce readable output. This information is statistical and shows how queries perform at the database and operating system level. The report produced by TKPROF contains CPU usage, disk utilization, and the count of rows returned by the query (or queries) enclosed in the trace file output. You can also have TKPROF return EXPLAIN PLAN information from each query in the trace. TKPROF is invoked as follows:

% tkprof ora_4952.trc ora_4952.log

This statement takes the trace output from the ORA_4952.TRC SQL*Trace file and generates its output in the file named ORA_4952.LOG. This particular statement does not generate an EXPLAIN PLAN for any of the queries contained in the trace file. Supplemental options enable you to control, to a certain extent, the information that is produced; an example invocation follows this list. They are

EXPLAIN

Enables you to specify a username and password that will generate an EXPLAIN PLAN for each query TKPROF analyzes

INSERT

Specifies a file into which TKPROF writes the SQL statements needed to store the trace file statistics in a database table

PRINT

Designates the number of queries in the trace file to examine—especially useful for trace files that contain many SQL statements

RECORD

Enables you to specify an output file that will contain all the statements in the trace file

SORT

Enables you to control the order in which the analyzed queries are displayed

SYS

Indicates whether to include queries run against the SYS tables (the data dictionary) in the trace output

TABLE

Specifies the schema.tablename to use when generating a report with the EXPLAIN option
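
For instance, a hedged sketch of an invocation that combines several of these options (the scott/tiger account and the option values are illustrative only) might look like this:

% tkprof ora_4952.trc ora_4952.prf explain=scott/tiger sort=execpu print=10 sys=no

This would generate an EXPLAIN PLAN for each statement using the SCOTT schema, sort the statements by the CPU time spent executing them, limit the report to the ten most expensive statements, and omit recursive SYS statements.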

When you run the trace file through TKPROF, it generates a report. For example,

************************************************************

select o.ordid, p.partid, o.qty, p.cost, (o.qty * p.cost)

from part p, order o

where o.partid = p.partid

call      count   cpu    elapsed  disk  query  current  rows

-------   -----  ----   --------  ----  -----  -------  ----

Parse         1  0.02       0.02     0      0        0     0

Execute       1  0.00       0.00     0      0        0     0

Fetch         4  0.03       0.03     1     20       10    50

-------   -----  ----   --------  ----  -----  -------  ----

total         6  0.05       0.05     1     20       10    50

Misses in library cache during parse: 0

Misses in library cache during execute: 1

Optimizer hint: CHOOSE

Parsing user id: 22  (MERLIN)

************************************************************

As with interpreting the utlbstat/utlestat report and the EXPLAIN PLAN, interpreting the results produced by TKPROF and SQL*Trace is more art than science. The following guidelines are helpful:

  • The totals for the query and current columns represent the logical I/O (in blocks) accumulated by the query. Compare them against the total of the rows column. Except in special cases, there should be only a few logical I/O blocks relative to the number of rows returned; the optimal ratio is approximately 2:1. (A worked example follows this list.)

  • In most cases, the execute value should be higher than the parse value. If the number of parses is the same as the number of executions, you should consider increasing the size of the shared pool.

  • Compare the amount of physical I/O (total of the disk column) to the amount of logical I/O (sums of the totals of the query and current columns). The logical I/O should be much higher than the physical I/O.
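
Applying these guidelines to the sample report above gives the following rough arithmetic:

logical I/O   = query + current = 20 + 10 = 30 blocks

physical I/O  = disk            = 1 block

rows returned = 50              (roughly two rows per logical block)

The logical I/O is low relative to the rows returned and far higher than the physical I/O, so this particular query appears healthy.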

Dynamic Performance (V$) Tables

Another useful tool for database tuning is the dynamic performance tables, also called the v$ tables (which are really views despite this name). The v$ views are views on the Oracle x$ tables, which are SGA-held memory structures created by the database at startup. These tables—and their views—are updated in real time as the database runs, and provide the DBA a good view of the current status of the database. Several third-party applications use the v$ tables to access statistical or performance monitoring data, and the views are used by the monitor component of the Oracle Server*Manager. After database creation, the v$ tables can be accessed only by the user SYS, who can make grants on them to other users.


Oracle Corporation has stated that it is not committed to supporting the v$ tables in future product releases. Although such a change is unlikely, given the number of products on the market that use these views, be aware of the possibility.

The v$ views are useful in many applications, such as backup and recovery, administrative monitoring, and performance tuning. Here are some of the more commonly used views as they apply to performance tuning:

V$DB_OBJECT_CACHE

Contains information about all the database objects that currently exist in the library cache of the SGA

V$FILESTAT

Contains the number of physical reads and writes taking place on each data file associated with the database

V$LATCH

Contains a current statistical picture of all the latches within the database

V$LATCHHOLDER

Contains the name of the current holder of each latch specified in V$LATCH

V$LATCHNAME

Contains the name of each latch in the V$LATCH view

V$LIBRARYCACHE

Contains statistics that represent the overall performance of the library cache area of the SGA

V$ROLLSTAT

Contains statistics on all the online rollback segments in the database

V$ROLLCACHE

Contains statistical information about the performance of the data dictionary cache of the SGA

V$SESSION_WAIT

Provides information on the events or tasks that sessions are currently waiting for—for example, when one session must wait for another to complete a task

V$SESSTAT

Contains current statistical information for each active database session

V$SESS_IO

Contains current logical and physical I/O information for each active database session

V$SGASTAT

Summarizes statistical information on the overall SGA

V$SQLAREA

Contains statistical information on the cursor cache of the SGA

V$STATNAME

Contains the names of all the statistics from V$SESSTAT

V$WAITSTAT

Contains information on block contention—active only when TIMED_STATISTICS is set to TRUE


Because statistics are gathered over a period of time, a database that has just started up does not have sufficient statistics to provide any sort of tuning information. Statistics gathered from a newly started database are misleading and cannot pinpoint actual performance problems. As a rule of thumb, a database should run for several hours before you gather any performance statistics.

Here is a query that uses the V$ views. It displays the name and current value of each database statistic. It is useful for quickly seeing how the database is performing.

select n.statistic# , n.name , s.value

from v$statname n , v$sysstat s

where n.statistic# = s.statistic#

and value > 0

/

There are many more V$ views that are not mentioned here. Many gather I/O, cache, and buffering statistics that are invaluable for performance tuning. Consult the Oracle7 Server Administrator's Guide and the Oracle7 Applications Developers Guide for more information on these views.

SQL*DBA—the predecessor to Oracle Server*Manager—is being made obsolete in mid-1996. It contains several scripts that the monitor utility uses. Located in the $ORACLE_HOME/rdbms/sqldba directory, they give you insight into how to use some of the V$ views. They also show how many of the V$ views have relationships with other V$ views.

Tuning Database SGA

Problems with database applications often involve memory and disk drives. When the CPU runs faster than the throughput of the input/output devices (such as the disk drives), the system is called I/O bound. When the throughput of the input/output devices is faster than the CPU, the system is called CPU bound.

Most systems are I/O bound, so it is easy to dismiss poor performance as a by-product of poor I/O throughput. Many DBAs, therefore, perform load balancing and contention analysis to optimize performance. What is often forgotten, however, is that poor memory management can aggravate many performance problems. Poor use of the available memory can contribute to substandard throughput. For example, sorts that could be done in memory run to disk, or the operating system pages and swaps processes to disk.

It is important, therefore, to understand the single most memory-intensive part of the Oracle RDBMS—the system global area (SGA). By definition, the SGA is simply a combination of buffers and caches stored in virtual memory that enables the database to function. To be efficient, Oracle performs many of its operations in memory and writes to disk only in bulk, minimizing the performance hit. This is good because, from the standpoint of software development, accessing the disk drive is "expensive" in terms of performance cost, whereas running a process in memory is "inexpensive."

As Figure 15.1 shows, the SGA is composed of three primary units:

  • The redo log buffer

  • The database buffer cache

  • The shared SQL area (the shared pool)


Figure 15.1. Architecture of the Oracle SGA.

It is important to ensure that the SGA is large enough to fit comfortably into the system's existing memory. It should also be small enough to coexist with other applications and not allocate more memory than it requires. It is equally important to make certain that enough shared memory and semaphores are available to support the database instance. Like all other aspects of performance tuning, memory management means balancing available resources against needed resources and reaching an effective compromise.

Examining the Current SGA

To tune the SGA of a database instance, you must determine the current size of the SGA. There are several ways to do this, including extracting the information from the DBA views or V$ tables or calculating it based on values in the INIT.ORA parameter file. The simplest method, however, is to issue the show sga command from Oracle Server*Manager. For example,

% svrmgrl

SVRMGR> connect internal

Connected.

SVRMGR> show sga

Total System Global Area     95243632 bytes

Fixed Size                      46384 bytes

Variable Size                70588480 bytes

Database Buffers             24576000 bytes

Redo Buffers                    32768 bytes

The size of the SGA remains constant as long as the database is running, although the DBA can change it when the database is restarted.

Changing the SGA Size

The sum of the parts equals the whole. Sizing the SGA is no exception. Changing the size of the SGA requires that values of some INIT.ORA parameters be modified, which in turn alters the overall size of the SGA. The following parameters control the size of the SGA:

DB_BLOCK_BUFFERS

The number of database blocks (of size DB_BLOCK_SIZE) allocated to the database buffer cache

LOG_BUFFER

The size (in bytes) of the redo log buffer

SHARED_POOL_SIZE

The size (in bytes) of the Shared SQL area


Some of the parameters require their arguments in bytes, whereas others require their arguments in blocks.

Once the size of an SGA buffer is set, the size of the SGA remains constant as long as the database continues to run. If the values of these three parameters are changed and the database is restarted, the changes take effect immediately. You should make a backup of the INIT.ORA parameter file before you make considerable changes. For example:

#

# SGA Size Parameters

#

# each database block is set to 8192 (8K) bytes

db_block_size = 8192

# buffer is 25MB (8192 bytes x 3200 blocks)

db_block_buffers = 3200

# buffer is 32K

log_buffer = 32768

# buffer is 50M

shared_pool_size = 52428800

You should also ensure that these values are always set appropriately high, but not inordinately high. The ramifications of changing the size of the SGA are discussed later in this chapter.

Database Block Size

The "building blocks" of any database are the size of its blocks. You set this value with the DB_BLOCK_SIZE parameter, and its range is operating system-specific—approximately 512 bytes to 16M.

This value determines the size of the data "pieces" transferred to and from the instance's SGA during an operation. The more data that the database can transfer in a single operation, the fewer operations it has to perform; consequently, the overall performance of the instance improves. The value of DB_BLOCK_SIZE should be a multiple of the operating system block size. On some systems, the default operating system block size is sufficient. On other systems, the best speed comes from twice that value. The best way to determine this is to generate a test instance, use different block sizes, and conduct benchmark testing. Always keep in mind the limits imposed by the operating system when you do this. As with all other areas of performance tuning, a trade-off occurs—setting the block size too high can actually degrade performance.

Once a database is created, the only way to change the value of DB_BLOCK_SIZE is to recreate the database. This makes sense. Whenever a database instance is created, Oracle physically allocates several database files of size X in which it will store various forms of information—the data dictionary, tables, indexes, and so on. These files are created with blocks of size DB_BLOCK_SIZE and are mapped so that the database can recognize each one. If the value of DB_BLOCK_SIZE is changed, the blocks no longer begin and end where the database expects. The RDBMS cannot correctly manipulate data if it cannot recognize the blocks.

To change the size of the blocks:

  1. Perform a full cold backup of the database.

  2. Use the exp utility to do a full database export on the database instance as the user SYS.

  3. Remove all the physical data files.

  4. Recreate the database with the new DB_BLOCK_SIZE.

  5. Use the imp utility to import the data from the original database instance into the new database instance as the user SYS.


When you create a database with a new DB_BLOCK_SIZE, examine each INIT.ORA parameter for possible ramifications. Many parameters are allocated in database blocks; increasing the size of each block might adversely affect the performance of the database.

This process is time-consuming and should be done only if the performance increase will be significant.
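
As a rough sketch, steps 2 and 5 might look like the following on UNIX, with a hypothetical SYS password and dump file name; the database is dropped and recreated with the new DB_BLOCK_SIZE between the two commands:

% exp sys/sys_password full=y file=full_db.dmp

% imp sys/sys_password full=y file=full_db.dmp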

Database Buffer Cache

The database buffer cache is the memory buffer within the SGA that holds copies of data blocks read from the physical database files, many of which have since been changed. There are as many buffers in this cache as the value of DB_BLOCK_BUFFERS. They include

Dirty buffers

Buffers that have been changed but not written back to disk

Pinned buffers

Buffers that are currently being accessed

Free buffers

Buffers that are available for use

Because it is desirable to have Oracle work within the SGA memory area as much as possible, the hit rate within the database buffer cache should be very high—greater than 70 percent. To determine the rate, execute the following query on the database buffer cache:

select name, value

from v$sysstat

where name in ('consistent gets', 'db block gets', 'physical reads')

/

The query returns three values, which you can plug into the following mathematical formula to obtain the current database buffer cache hit ratio:

hit ratio = 1 - (physical reads / (db block gets + consistent gets) )
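
For example, with hypothetical values of 3,000 physical reads, 10,000 db block gets, and 40,000 consistent gets:

hit ratio = 1 - ( 3000 / (10000 + 40000) ) = 1 - 0.06 = 0.94

A ratio of 94 percent is comfortably above the 70 percent guideline.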

If the hit ratio returned is less than 70 percent, you should seriously consider raising the number of blocks allocated to the database buffer cache of the SGA. To do that, increase the value of the INIT.ORA parameter DB_BLOCK_BUFFERS.

Shared Pool Size

The SGA shared pool area is composed primarily of two entities: the shared SQL cache and the data dictionary cache. Each one serves a distinct function. The shared SQL cache is used to retain previously executed queries, procedures, and other SQL-based operations in the SGA. Thus, frequently-executed SQL statements reside in memory and do not have to be reparsed by the database before each execution. The data dictionary cache contains calls made to the data dictionary, which must be done before every single action in the database. In previous versions of Oracle, the data dictionary cache had individually-tunable parameters, but they are now encompassed under the shared pool.

As with the database buffer cache, the efficiency of the shared pool cache is determined by a hit ratio that indicates how often the Oracle RDBMS can process information in memory and how often it must retrieve information from disk. The database should work as much from memory as possible without going to disk. Although that is not always practical, you should examine the various caches to ensure that their values are in acceptable ranges.

The following script compares the number of pins (how often an item was executed) to the number of reloads (how often a miss occurred):

select sum(pins) pins, sum(reloads) reloads

from v$librarycache

/

Use the following formula to determine the ratio of reloads to pins. If the result is 1 or greater, you need to tune the shared SQL area by increasing the size of the shared pool.

ratio = (reloads / pins) * 100
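
For instance, with hypothetical totals of 45,000 pins and 300 reloads:

ratio = ( 300 / 45000 ) * 100 = 0.67

A result well under 1 suggests that the shared SQL area is adequately sized.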

Similarly, the data dictionary cache determines how often the RDBMS goes to disk when it accesses information on users, privileges, tables, indexes, and so on. Most database systems reuse the same database objects repeatedly. Therefore, if a high degree of disk access takes place for operations that run the same programs, the information is likely being aged out too often. The same rule holds true for the other shared pool areas.

The following code segment enables the DBA or the user to retrieve the number of gets (information requests on an object) and getmisses (requests that were not satisfied from the cache).

select sum(gets) gets, sum(getmisses) getmisses

from v$rowcache

/

The formula for the ratio of gets to getmisses is

ratio = ( getmisses / gets) * 100

If the ratio is greater than 10 percent, you should consider increasing the value of the SHARED_POOL_SIZE parameter. It is usually a good idea to have a large shared pool. In a few cases, however, this can adversely affect the database.

Sort Area Size

During a single day, a database instance performs many operations that involve sorting. They include everything from an explicit command to sort (such as the SQL ORDER BY or GROUP BY option) to an implicit command (such as creating an index to a database table). Working in memory is faster than working on disk, and sorting is no exception.

Whenever an operation is undertaken that requires sorting, Oracle attempts to do it in the memory of the user process that requests the sort. Sorts are constrained by the following INIT.ORA parameters:

SORT_AREA_SIZE

The maximum amount of space (in bytes) that a user process has available to perform a sort

SORT_AREA_RETAINED_SIZE

The minimum amount of space (in bytes) that a user process will ever have available

Exceeding SORT_AREA_SIZE causes a sort to disk to occur.
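
Both parameters are ordinary INIT.ORA entries; the values below are illustrative only:

# hypothetical sort settings (both values are in bytes)

sort_area_size = 262144

sort_area_retained_size = 65536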

To determine whether a sort is performing efficiently, you must first determine the level—memory or disk—at which the sort occurs. For example,

select name, value

from v$sysstat

where name like 'sort%'

/

produces output similar to

NAME                                                 VALUE

--------------------------------------------------   -----

sorts (memory)                                         370

sorts (disk)                                             7

sorts (rows)                                          1997

Interpreting the output from the sort statistics is not as simple as calculating a hit ratio. Obviously, the lower the number of sorts to disk, the better. However, many sorts to disk do not necessarily mean that the database is not sorting optimally. You should consider whether you can safely raise the value of SORT_AREA_SIZE without causing an adverse impact on the database. Likewise, the disk sorts might come from batch jobs that process an inordinate amount of data; because of the volume processed, no realistic SORT_AREA_SIZE is large enough to eliminate those sorts to disk.
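
As a rough gauge, the sample output above works out to:

disk sort ratio = 7 / ( 370 + 7 ) = 0.019   (just under 2 percent)

A figure this low would not normally justify raising SORT_AREA_SIZE on its own.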

When you deal with sorts, it is as important to know why certain results occur as it is to know that they occur. Watched diligently and with knowledge of current operations, the sorts on a database are low-maintenance items.

Ramifications of SGA Changes

It is relatively easy to change the size of the buffers in the SGA, but you must consider the ramifications of making changes.

The most obvious benefit of increasing the size of the SGA is that the larger the SGA, the more information can be processed in memory. By enabling the database to have most of its data cached, physical disk I/O is minimized, which results in a system that is constrained more by the speed of the processor than by the speed of the I/O devices. The law of diminishing marginal utility applies, however. Depending on the size of the database and the amount of activity being performed, increasing the size of the SGA buffers ceases to have any positive effect after a certain point. Once this occurs, the database begins to hoard memory that could be better used by the operating system or other applications.

Another concern in tuning a database SGA is failing to consider that some parameters allocate memory for every connection instead of only once. Consider, for example, the scenario in which the DBA wants to increase the size of the sort area. After some investigation, he concludes that having a 10M sort area would greatly improve performance because many sorts are going to disk. This system also experiences a high level of user activity—500 users. Instead of creating a single 10M sort area, the DBA has actually created 500 10M sort areas. The total memory cost is approximately 5G—more RAM than most systems have.

Don't forget to factor in user processes and other non-Oracle applications that might reside on the system. DBAs often think that they are the only people on a hardware platform. The Oracle Installation and Configuration Guide has charts that enable you to calculate memory requirements based on the products being used. It is far better to make adjustments before you create an instance. Otherwise, you must expend the time and frustration of tracking down SGA settings that artificially induce paging and swapping onto the system.

Consider the following guidelines when you adjust the SGA and its associated buffers:

  • Always make certain that the SGA fits comfortably in available memory, not overall system memory.

  • Never make buffers larger than they need to be. Allow, of course, for growth. However, if you are actively monitoring the system, you can increase the values as needed without wasting space in memory.

  • Watch out for database parameters that allocate memory for every user, such as SORT_AREA_SIZE. Don't set them so high that they cause the system to begin paging and swapping.

  • Adequately plan changes. Remember the proverb, "An ounce of prevention is worth a pound of cure."

  • Be wary of changes that affect other parameters. Changing the database block size, for example, can affect the values of several other INIT.ORA parameters and cause a database instance to consume more memory than expected.

Database instances require more disk space as they grow larger. The same is true with memory. A growing database will eventually outstrip the memory available on the system. Don't make the mistake of ignoring possible memory problems when you do a performance analysis.

Contention Issues

DBAs often ignore the physical aspects of a system. With all the logical structures that a DBA must deal with on a day-to-day basis, it is easy to forget about the physical elements that support them, such as SCSI cards, bandwidth, or the I/O bus. Whenever you fail to consider the physical elements, contention can occur within the database.

Like spoiled children, database elements fight over resources. This is the most basic definition of contention. When contention happens, the database must wait for an event to occur. This event—such as writing a block of data to a physical device or locking a row inside a database table—causes an appreciable slowdown in database performance. It is the responsibility of the DBA and others, such as the system administrator, to work with the database to minimize contention. When you minimize contention, the database performs at consistent, efficient speeds.

I/O Contention and Load Balancing

Contention among physical storage devices is the most common type of contention. Each disk drive has heads that travel back and forth across the magnetic medium (the disk) to read and write information. A database is made up of several physical data files, many of which reside on the same physical disk, so it is easy to see how contention can occur. If the database requests access to several data files on the same disk, the result is contention as the drive head moves across the disk to the first location and accesses the file, moves to the second location and accesses the file, and so on. Fortunately, you can minimize I/O contention.

It is important to understand types of database files and the types of operations performed on them. Figure 15.2 compares the types of files and operations.


Figure 15.2. File and access.

In a perfect world, you could place each database file on a separate disk. On smaller database instances, this might even be possible, but they are the exception. In practice, most databases have multiple files and a limited number of disks on which to place them—generally just the amount of space that is needed. So it becomes important to work with the system administrator to determine the optimal layout of the physical database files.

As Figure 15.2 shows, each database file has specific operations. Redo logs, for example, handle straightforward, sequential output. Database files handle intensive read and write operations. You should put the following files on a physical disk separate from the other database files:

  • Rollback segments

  • Redo logs

  • Archive logs

The files are separated so that access to these areas is not in contention with access to other files, such as database files and the control file. It is important to optimize the physical layout so that contention is minimized between the ARCH, DBWR, and LGWR processes. Because of the information generated for each transaction, it is usually best to place rollback segments on their own disk.


Some sites that use heavy OLTP create separate tablespaces for each rollback segment and spread them across disks similarly to how control files are distributed. This minimizes the I/O on a single disk and provides for less I/O contention when rollback segments are accessed.

One of the most important things that you can do to achieve I/O balancing is to put table and index data on separate physical devices, as shown in Figure 15.3. If table and index data exist on the same disk drive, any type of table access that uses indexes doubles the I/O operations on a single disk device. Take, for example, a SELECT operation. The index must be accessed to determine the fastest way to access table information, and then the table itself is accessed. This causes all the operations to wait on the access to the table or the index, and it drastically cuts throughput on all operations, especially those with many users who access data in the same tables simultaneously. By splitting the tables and the indexes across disk drives, the disk drive heads can work in tandem with the database to quickly access table data and return it to users.


Figure 15.3. Table and index splitting.
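
A minimal sketch of this split, assuming a hypothetical ORDERS table and a disk mounted at /u07 that holds no table data, is to place the index in its own tablespace whose data file lives on a separate device:

SVRMGR> create tablespace indexes_ts

     2>   datafile '/u07/oradata/norm/indexes01.dbf' size 100M;

Statement processed.

SVRMGR> create index orders_partno_idx on orders (partno)

     2>   tablespace indexes_ts;

Statement processed.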

Splitting tables and indexes is the first step in setting up efficient throughput and minimizing I/O contention, but it is hardly enough to ensure optimal performance. You must pinpoint which database files are accessed most heavily and spread them across disks to balance the load. By issuing the following query as the SYS user or as another user who has access to the V$ views, you can determine the current I/O load on the database files. It is important to take these readings several times over a span of time to ensure accurate statistics. For example,

SQL> select d.name, f.phyrds, f.phywrts
  2  from v$datafile d, v$filestat f
  3  where d.file# = f.file#
  4  /

NAME                                         PHYRDS    PHYWRTS
---------------------------------------- ---------- ----------
/u04/oradata/norm/system01.dbf               383336      23257
/u20/oradata/norm/rbs01.dbf                   13740     332604
/u05/oradata/norm/temp01.dbf                   3037     147963
/u08/oradata/norm/tools01.dbf                  5338        243
/u05/oradata/norm/users01.dbf                     0          0
/u03/oradata/norm/aold01.dbf                 133879      63698
/u06/oradata/norm/aolx01.dbf                  59108      91757
/u06/oradata/norm/apd01.dbf                   68733       8119
/u09/oradata/norm/apx01.dbf                   34358      29941
/u06/oradata/norm/ard01.dbf                  107335      21018
/u09/oradata/norm/arx01.dbf                   28967      13770

Unfortunately, it is difficult to know what the load on the disks will be before the database is implemented. For this reason, once you determine that a significant degree of contention is occurring on a single disk, move the affected database file. Copy the file at the operating system level and then use the alter database rename file command against a mounted—but not open—database instance. In this way, you can keep the load on the disk drives balanced.
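For example, to relocate the heavily used apd01.dbf file to a quieter disk, a minimal sketch follows. The target directory /u11 is an assumption; the file must first be copied to that location at the operating system level while the instance is shut down.

SVRMGR> startup mount
SVRMGR> alter database rename file
     2>   '/u06/oradata/norm/apd01.dbf' to '/u11/oradata/norm/apd01.dbf';
SVRMGR> alter database open

Afterward, rerun the I/O query to confirm that reads and writes are spread more evenly across the disks.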

There are also situations in which key database files take the brunt of the I/O, and moving them to another disk might not be possible or might not provide the best solution. For situations like these, Oracle provides striping—building a tablespace from several smaller database files that can be distributed across multiple disks instead of using one large file. For example:

SVRMGR> create tablespace dba_ts
     2>   datafile '/u03/oradata/norm/dbats01.dbf' size 50M,
     3>            '/u05/oradata/norm/dbats02.dbf' size 50M,
     4>            '/u07/oradata/norm/dbats03.dbf' size 50M,
     5>            '/u09/oradata/norm/dbats04.dbf' size 50M
     6>   /
Statement processed.

When you distribute a tablespace across several database files on several disks, you stripe it—there is a 50M stripe of data on each disk. Striping enables the database to distribute its data across the disks, and it speeds I/O access by minimizing contention against disk drives.
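If the tablespace already exists, additional stripes can be added without recreating it. A minimal sketch, assuming a fifth disk (/u11) is available:

SVRMGR> alter tablespace dba_ts
     2>   add datafile '/u11/oradata/norm/dbats05.dbf' size 50M
     3>   /
Statement processed.

New extents can then be allocated from the new file as the existing files fill.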

Rollback Segment Contention

One of the features of an Oracle7 database is the ability to undo, or roll back, uncommitted changes to the database. In short, a transaction that physically changes database data—an INSERT, UPDATE, or DELETE SQL statement—produces undo information that Oracle writes to its online rollback segments. Many DBAs fail to realize that SELECT statements use rollback segments as well: because Oracle provides read consistency for queries, if a row has been changed but not committed when a query is issued, the RDBMS returns the prior version of the data from the rollback segments. Rollback segments are also used to recover when an instance is forced down or ends with an abnormal termination.

Rollback segment contention can occur whenever a transaction accesses a block within a rollback segment that another transaction needs. Use the following query to determine the amount of contention being experienced within the rollback segments:

select r.name, s.gets, s.waits
from v$rollstat s, v$rollname r
where s.usn = r.usn
/

The following ratio compares how often a rollback segment was accessed (gets) with how often the database had to wait to access information within a rollback segment (waits):

ratio = ( waits/gets ) * 100
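As a convenience, the formula can be folded into the query itself. A minimal sketch; the pct_waits column alias is simply illustrative:

select r.name, s.gets, s.waits,
       round(decode(s.gets, 0, 0, s.waits / s.gets) * 100, 2) pct_waits
from v$rollstat s, v$rollname r
where s.usn = r.usn
/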

If the result is 2 or greater, there is contention within the rollback segments. Create more rollback segments, which reduces the chance that transactions hit the same rollback segment blocks at the same time. This reduces contention, but it cannot eliminate it entirely. Here are some guidelines for the number of rollback segments that you should use:

  • If the number of concurrent transactions is less than 16, use four rollback segments.

  • If the number of concurrent transactions is less than 32 but at least 16, use eight rollback segments.

  • If the number of concurrent transactions is 32 or more, use one rollback segment for every four transactions, up to a maximum of 50.

There is also the question of how large to make rollback segments. This is less complicated because you need to consider only two environments: OLTP and non-OLTP. OLTP (On-Line Transaction Processing) environments are those in which users process a large volume of short database transactions, such as with an order entry system. OLTP environments do better with a large number of small rollback segments. For non-OLTP environments, assign larger rollback segments so that data is retained longer for long transactions and long queries. It is acceptable to mix large and small rollback segments and to select explicitly which rollback segments to use.


When you use the SET TRANSACTION USE ROLLBACK SEGMENT statement, the specified rollback segment is used only for the life of the transaction. The transaction lasts until a commit or a rollback is issued, after which the database again chooses the rollback segment on its own for the next transaction. If you need to use the same rollback segment after a commit or rollback, you must execute the SET TRANSACTION USE ROLLBACK SEGMENT statement again.
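A minimal sketch of directing a long-running batch update to a large rollback segment; the segment name large_rbs01 and the table and columns in the UPDATE are assumptions for illustration only:

SQL> commit;
SQL> set transaction use rollback segment large_rbs01;
SQL> update order_history set archived = 'Y' where order_date < '01-JAN-95';
SQL> commit;

SET TRANSACTION must be the first statement of the transaction, which is why the example commits first.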

Like tables, rollback segments are constrained by the maximum number of extents (maxextents) to which they can grow and by the amount of physical space available in their tablespace. Once these limits are reached, the rollback segment cannot extend further. Therefore, if a rollback segment or its tablespace is sized incorrectly, the amount of rollback space a transaction needs can exceed what the rollback segment can provide.

Redo Log Contention

There is a buffer area in the SGA (the redo log buffer) for redo information. This information is stored in memory and regulated through the use of two latches, or RAM-level locks. The redo allocation latch controls the allocation of space for writing redo information to the buffer. The redo copy latch is used to copy information into the buffer.

Wait latch requests wait for the latch, sleep, and then request it again until the latch is acquired. Immediate latch requests, conversely, do not wait; they continue processing. Use the following query to determine the status of both types of latch requests:

select name, gets, misses, sleeps, immediate_gets, immediate_misses
from v$latch
where name in ('redo allocation', 'redo copy')
/

Information about wait requests appears on the left, immediate requests on the right. After you execute this query—as SYS or another user with access to the V$ views—calculate the contention values:

immediate contention = ( immediate_misses / (immediate_gets + immediate_misses) ) * 100

wait contention = ( misses / (gets + misses) ) * 100

If either value is greater than 1, contention is occurring for that latch. To alleviate contention for the redo allocation latch, reduce the amount of time that a process holds the latch by lowering the value of the LOG_SMALL_ENTRY_MAX_SIZE parameter in the INIT.ORA parameter file. To alleviate contention for the redo copy latches, increase the number of latches by raising the value of the LOG_SIMULTANEOUS_COPIES parameter.
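A minimal INIT.ORA sketch follows; the values shown are purely illustrative and should be derived from your own latch statistics and CPU count:

# Redo entries larger than this size (in bytes) must use a redo copy latch,
# so a smaller value shortens the time the redo allocation latch is held.
log_small_entry_max_size = 80

# Number of redo copy latches; a common rule of thumb is about
# twice the number of CPUs on the server.
log_simultaneous_copies = 8

The instance must be restarted for changes to these parameters to take effect.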

Checkpoints

A checkpoint is an event that occurs within the database whenever information is written from the caches within the SGA to disk. This happens periodically to bring the data files and control files in sync with the SGA. Disk I/O slows processing, and that holds true for checkpoints: the database must synchronize the contents of its memory and data files, but synchronizing too frequently reduces overall performance.

Checkpoints generally occur at various intervals that the DBA can control, as a result of certain events that the DBA cannot control, or when the DBA forces them.

Checkpoints occur based on one of two intervals: quantity or time. The LOG_CHECKPOINT_INTERVAL parameter in the INIT.ORA parameter file specifies the number of redo log blocks the database writes before a checkpoint occurs. Likewise, when the amount of time specified in the LOG_CHECKPOINT_TIMEOUT parameter elapses since the last checkpoint, a checkpoint occurs. These values should usually be set to minimize checkpoints; in other words, set LOG_CHECKPOINT_INTERVAL to a value greater than the size of the largest redo log, and set LOG_CHECKPOINT_TIMEOUT to zero, which disables it.
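A minimal INIT.ORA sketch, assuming 10M redo logs and a 512-byte operating system block size (both assumptions; adjust to your own configuration):

# 10M / 512 bytes = 20480 blocks, so 25000 ensures the interval is
# never reached before a log switch forces a checkpoint anyway.
log_checkpoint_interval = 25000

# Disable time-based checkpoints.
log_checkpoint_timeout = 0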

Checkpoints also occur whenever the database is shut down (normal or immediate) or when a redo log switch occurs. The only tuning that you can do at this level is to create larger redo logs so that log switches occur less frequently.

To force a log switch, and with it a checkpoint, issue the following SQL command:

alter system switch logfile;

You might have to force a log switch when you perform maintenance on redo logs, such as when you relocate them from one physical disk to another.
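Oracle7 can also force a checkpoint without switching logs, which can be useful immediately before such maintenance. A minimal sketch:

alter system checkpoint;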

Database Objects

Performance tuning does not stop with checking buffer caches and adjusting parameters in the INIT.ORA file. You also need to optimize the database's objects. This includes monitoring the objects for changes in condition, such as fragmentation, that can adversely impact performance. Unlike memory and contention problems, which generally remain stable unless a change in the database occurs, many database objects must be tuned on a regular basis.

Tables and Indexes

The database objects that cause the most problems are tables and indexes. Because data is constantly inserted into, updated in, and deleted from tables and indexes, problems such as chaining, migration, dynamic extension, and fragmentation occur regularly. Because they occur so often, most DBAs wait until these problems exceed a threshold, or they address them on a maintenance schedule.

Migrated and Chained Rows

After the Oracle RDBMS places information in a row, the data remains in that row—and uses its allocated space—until a change occurs, such as an UPDATE that causes the row no longer to fit in its database block. The RDBMS then searches for a free block in which the row will fit; if it locates one, it moves the row to the new database block, and the row is said to have migrated. On the other hand, if a single database row is too large to fit in any single database block, the Oracle RDBMS stores the pieces of the row across several database blocks. This is called chaining.

Migrated and chained rows reduce performance for input and output operations. This is because the data spans multiple blocks. Instead of being able to return a single row in a single I/O operation, the database must perform multiple reads to return one row. Depending on the number of rows being returned and the number of rows that are chained or migrated, this can double or even triple the number of reads in a database operation.

Oracle provides a tool for detecting chaining and migration within the database: the SQL command analyze—also used by the cost-based optimizer—can list chained rows. Before you run it, however, you must run the utlchain.sql script provided with the database. The analyze command stores the information it returns in a table called chained_rows and cannot run unless that table exists. The utlchain.sql script creates this table:

SQL> @$ORACLE_HOME/rdbms/admin/utlchain

SQL> analyze table table_name list chained rows;
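Once one or more tables have been analyzed, a quick summary of the affected rows can be pulled from chained_rows. A minimal sketch:

SQL> select table_name, count(*)
  2  from chained_rows
  3  group by table_name
  4  /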

You must perform this operation on every table that you want checked for chaining or migration. If any rows appear in the chained_rows table, you should remove them. To remove chained or migrated rows,

  1. Create a copy of the table in which chained or migrated rows occur, and place the chained or migrated rows into the new table.

    create table new_table as
    select * from table
    where rowid in
    ( select head_rowid
    from chained_rows
    where table_name = 'TABLE' );

  2. Delete the chained or migrated rows from the original table.

    delete from table
    where rowid in
    ( select head_rowid
    from chained_rows
    where table_name = 'TABLE');

  3. Transfer the rows in the new table back to the original table.

    insert into table
    select * from new_table;

  4. Drop the new table.

    drop table new_table;

You should now analyze the table again. Rows that remain are chained rows; rows that were removed are migrated rows. To remove the chained rows, recreate the table with a higher pctfree value. The steps are

  1. Perform an export on the database table.

    exp file=/tmp/filename compress=y indexes=y grants=y constraints=y tables=sample userid=user/password

  2. Drop the current database table with the drop table command.

  3. Recreate the database table with a higher pctfree value.

    create table sample
    ( first_column VARCHAR2(10),
      second_column VARCHAR2(10),
      . . .
    )
    pctfree 90
    storage (initial 1024K next 1024K minextents 1
             maxextents 249);

  4. Import the data into the recreated table with the imp utility.

    imp file=/tmp/filename full=y ignore=y userid=name/password


Always be sure to take a database backup before performing operations that require you to make changes to database objects.

Analyze the table again. If chained rows still exist, they might be impossible to eliminate without recreating the database with a new database block size. It is sometimes impossible to eliminate all chaining from a database, especially in databases that store information in LONG or RAW column types.

Dynamic Extension

Whenever you create a table, you must decide how large it should be, how fast it should grow, and how often its data will change. Unfortunately, the only way to gauge a table's growth is to rely on experience and trends. For that reason, you must deal with dynamic extension.

Every database object is created with an initial size. Information is added to the table or index until there is no more room left in the initial space allocation. Then the size of the table is incremented by a fixed amount. This is called dynamic extension.


Increases in table or index size through dynamic extension do not usually occur in contiguous segments of storage.

Allocation is based on the arguments passed in the storage clause of the create table or create index SQL commands. If no storage clause is specified, the default storage parameters defined in the tablespace definition are used. Consider the following statement:

storage (initial x next y minextents a maxextents b pctincrease m)

The arguments control how large each extension is and how far the object can extend. To determine the initial size of the database object, multiply the size of the initial extent by minextents. When the amount of data in the table or index exceeds the initial allocation, another extent—of size next—is allocated. This process continues until the free space in the tablespace is exhausted or until the maximum number of extents (maxextents) is reached.

Dynamic extension hurts database performance because it generates recursive calls: requests against the data dictionary that cannot be satisfied from the cache. Use the following query to determine whether excessive dynamic extension is occurring:

select owner, segment_name, sum(extents)
from dba_segments
where segment_type in ('TABLE', 'INDEX')
group by owner, segment_name
order by owner, segment_name
/
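To flag only the objects that are nearing their limit, the same view can be queried against its max_extents column. A minimal sketch; the margin of 5 extents is arbitrary:

select owner, segment_name, extents, max_extents
from dba_segments
where segment_type in ('TABLE', 'INDEX')
  and extents >= max_extents - 5
order by owner, segment_name
/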

Monitor the extents closely to ensure that the number of extents is not too close to the value set in maxextents. It is necessary to recreate the table periodically with a single extent. The steps are

  1. Export the database table with the exp utility. Don't forget to export indexes and constraints.

    exp file=/tmp/filename compress=y indexes=y grants=y constraints=y tables=sample userid=user/password

  2. Drop the database table with the drop table command.

  3. Recreate the table with an initial extent that is large enough to hold the entire table.

    create table sample
    ( first_column VARCHAR2(10),
    second_column VARCHAR2(10),
    . . .
    )
    storage (initial 50M next 1024K minextents 1
    maxextents 249);

  4. Import the data with the imp utility:

    imp file=/tmp/filename full=y ignore=y userid=name/password

Indexes are much simpler. The steps are

  1. Drop the index with the drop index command.

  2. Recreate the index with the create index command, making sure that the initial extent is large enough to accommodate the entire index, as sketched below.
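A minimal sketch of the two index steps; the index name, column, and storage sizes are illustrative only:

drop index sample_idx1;

create index sample_idx1 on sample (first_column)
storage (initial 20M next 1024K minextents 1 maxextents 249);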

If you do not resize tables and indexes periodically—and correctly—a table can "max out," meaning it has extended to the limit dictated by the maxextents storage parameter. To fix it, issue the following SQL command:

alter table table_name storage (maxextents new_maximum);

The maximum extent for a database object is determined by the block size and the operating system. Consult the Oracle Installation and Configuration Guide to determine what limits are imposed. Not knowing the maximum extension of a database object and not adequately monitoring database objects as they approach this size can effectively shut down the database for production users.

Fragmentation

Fragmentation of the tablespaces on which database objects reside also reduces performance. Tablespaces are initially allocated as contiguous units of storage, and database objects are created within them as contiguous units of storage. As objects extend, however, the new extents are generally not contiguous with the previous blocks of data.

As tables are created, dropped, and extended, the free space in a tablespace breaks up into smaller, noncontiguous pieces. For example, a tablespace might have 1M of free space—but all in 1K pieces. If you issue a create table command with an initial extent of 50K, it fails because the database cannot allocate a contiguous area in which to create the table. This is an especially common scenario in environments in which tables or indexes are frequently added and dropped.

To check the amount of free space available and the level of fragmentation on a tablespace, issue the following query:

select tablespace_name, sum(bytes), max(bytes), count(tablespace_name)
from dba_free_space
group by tablespace_name
order by tablespace_name
/

The results of this query tell how much free space is available within each tablespace (sum), the size of the largest contiguous extent of free space (max), and how many separate extents of free space exist in the tablespace (count). If the count of free extents grows beyond 10 to 15, you should defragment the tablespace. The steps (with steps 3 and 4 sketched after the list) are

  1. Export data from all the tables in the tablespace with the exp utility. For indexes, capture the SQL required to recreate them.

  2. Drop all the objects in the tablespace with the drop table and/or drop index commands.

  3. Drop the tablespace with the drop tablespace command.

  4. Recreate the tablespace with the create tablespace command.

  5. Import the data back into the tablespace with the imp utility. The import recreates the database tables. Manually recreate indexes by using their SQL scripts.
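A minimal sketch of steps 3 and 4, assuming a tablespace named users_ts with a single 100M data file (both assumptions):

drop tablespace users_ts including contents;

create tablespace users_ts
datafile '/u05/oradata/norm/users02.dbf' size 100M
default storage (initial 1024K next 1024K pctincrease 0);

The including contents clause drops any objects remaining in the tablespace, so make certain the exports from step 1 are complete and valid before issuing it.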

Just as chained and migrated rows reduce database performance, fragmentation reduces performance by causing the disk drive head to move excessively when the database reads a table. Obviously, fragmented tablespaces should be defragmented whenever possible. To minimize fragmentation, create and drop short-lived tables and indexes (especially those used as temporary or development objects) only in tablespaces set aside for that purpose.

Views

Views are SQL statements that are treated as virtual tables. They enable you to hide the details of complex table joins and filters so that the code does not have to be repeated in every statement that performs a similar operation. It is important, however, to keep in mind that the view's underlying SQL is not executed until a SQL statement is issued against the view.

The best performance tuning that can be done on a view is preventive in nature. Run each view that you create through an EXPLAIN PLAN, and analyze it for performance. Except in rare circumstances, views that are inefficient and take a long time to return data should not be used. If a view that previously performed acceptably suddenly begins to perform sluggishly, you should run another EXPLAIN PLAN or execute SQL*Trace against a query on the view.

Usually, views fail to perform as expected when changes are made—such as adding or removing indexes—or when the query is not properly optimized for a large amount of data.

Triggers

Another new feature of Oracle7 that presents a tuning challenge is database triggers. If you have worked with SQL*Forms/Oracle*Forms or other event-driven processing, you are familiar with triggers. If you have not, they can be difficult to understand.

A trigger fires when a certain event happens—such as before or after a database table is modified—at which time a section of PL/SQL code is executed. If the SQL contained within the PL/SQL segment is tuned—based on an EXPLAIN PLAN—triggers work well. Triggers can cause unexpected problems, however, generally when they are written by an inexperienced developer or have not been tested adequately.
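As a minimal sketch of the idea, the following trigger writes an audit row after every update of the sample table; the sample_audit table and its columns are assumptions for illustration only:

create or replace trigger sample_audit_trg
after update on sample
for each row
begin
  -- record the new value and when it changed
  insert into sample_audit (new_value, changed_on)
  values (:new.first_column, sysdate);
end;
/

Because the trigger body runs once for every updated row, any SQL it contains is executed that many times, which is why poorly tuned trigger code is so costly.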

A common problem with triggers is an infinite loop. One trigger activates another trigger, which activates another trigger, and so forth, until one of the triggers causes a change that sets off the original trigger—starting the process again. These errors are difficult to find and can create phantom problems. Adequate research and testing before implementing new triggers goes a long way toward heading off trigger problems.

Database Locking

Database locking is important to the DBA, because locks can slow a database. This is a frustrating performance problem to locate because it is often not obvious.

Locks within the database prevent database users in a multi-user environment from changing the same data simultaneously. Database locks ensure the integrity of the data within a database by enforcing concurrency and consistency. Concurrency means that the database ensures that users can read data from a database block without worrying whether the data is currently being written to the database block; a user writing data must wait for the write operations that precede it to complete. Consistency means that a database query returns data exactly as it appeared when the query was originally executed; changes made after the query was issued are not returned.

Types and Classes of Locks

An Oracle7 database has two types of locks: data dictionary (DDL) locks and data manipulation (DML) locks. A DDL lock ensures that the layout of a database object—its definition—does not change while the object is used within a database query. A DML lock protects data that multiple users are trying to access simultaneously.

All transactions fall into one of two categories: exclusive or shared. Exclusive transactions do not enable other users to access the data. Shared transactions enable data to be shared with other users, although they cannot change it. Locks are released whenever a commit or a rollback occurs.

Whenever a SQL statement accesses data within a table, a DDL lock is acquired on the table. The lock prevents the DBA from making changes to the table's definition while it is in use.

DML locks, on the other hand, are employed against database tables. The five types of DML locks are

RS: Locks a specific row in a database table in shared mode, enabling other database queries to access the information—for example, a SELECT. . .FOR UPDATE OF. . . operation

RX: Locks a specific row in a database table in exclusive mode, restricting access to the row to the database session that acquired the lock—for example, an UPDATE operation

S: Locks a table in shared mode and prohibits activities other than queries against the table—for example, a LOCK TABLE. . .IN SHARE MODE operation

SRX: Locks a table in shared mode and provides row-level locks as required to modify and update data—for example, a LOCK TABLE. . .IN SHARE ROW EXCLUSIVE MODE operation

X: Locks an entire table, preventing access to the table by any session except the current one—for example, a LOCK TABLE. . .IN EXCLUSIVE MODE operation

Unresolved Locking Issues

A common database locking situation is unresolved locking, also called a deadlock. In a deadlock, two database operations wait for each other to release a lock.

Oracle7 is designed to detect deadlocks, but it is not always successful. You might encounter transactions that have acquired locks and are waiting on each other to free those locks so that they can proceed. Unfortunately, the only sure way to resolve these problems is to detect them as they occur and deal with them individually. Oracle recommends two ways to avoid deadlocks:

  • Applications should acquire locks in the same order.

  • Always use the lowest necessary lock. For example, don't lock an entire table when only a row needs to be updated.

To resolve a deadlock, you must kill one of the processes—or both—at either the database or operating system level.
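At the database level, this is done by killing the session that holds the offending lock. A minimal sketch; the username and the '12,345' SID and serial number are illustrative values taken from V$SESSION:

SQL> select sid, serial#, username from v$session where username = 'SCOTT';
SQL> alter system kill session '12,345';

The killed session loses its uncommitted work, so identify the correct session carefully before issuing the command.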

Checking the Locking Situation

Oracle provides a utility script that checks the current lock state of the database. This script, utllockt.sql, prints a tree that shows which locks are held and which processes are waiting on them. Run it as follows:

SQL> @$ORACLE_HOME/rdbms/admin/utllockt

You can query the DBA_WAITERS view to determine which sessions are waiting on locks and which sessions hold them. It does not show all the sessions holding locks—only the ones that cause wait states—so it enables you to view just the sessions that might be causing locking problems:

select waiting_session, holding_session, lock_type, mode_held, mode_requested
from dba_waiters
/

Other views provide additional locking information. The information that each one shows is as follows:

DBA_BLOCKERS: Sessions that have another session waiting on a lock and are not in a wait state themselves

DBA_DDL_LOCKS: DDL locks held and requested within the database

DBA_DML_LOCKS: DML locks held and requested within the database

DBA_LOCKS: All locks held or requested within the database

DBA_WAITERS: Sessions that are waiting for database locks and the sessions that currently hold them

V$ACCESS: Locked database objects and the sessions that are accessing them

V$LOCK: Database locks

V$SESSION_WAIT: Database sessions that are waiting

Unlike other performance tuning and optimizing operations, monitoring locks is usually reactive. Locks are not a problem until a deadlock or similar event occurs. Locking is generally stable and requires less DBA interaction than other performance tuning tasks.

Summary

Performance tuning is the art of balancing raw statistics with intuition and experience to arrive at the best possible solution. Entire volumes have been written on this topic.

In this chapter, you learned some of the fundamental concepts of performance tuning. You learned how to gather and analyze memory and disk statistics to resolve contention, and you saw guidelines and scripts that you can use to check the performance of a database.

Oracle responds differently on each platform, and the examples presented in this chapter are configured for a UNIX environment. You must determine how much of this material applies to your own site.
