

9

Import and Export

Import and Export are two complementary utilities supplied by Oracle. You use the two utilities together primarily to back up and restore data, move data to other Oracle databases, and migrate data from an older Oracle version to a newer version. The following lists some other uses of Import and Export:

  • Store data in operating system files for archiving

  • Store database object definitions

  • Selectively back up parts of a database

  • Move data from one Oracle user's schema to another

  • Move data from one hardware platform or operating system to another

  • Save space and increase performance by reducing fragmentation

The operation of the Import and Export utilities is quite straightforward. Export writes out the definitions of tables and other database objects (table creation statements, index creation statements, grants, sizing information, and so on) along with the data from the tables themselves. Export saves this information to named operating system files, known as dump files. The dump files are in an Oracle binary format and, for the most part, are useful only to the Import utility. You can name the dump file anything your operating system allows; if you don't specify an output filename, Export defaults to EXPDAT.DMP.

You can then archive the output files produced by Export to disk or offline for safe keeping, or Import can use them to recreate the exported data for recovery or maintenance purposes.


Import reads only files created by Export.

Export

Sometimes you find that you desperately need something and realize that you should never have put yourself in the position of needing it in the first place. Take Import and Export, for example. If the world were perfect, you would never need to import data and therefore would never need to export, either. I am certain that I would still be in hot water with a few CEOs someplace if I did not export as often as I do. I find the Export utility to be the easiest way to save myself great pain later.

Export is a comprehensive utility supplied by Oracle. It is fairly easy to use, and its large parameter list builds quite a bit of flexibility into the utility. The parameter options enable Export to provide solutions to complex data management problems. On the surface, Export appears to be a simple utility, but it has the power to become an efficient backup utility for Oracle and enables the database administrator (DBA) to perform important maintenance functions on Oracle databases. Export writes operating system files that you can move to a different operating system or a different version of Oracle.

The following code shows an example of using Export:

exp userid=system/manager OWNER=scott... [other options]

The following code shows an example of using Export with a parameter file:

exp userid=system/manager parfile=filename

Using a Parameter File

You can use a parameter file with both Export and Import. The parameter file helps with imports and exports by providing consistency and simplicity. Using a parameter file in a nightly export is a great idea: it ensures consistent exports, so you can be sure you actually export all the tables that need to be exported. You can start both the command-line and the parameter-file methods from a shell script, but the command-line method might produce a command line too long for your operating system to accept. The export_ts script presented later in this chapter writes the parameter file that it needs. Remember that you cannot use the PARFILE parameter inside the parameter file itself. Table 9.1 lists all of the parameter options.

The following code contains a sample parameter file:

USERid=/
LOG=../archive/export_logs/scott_export
BUFFER=64000
COMPRESS=Y
FILE=../archive/export_data/export_ts.dmp
TABLES=
SCOTT.BONUS,
SCOTT.CUSTOMER,
SCOTT.DEPT,
SCOTT.EMP,
SCOTT.EXECUTABLES,
SCOTT.FALL5,
SCOTT.ITEM,
SCOTT.ORD,
SCOTT.PRICE,
SCOTT.PRODUCT,
SCOTT.QUOTA,
SCOTT.SALES1,
SCOTT.SALES2,
SCOTT.SALGRADE

    Table 9.1. Parameter list for Export.

USERID
Username/password of the account performing the export.

BUFFER
The size of the buffer used to fetch rows. If this is set to zero, only one row at a time is fetched. Oracle provides a good rule of thumb for estimating the buffer size: buffer = rows_in_array * max_row_size. If your table has LONG data types, Export must fetch one row at a time.

FILE (default: EXPDAT.DMP)
The output filename created by Export.

GRANTS (default: Y)
Indicates whether to export object grants.

INDEXES (default: Y)
Indicates whether to export index creation statements.

ROWS (default: Y)
Indicates whether to export table data.

CONSTRAINTS (default: Y)
Indicates whether to export table constraints.

COMPRESS (default: Y)
Indicates whether Export should rewrite the storage parameters of the tables and indexes in the export file.

FULL (default: N)
Indicates whether this is a full export. If FULL=Y, use the INCTYPE parameter to specify whether the export is COMPLETE (the default), CUMULATIVE, or INCREMENTAL.

OWNER
A list of owners whose objects are exported; for example, OWNER=scott or OWNER=(scott,temp).

TABLES
A list of tables to export; for example, TABLES=emp or TABLES=(emp,dept).

RECORDLENGTH
The length of the file record in bytes. Used to set the record length when transferring the export file to a different operating system.

INCTYPE
Used when FULL=Y. Valid options are COMPLETE, CUMULATIVE, and INCREMENTAL. If Export is used as a backup and recovery scheme, INCTYPE=CUMULATIVE or INCTYPE=INCREMENTAL can provide a more efficient solution because only the changed tables are exported.

RECORD (default: Y)
Indicates whether to record an incremental export in the system tables.

HELP (default: N)
Shows the help listing.

PARFILE
Specifies the name of a file to read for parameter options.

LOG
Specifies the name of a file to write log messages to.

CONSISTENT (default: N)
Specifies whether the tables being exported need to be read-consistent. This might be necessary if you need to export related tables while they are in use. This is a painful option to use: Oracle needs a rollback segment to reconstruct a consistent view of the changed rows, so use this parameter during quiet times on the database to minimize the impact.

STATISTICS
Used to collect statistics on the tables and indexes during the import. The options are ESTIMATE, COMPUTE, and NONE. I recommend caution here: if an application is running well and is tuned for the rule-based optimizer, importing a table with STATISTICS=COMPUTE gives that table statistics, which can switch the optimizer to cost-based decisions and change execution plans unexpectedly.

MLS
The MLS options are for Trusted Oracle only.

MLS_LABEL_FORMAT
The MLS options are for Trusted Oracle only.


You can omit the keywords and give the values positionally, in the order listed, for the parameters up to PARFILE. Oracle recommends using only USERID this way; the positional form of the other parameters is provided only for backward compatibility.
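
For example, the following two commands are equivalent; the first supplies the USERID value positionally and everything else by keyword (the filename is just an illustration):

exp scott/tiger FILE=scott.dmp TABLES=emp
exp USERID=scott/tiger FILE=scott.dmp TABLES=emp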

Export works in one of three modes, depending on the user exporting the data and the options chosen. The three modes are full database, user, and table. Although Oracle describes these as three modes of operation, the user and table modes are really just different levels of export: a user might export everything owned by certain owners (OWNER=scott) or only certain tables (TABLES=emp). The only mode that stands out as a true mode is full database. Export really is in a different mode when it runs with the parameter FULL=Y, and any TABLES or OWNER parameter conflicts with the full mode.

All users can export in the user and table modes, and users with the EXP_FULL_DATABASE role can export in the full database mode as well.

You invoke Export's three modes by using the proper parameter options as listed here:

User mode            OWNER=ownerlist

Table mode           TABLES=tablelist

Full database mode   FULL=Y


Table Mode

Use table mode to export a single table or a list of tables rather than an entire database. The default is to export all tables that belong to the user doing the export. Users with access to other schemas can export tables from those schemas by qualifying the table name with the schema name. The following code is an example of using table mode:

exp USERid=scott/tiger TABLES=dept

You can also export from another schema.

exp USERid=system/manager TABLES=scott.emp

or

exp USERid=system/manager TABLES=(scott.emp,scott.dept)

User Mode

You primarily use the user mode to export all tables and indexes for a particular user or a list of users. This mode works well when you create a user that owns all an application's objects. For example, if I have a user named sales that owns all the tables and indexes and other objects in the sales application, the application export might resemble the following code:

exp USERid=system/manager OWNER=sales

This gives me an export file of all the objects owned by sales, independent of the tablespace.

Full Database Mode

Full database mode exports all database objects except for objects that are usually created and maintained by SYS. Only users granted the EXP_FULL_DATABASE role can use this option.

Several other options are worth noting here. By default, Oracle performs a complete export if you specify full database mode (INCTYPE=COMPLETE). If you specify INCTYPE=INCREMENTAL, Oracle exports only the tables containing rows that have changed since the last export of any type. If you specify INCTYPE=CUMULATIVE, Oracle exports only the tables containing rows that have changed since the last complete or cumulative export.


The three modes are not very obvious, and you will find it easy to specify conflicting options. For example, specifying FULL=Y and OWNER=(scott,tom) causes the export to fail.

Command Line Versus Interactive

You can use Export in a limited interactive mode or in a command-line mode.

I usually use Export in the interactive mode when I am doing small projects that I am willing to run myself. Compressing the extents of a single table or moving a table from one tablespace to another are good candidates for the interactive method. Otherwise, I use the command-line method. You can obtain the same results either way for the most part. Use a parameter file when you need to export the same tables regularly or if the command line grows too long.

When Export starts, it prompts you for your USERID if you did not provide it on the command line. Next, it prompts you for the buffer size, the export filename, the export mode, whether to export grants and table data, whether to compress extents, the users to export, and the tables to export.

A typical interactive export might resemble the following:

exp

Export: Release 7.1.4.1.0 - Production on Sun Oct  1 09:35:19 1995

  Copyright (c) Oracle Corporation 1979, 1994.  All rights reserved.

Username: scott

Password:

Connected to: Oracle7 Server Release 7.1.4.1.0 - Production Release

With the distributed and parallel query options

PL/SQL Release 2.1.4.0.0 - Production

Enter array fetch buffer size: 4096 > 16384

Export file: ./expdat.dmp > scott.dmp

(2)U(sers), or (3)T(ables): (2)U > u

Export grants (yes/no): yes > y

Export table data (yes/no): yes > y

Compress extents (yes/no): yes > n

The equivalent command-line method resembles the following:

exp scott/tiger FILE=scott.dmp

Export responds with the following:

About to export SCOTT's objects ...

. exporting snapshots

. exporting snapshot logs

. exporting job queues

. exporting refresh groups and children

. exporting database links

. exporting sequence numbers

. exporting cluster definitions

. about to export SCOTT's tables ...

. exporting table                          BONUS        0 rows exported

. exporting table                       CUSTOMER        9 rows exported

. exporting table                           DEPT        4 rows exported

. exporting table                          DUMMY        1 rows exported

. exporting table                            EMP       14 rows exported

. exporting table                    EXECUTABLES        0 rows exported

. exporting table                          FALL5       30 rows exported

. exporting table                           ITEM       64 rows exported

. exporting table                            ORD       21 rows exported

. exporting table                          PRICE       17 rows exported

Export terminated successfully without warnings.

Export Parameters That Affect Performance

Export has several parameters that can affect performance during the export and also during the import. Understanding these parameters and how they impact the system resources, a user's ability to use the database, and your ability to successfully and quickly complete the export and import is quite important. Some of the parameters with the most impact on performance are discussed in this section.

COMPRESS

One of the most widely used parameters is the COMPRESS=Y option, and many people misunderstand what it does for them. Oracle writes out the export file, which contains the table and index creation statements. If COMPRESS=N, the current table storage clause remains intact, but if COMPRESS=Y, the storage clause of the table is rewritten so that the table is created in one extent (whatever its size) upon importing. This is great in theory, and one extent is usually desirable.

Consider the following scenario. A 90MB table has three equal extents of 30MB each in a tablespace, and the tablespace has three data files of 33MB each, for a total of roughly 100MB. Remember that an extent is a contiguous group of Oracle blocks; because an extent must be contiguous, it cannot span data files, so each data file can hold at most one of the 30MB extents. If the table is exported with COMPRESS=Y and then imported, the import fails: the export rewrote the storage clause of the table to a single initial extent, and no single data file can hold a 90MB extent. The size of the initial extent is calculated as the sum of all existing extents at the time of the export, whether or not they contain data.

At this point, you have two options: recreate the tablespace with one data file of 100MB or more, or precreate the table with its original storage clause (the three extents) before loading the data. If you precreate the table, specify the Import parameter IGNORE=Y; otherwise, the import of that table will fail.
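
A minimal sketch of the second option follows, assuming the table is SCOTT.BIG_TABLE and its original extents were 30MB each (the table name, columns, and sizes are illustrative): precreate the table with its original storage clause and then import with IGNORE=Y so Import loads the rows into the existing table.

sqlplus scott/tiger << EOF
CREATE TABLE big_table (
  id    NUMBER,
  name  VARCHAR2(30)
)
STORAGE (INITIAL 30M NEXT 30M MINEXTENTS 3 PCTINCREASE 0);
EOF

imp scott/tiger FILE=big_table.dmp TABLES=big_table IGNORE=Y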

BUFFER

The buffer is the size of the chunk of memory claimed by Oracle to perform the export. Typically, the larger the chunk, the better (at least as far as Export is concerned). Export does not complain if you set BUFFER greater than the amount of memory that is actually available. As always, Oracle takes what it can get.
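
As a rough worked example (the numbers are only illustrative), if you want Export to fetch about 100 rows per array and your widest row is roughly 2,000 bytes, the rule of thumb gives buffer = 100 * 2000 = 200,000 bytes:

exp scott/tiger FILE=scott.dmp BUFFER=200000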

INDEXES

If you export with INDEXES=Y, you get the ability to have Import recreate the required indexes without much trouble, but you pay for it with time. Instead of letting Import create the indexes for you, have Import write out the index creation statements to an index file (see the INDEXFILE parameter in Table 9.2) so that you can run the script at your leisure. This lets you focus on the data as it is importing. You might also find that during day-to-day operation, indexes get created in the most unlikely tablespaces; creating the indexes after the import gives you time to remap their locations.

CONSISTENT

The CONSISTENT parameter can be a killer if you are not prepared. The idea is to export related data as a consistent set; consistent data here means tables that reference one another. If those tables are in use while you export, export them together with the CONSISTENT parameter, and keep that set of tables as small as possible, exporting the remainder in a second export. During an export, Oracle reads tables one at a time, so changes applied to dependent tables during the export might not be reflected in the export file; the CONSISTENT parameter is the only way to alleviate this. During a consistent export, Oracle uses a rollback segment to reconstruct a read-consistent view of the tables, so the export file reflects the data as of a single point in time.

Using the CONSISTENT parameter can cause the rollback segment to grow very large if there are many transactions on the consistent tables. If you must use CONSISTENT, use it when the tables being exported have minimal inserts and updates; otherwise your export can fail because of a rollback segment that is too small or with the dreaded "snapshot too old" error.

Probably the safest move is to test the export of your consistent tables to see what happens. I know this seems crude, but you will discover what your rollback requirements are for the future.
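
For example, the ORD and ITEM tables from the sample schema could be exported as a consistent pair while the rest of the tables go into a second, nonconsistent export (a sketch; depending on your shell, you might need to quote the parenthesized table lists or put them in a parameter file):

exp scott/tiger FILE=orders.dmp TABLES=(ord,item) CONSISTENT=Y
exp scott/tiger FILE=the_rest.dmp TABLES=(emp,dept,bonus,salgrade)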

STATISTICS

Using the STATISTICS option does not slow the export, but it dramatically slows the import while it computes or estimates statistics. I don't recommend that you use the STATISTICS parameter on any sizable tables. It is almost always better to estimate or compute the statistics after the import is complete. Your time is better spent concentrating on the import rather than gathering the statistics.

LOG

The log file creates a record of what happened during the export. When you view the log file, you notice that data in the file looks just like what was echoed to the screen during the export. If you have problems, the log file logs exactly what object failed to export and provides an error message describing why. The log file can be of great help in finding exactly what the problem was and directing you towards a possible fix. If everything goes well during the export, you don't need the log file.

Exporting Tablespaces

As you probably noticed, you can export by owner and you can export individual tables, but there is no easy way to export an entire tablespace. This is one of the most needed maintenance functions for the DBA as users and applications increase their space needs. A DBA should monitor the tablespaces in the database and act proactively to prevent any sizing-related problems. Unfortunately, to export a tablespace, the DBA must do quite a bit of research in the data dictionary to find all the tables or table owners in the tablespace.

For UNIX users, I provided a nice shell script to do just what you need. I use this shell script myself and have found it to be a big help in my maintenance efforts. It is easy enough to use that I have our night operators perform the required exports for me. The shell script has a few requirements to work properly. In short, you must be able to access DBA_TABLES, and you must have the role EXP_FULL_DATABASE. Using the script is very easy.

Enter the following from the command line of UNIX:

export_ts tablespace_name [tablespace_name ...]

How It Works

The script queries the database for all the tables in the tablespaces to be exported and then builds a parameter file for Export to use. The script calls exp with that parameter file as one of the export parameter options (PARFILE=export_ts.par). The parameter file the script creates is named export_ts.par, and the export file it produces is named export_ts.dmp. The script also creates a file it uses internally, export_ts.spl; you usually do not need to be concerned with this file.

Because the script always writes to the same filenames, you can currently run only one export_ts export at a time; each run overwrites the files from the previous run. You could easily modify the script to ask for the name of the export file, enabling you to make multiple exports before importing any data. A sketch of such a modification appears after the listing, but I will leave the final version to you.

EXPORT_TS

#! /bin/sh
##########
#####
#          export_ts
#
# UNIX shell script to export all Oracle tables and indexes from a tablespace.
#          Written by Kelly Leigh and Dave Kinchen
#
#          Usage: export_ts tablespace [tablespace tablespace ...]
#
#          Requirements:  must be able to access DBA_TABLES
#          and must be assigned the role EXP_FULL_DATABASE
#          developed on a DGUX platform
#####

# Set up the filenames we need to create the export file
PFILE=export_ts.par

# Check to see if there are any tablespaces listed;
# if not, then display the usage instructions
if [ "$*" = "" ]
  then
    echo "Description:"
    echo "      export_ts is a DBA utility to export all tables in"
    echo "      one or more tablespaces. export_ts accepts a single"
    echo "      (or list) of tablespace names separated by a space."
    echo "Usage:"
    echo "      export_ts (tablespace name) (tablespace_name) "
    echo "Example:"
    echo "      export_ts users"
    exit
fi

# Create the list of tablespaces to run through export
echo "Creating a list of tables in tablespaces: $*"
TSLIST=""
for TS in $*
  do
    TSLIST="$TSLIST,'$TS'"
  done
TSLIST=`echo $TSLIST | cut -c2-`
TSLIST=`echo $TSLIST | tr '[a-z]' '[A-Z]'`

# Select the table names from sqlplus and redirect them to a listing file
sqlplus -s / << EOF  > export_ts.spl
set pages 0;
set heading off;
set feed off;
SELECT owner || '.' || table_name
FROM dba_tables
WHERE tablespace_name IN (${TSLIST})
ORDER BY
  owner,
  table_name
/
EOF

# Echo the listing for the DBA to verify
echo "TABLE LISTING:"
more export_ts.spl
echo "Continue with the export? (Y/N) \c"
read CHOICE
CHOICE=`echo $CHOICE | tr '[a-z]' '[A-Z]'`

# If yes, then build the parameter file (adding commas to the table
# listing) and run the export
if [ "$CHOICE" = "Y" ]
  then
    echo "USERid=/" > $PFILE
    echo "FILE=export_ts.dmp" >> $PFILE
    echo "TABLES=" >> $PFILE
    cat export_ts.spl | sed -e "s/  *$//" -e '$!s/$/,/' >> $PFILE
    echo "\n\nStarting the export.  All export data will be written to export_ts.dmp"
    exp parfile=export_ts.par
  else
    echo "Aborting export."
fi
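
A minimal sketch of the filename modification mentioned earlier (untested; the DMPFILE variable name is my own) would prompt for the dump file name before the parameter file is built and use it in place of the hard-coded FILE line:

echo "Name of export file [export_ts.dmp]: \c"
read DMPFILE
DMPFILE=${DMPFILE:-export_ts.dmp}
echo "FILE=$DMPFILE" >> $PFILE    # instead of echo "FILE=export_ts.dmp" >> $PFILE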

Complete, Incremental, and Cumulative

The parameter INCTYPE tells Export to create one of three types of export files: complete, incremental, or cumulative. By using incremental and cumulative exports, you can save space and time because only the changed tables are exported. The complete option exports the full database and is the default for INCTYPE.

Complete

In order to use the complete option, the user must be able to do a full database export (FULL=Y). A complete export exports all the database objects so that the database can be recreated from the ground up, and it acts as the starting point for a recovery. You must already have a complete export for the incremental and cumulative exports to be of any value, because you import them after you import the complete export.

Incremental

Incremental exports save time by exporting only the tables that have changed. Most databases have tables that change slowly or not at all, so if your database is fairly stable, you should consider using incremental exports; they are usually faster than complete exports and consume less space. An incremental export contains only the tables that have changed since the last export of any type: the last incremental export if one exists, otherwise the last cumulative export, otherwise the last complete export. To use incremental exports in a recovery, you import them after the complete and cumulative exports are finished.

Cumulative

A cumulative export, in a sense, makes all of the incremental exports unnecessary. The cumulative export exports all the tables that have changed since the last complete or cumulative export, including all the tables currently captured in the incremental exports. You do not need the incremental exports once a cumulative export is completed.

A Backup Strategy

You can use Import and Export as a primary backup scheme for Oracle databases. It is simple and reliable, and you can move the export file to a different platform if necessary. Incremental exports can reduce the time needed for the nightly exports; that can decrease a system's downtime during the week, returning processing time to the users and increasing user satisfaction at the same time. Weekly cumulative exports provide for fairly quick recoveries, and monthly complete exports are sufficient for most databases. This scenario takes a full month to repeat itself, and you can also import your data fairly quickly if needed.
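
A sketch of that schedule, with made-up file and log names and run by a user who has the EXP_FULL_DATABASE role, might look like this:

# Monthly complete export
exp system/manager FULL=Y INCTYPE=COMPLETE FILE=monthly_complete.dmp LOG=monthly_complete.log

# Weekly cumulative export
exp system/manager FULL=Y INCTYPE=CUMULATIVE FILE=weekly_cumulative.dmp LOG=weekly_cumulative.log

# Nightly incremental export
exp system/manager FULL=Y INCTYPE=INCREMENTAL FILE=nightly_incremental.dmp LOG=nightly_incremental.log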


Obviously, you should not store the export files on any device that contains Oracle data files, control files, and so on. If you did, a media failure on that device could very well make your Oracle database unrecoverable!

Importing a database that you backed up using the incremental export is easy. You should be able to do this in your sleep. (Most DBAs work best that way, I know.) In order to recreate the database, you need the complete export along with all the cumulative exports and incremental exports since the last cumulative export. You begin the recovery by importing the complete export. Next, you import all the cumulative exports; lastly, import the incremental exports.
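
One possible recovery sequence is sketched below, reusing the illustrative filenames from the schedule above; review the INCTYPE entry in Table 9.2 for the finer points of incremental imports:

imp system/manager FULL=Y FILE=monthly_complete.dmp LOG=imp_complete.log
imp system/manager FULL=Y IGNORE=Y FILE=weekly_cumulative.dmp LOG=imp_cumulative.log
imp system/manager FULL=Y IGNORE=Y FILE=nightly_incremental.dmp LOG=imp_incremental.log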

Import

The Import utility is the converse of the Export utility. On the surface, Import seems to be a simple utility, and it functions that way, but it also has quite a few powerful features. It reads export files and recreates the database objects in the state they were in when they were exported. Import can also convert between ASCII and EBCDIC, enabling you to move data between platforms. I discuss the more common features later. Import can work interactively or from the command line. When used interactively, Import prompts the user for the parameters it needs to complete the import; it is often easier to supply the parameters on the command line or from a parameter file. Import, like Export, uses parameter files. Table 9.2 provides a full listing of the Import options and their features.

Import Usage

The following code line is an example of using Import.

imp userid=scott/tiger file=expdat.dmp  [other options]

    Table 9.2. Parameter list for Import.

USERID
Username/password of the account performing the import.

BUFFER
The size of the buffer used to insert rows. Oracle provides a good rule of thumb for estimating it: buffer = rows_in_array * max_row_size. Set this to a large value for faster imports. If you use the COMMIT parameter, the import commits every time the buffer fills.

FILE (default: EXPDAT.DMP)
The name of the export file to import.

SHOW (default: N)
If SHOW=Y, no import is actually performed; Import just lists what it would have done.

IGNORE (default: N)
When IGNORE=Y, the "object already exists" error is overlooked; use IGNORE=Y when importing into precreated tables. With IGNORE=N, the import of an object fails if the object already exists in the database, and the import continues with the next object. Rows might be duplicated in a table if IGNORE=Y and an import is attempted more than once without first truncating or dropping the tables.

GRANTS (default: Y)
Indicates whether to import object grants.

INDEXES (default: Y)
Indicates whether to create indexes.

ROWS (default: Y)
Indicates whether to import table data.

FULL (default: N)
Indicates whether this should be considered a full import. You must have the IMP_FULL_DATABASE role to import an export file created with FULL=Y. Usually used in conjunction with the same Export parameter. You can also use the FULL option with a user or table export file to import all the objects in the file.

FROMUSER
A list of users whose objects are to be imported; objects not owned by owners in the FROMUSER list are ignored. If objects are imported into a database in which the original owner does not exist, the objects are imported into the importer's schema.

TOUSER
Used to import objects from one owner into another. To use this parameter, you must have the IMP_FULL_DATABASE role.

TABLES
A list of the tables to import. Use an asterisk (*) to specify all tables. Use the TABLES parameter to perform a table mode import; specify one table as TABLES=emp or several as TABLES=(emp,dept).

RECORDLENGTH (default: operating system dependent)
The record length of the export file. Usually not used unless you are transferring the data to another operating system.

INCTYPE (default: COMPLETE)
The type of import being performed: COMPLETE, CUMULATIVE, or INCREMENTAL.

COMMIT (default: N)
Indicates that Import should commit after each array, as set by the BUFFER size. You need large rollback segments if COMMIT=N; conversely, with COMMIT=Y you need relatively small rollback segments. With COMMIT=Y you might get only a partial import of a table if something causes the import to fail, because Import rolls back only to the last commit. If Import fails with COMMIT=Y and you have the table constraints disabled, remember to truncate or drop the tables and start over to prevent duplicate rows.

HELP (default: N)
Displays the help screen.

PARFILE (default: none)
The name of a file from which to read import parameters. You can use every parameter except PARFILE in the parameter file.

LOG
The name of a file in which to log all the screen information as well as error messages.

DESTROY (default: N)
Indicates whether the CREATE TABLESPACE commands found in a full export file should be executed. Setting DESTROY=Y destroys (reuses) the existing data files of the database being imported into. I suppose that one could really hurt.

INDEXFILE
A nice option for writing out all the CREATE and ALTER statements for the objects in the export file; all but the CREATE INDEX commands are commented out. Run Import with INDEXFILE=filename and use the file it creates as a basis for new table creation scripts. With a bit of editing, you can move objects to a new tablespace and add more appropriate sizing and storage parameters.

MLS (default: N)
For Trusted Oracle only; indicates whether MLS labels should be imported.

Like Export, Import works in one of three modes, depending on the options and the user importing the data. The three modes of Import are full database, user, and table. All users can import their own objects in the user and table modes, and users with the IMP_FULL_DATABASE role can import full database exports. Users without the IMP_FULL_DATABASE role can still use the FULL=Y parameter: when you import a table mode or user mode export file, you can specify FULL=Y to import the entire file without being prompted for table names or user names. You might notice that the import modes are very similar to the export modes and are named the same; for the user mode, the Export parameter OWNER is replaced with FROMUSER in Import.

The three modes are invoked by using the proper parameter options as listed here:

User mode            FROMUSER=ownerlist

Table mode           TABLES=tablelist

Full database mode   FULL=Y

You can import in the three modes interactively. You invoke the three modes by responding to Import's prompts. To import the entire export file, answer yes to the prompt Import entire export file (yes/no). The default for this prompt is yes. For a USER mode import, answer no. Import then asks for a username. After you enter the username, Import prompts for a list of tables. If you do not enter any table names, all the user's tables are imported one at a time. To tell Import that you are done entering table names, enter a period (.) on a new line after specifying the last table.

The following code fragments illustrate the different modes. The first example is from user mode.

Import entire export file (yes/no): yes > n

Username: scott

The following is from table mode:

Enter table names. Null list means all tables for user

Enter table name or . if done: emp

Enter table name or . if done: dept

Enter table name or . if done: .

The last example is from full database mode.

Import entire export file (yes/no): yes >Y

User Mode

The user mode of Import enables you to import tables that belong to a specific user. You can also combine it with table mode to import specific tables from a specific schema. This matters because, if the schema specified in the export file does not exist in the target database, Import creates the tables in the importing user's schema (and default tablespace). This can work to your advantage if you need to move tables from one schema to another.
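
For example (a sketch; the dump file name is arbitrary), a DBA with the IMP_FULL_DATABASE role could move scott's exported tables into tom's schema like this:

imp system/manager FILE=scott.dmp FROMUSER=scott TOUSER=tom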

Table Mode

You usually use table mode to import a table or a list of tables rather than all the tables in the export file. A user with access to other schemas can import tables from those schemas by qualifying the table name with the schema name. The default is to import all tables in the schema of the user doing the import.
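
For example, to pull just two of scott's tables out of an export file (a sketch using the sample schema; quote the parenthesized list if your shell objects to it):

imp system/manager FILE=scott.dmp FROMUSER=scott TABLES=(emp,dept)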

Full Database Mode

The full database option of Import does not quite work the same way as the full database option of Export. If you specify the full database option in Export, the entire database is exported, and to import the entire file, you must have the role IMP_FULL_DATABASE assigned to you. If the export file is a user export or table export, the entire export file is imported (whatever is in it), and you do not need to have the role IMP_FULL_DATABASE assigned to you.

Interactive Versus Command Line

Both Import and Export can operate in command-line mode and in interactive mode. Both modes have their advantages, but Oracle recommends that you use the command-line mode with a parameter file. The two modes are outlined in this section. I'm sure you will find good reasons for using both in your own situation.

Interactive Method

The interactive method is an easy way to import data from export files, but you do not have all the options of the parameter or command-line method. For example, you cannot create an index file. The interactive prompts might change depending on the response to previous prompts. The interactive prompts also have the benefit of showing the prechosen defaults. To use Import in the interactive mode, type imp or imp USERid=userid/password at the command line.


The interactive method continues to exist only for backward compatibility.

An interactive import might resemble the following:

/usr/local/p3016dk >  imp scott/tiger

Import: Release 7.1.4.1.0 - Production on Sun Oct  1 10:12:54 1995

  Copyright (c) Oracle Corporation 1979, 1994.  All rights reserved.

Connected to: Oracle7 Server Release 7.1.4.1.0 - Production Release

With the distributed and parallel query options

PL/SQL Release 2.1.4.0.0 - Production

Import file: ./expdat.dmp > scott.dmp

Enter insert buffer size (minimum is 4096) 30720>

Export file created by EXPORT:V07.01.04

List contents of import file only (yes/no): no >

Ignore create error due to object existence (yes/no): yes > n

Import grants (yes/no): yes > y

Import table data (yes/no): yes > y

Import entire export file (yes/no): yes > y

Command-Line Method

The same import can be accomplished by entering the following command-line options.

imp USERid=scott/tiger FILE=scott.dmp ignore=N

Import responds with the following:

Import: Release 7.1.4.1.0 - Production on Sun Oct  1 10:19:12 1995

  Copyright (c) Oracle Corporation 1979, 1994.  All rights reserved.

Connected to: Oracle7 Server Release 7.1.4.1.0 - Production Release

With the distributed and parallel query options

PL/SQL Release 2.1.4.0.0 - Production

Export file created by EXPORT:V07.01.04

. importing SCOTT's objects into SCOTT

. . importing table "BONUS"                                    0 rows imported

. . importing table "CUSTOMER"                                 9 rows imported

. . importing table "DEPT"                                     4 rows imported

. . importing table "DUMMY"                                    1 rows imported

. . importing table "EMP"                                     14 rows imported

. . importing table "EXECUTABLES"                              0 rows imported

. . importing table "FALL5"                                   30 rows imported

. . importing table "ITEM"                                    64 rows imported

. . importing table "ORD"                                     21 rows imported

. . importing table "PRICE"                                   17 rows imported

Import terminated successfully without warnings.

Import Parameters That Affect Performance

As with Export, Import also has several parameters that affect performance. This section lists the parameters that affect performance the most along with an explanation of how they work and how to use them.

BUFFER

The BUFFER parameter for Import works the same way as the BUFFER parameter in Export. For the most part, you can specify a large value for best results.

COMMIT

The COMMIT parameter defaults to N, but if you do not have large rollback segments to use (at least as large as the largest table), set this option to Y. With COMMIT=Y, Import commits every time the buffer fills and the array is inserted. COMMIT=Y does not make a successful import noticeably faster; the real gain comes when an import would otherwise fail because of a rollback segment problem and have to be redone.
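
For example, a sizable table might be imported with a large insert buffer and frequent commits (the names and sizes here are only illustrative):

imp system/manager FILE=scott.dmp FROMUSER=scott TABLES=sales1 COMMIT=Y BUFFER=1000000 LOG=sales1_imp.log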

LOG

Quite simply, always use the LOG parameter so you can review the log file and effectively fix anything that went wrong.

How Objects Import

Objects are created in a specific order as the import progresses. The important thing to remember is that Import creates each table and then loads its data; it creates the indexes after the data is loaded and enables the table constraints and triggers last. In some situations, the table constraints can cause interesting (usually undesirable) results. If you are in the habit of precreating the tables before the import, as I am, you had better disable the constraints before importing and reenable them when the import is finished. Don't forget!
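
A sketch of that routine for a precreated EMP table, using a made-up foreign key constraint name, might look like this:

sqlplus scott/tiger << EOF
ALTER TABLE emp DISABLE CONSTRAINT fk_emp_deptno;
EOF

imp scott/tiger FILE=scott.dmp TABLES=emp IGNORE=Y

sqlplus scott/tiger << EOF
ALTER TABLE emp ENABLE CONSTRAINT fk_emp_deptno;
EOF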

Tables

Import first creates the table definition and then loads the data; it then creates the indexes for that table. The important thing to note is that the tables are loaded and their indexes created one table at a time. When all the tables are finished, Import creates and reenables the table constraints and database triggers.

Stored Procedures

Packages, functions, and stored procedures are imported without updating the time stamp. This enables Oracle to use the objects without recompiling them.

Snapshots

Snapshots are exported quite uneventfully. Oracle exports and imports the master table, the master table trigger, the snapshot log (if you are using one), and the snapshot itself, much as it exports and imports ordinary tables and database triggers. If you are using a snapshot log, only the snapshot log definition is exported and imported, which means the first fast refresh attempted on that snapshot will fail. Plan on a complete refresh of all the imported snapshots after the import is complete.

Importing and Exporting with Personal Oracle7

My world has not been the same since Oracle introduced Personal Oracle7. I have been able to develop complete applications on my PC at home. Although I do not have a PC platform that compares with our Sequent, HP, or DG platforms, I can't resist the desire to develop applications on it. I haven't yet used Personal Oracle7 for any production applications, but I am sure many people have.

All of us who have had the pleasure of experiencing Personal Oracle7 will agree that the Windows front end on the Import and Export utilities is a nice addition to the old familiar utilities. I think that many Personal Oracle7 users like me are developing applications. I find myself exporting tables from the true development platforms at the job site and then importing them into my PC database so I can develop whenever and whatever I like.

Using the Personal Oracle7 version of Import and Export is quite a bit different from the old standards. Personal Oracle7 has a nice GUI that is hard to beat, yet it still works the same way internally as the character-mode versions and produces files that you can transfer to other Oracle instances. The main difference is the Windows front end. Figure 9.1 shows the Export utility in Personal Oracle7; the title bar has the caption Database Exporter. The first thing you notice is that parameters are represented by text boxes, checkboxes, or pull-down lists. Figure 9.2 shows the advanced options: Record Length (RECORDLENGTH), Buffer Size (BUFFER), Record Export in System Data (RECORD), Keep Components Consistent (CONSISTENT), Compress When Imported (COMPRESS), Log File (LOG), Increment Type (INCTYPE), and Statistics Type (STATISTICS). The other Export parameters follow similar translations.


Figure 9.1. Personal Oracle's Export utility.


Figure 9.2. Export utility advanced options.

Figures 9.3 and 9.4 show the Database Importer and its advanced options. The Database Importer closely resembles the Exporter's look and feel, so I do not go into detail; its functionality is fairly obvious.


Figure 9.3. The Import utility.


Figure 9.4. Import utility advanced options.

Import and Export Hints and Tips

You can use Import and Export for tasks other than just backing up data. I often use Import and Export for several other constructive reasons, and this section outlines a few of the best hints and tips.

Create an Index File

Import can create what is known as an index file: a SQL script containing not only the creation statements for the indexes in the export file but also the creation statements for the other database objects, with every line except the index creation statements commented out. With some editing, you can turn it into a script that precreates all the tables, indexes, and so on. This is often necessary if you exported with COMPRESS=Y.
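
Creating the index file is a single Import run (the filenames are just examples); when you specify INDEXFILE, Import writes the script and does not load any data:

imp system/manager FILE=scott.dmp FROMUSER=scott INDEXFILE=scott_ddl.sql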

Adjusting Storage Parameters

Using the parameter COMPRESS=Y rewrites the original storage parameters so that the table or index resides on one contiguous extent. Compressing the extents is a good idea as long as you actually have the contiguous extents available. I suggest precreating the tables and indexes with your edited INDEXFILE. This ensures that you can actually create the objects. Of course, if any table or index fails to create, you can adjust the initial and next extents in the INDEXFILE and attempt to create the table again. This is also a good time to adjust the PCTFREE and PCTUSED and the TABLESPACE parameters (or any of the other table parameters) in the table and index creation statements. I highly recommend that you study and adjust the size requirement if necessary at this time, especially if the storage parameters are set to their defaults. I don't go into great detail, but here are some good places to start if you have a mess of default storage parameters:

Parameter      Defaults to         Set to

PCTFREE        10%                 5-20%

PCTUSED        40%                 55-90%

PCTINCREASE    50%                 0%

INITIAL        10240 or 20480      All data in one extent plus some growth, if possible

NEXT           10240 or 20480      25% up to the size of INITIAL

There are other storage parameters, but these five are typically the most important to me.
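
For instance, a table creation statement pulled from an index file might, after editing, end up looking something like this (the columns, tablespace name, and sizes are purely illustrative):

CREATE TABLE scott.ord (
  ordid      NUMBER,
  orderdate  DATE
)
PCTFREE 10
PCTUSED 60
TABLESPACE orders_data
STORAGE (INITIAL 5M NEXT 1M PCTINCREASE 0);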

Reorganizing Data

Users own tables, indexes, and other objects, and the objects they create reside in tablespaces. If you export a user's tables, drop them, and then import them, they end up in the user's default tablespace again. To move tables from one tablespace to another, export the tables, drop them, and change the user's default tablespace (and quotas) to the tablespace you want to contain the tables. When you reimport the tables, they are created in the new default tablespace.
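
A sketch of that sequence for scott's tables follows; the new tablespace name is illustrative, and only two of the DROP statements are shown:

exp system/manager FILE=scott.dmp OWNER=scott

sqlplus system/manager << EOF
ALTER USER scott DEFAULT TABLESPACE new_data QUOTA UNLIMITED ON new_data;
DROP TABLE scott.emp;
DROP TABLE scott.dept;
-- and so on for the rest of scott's tables
EOF

imp system/manager FILE=scott.dmp FROMUSER=scott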

Reducing Database Fragmentation

When a database accumulates many small blocks of free space, you can export it with the FULL=Y parameter, recreate the database, and then do a full database import. Tablespaces sometimes need similar help, so plan carefully where a tablespace's data files go and how many it has. To rebuild a tablespace, export all its objects (using the export_ts script if you like), drop the tablespace including its contents, recreate it, and then import the objects into the fresh new tablespace.
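
A sketch of such a rebuild for a tablespace named users (the data file path and size are illustrative):

export_ts users

sqlplus system/manager << EOF
DROP TABLESPACE users INCLUDING CONTENTS;
CREATE TABLESPACE users DATAFILE '/u02/oradata/users01.dbf' SIZE 100M;
EOF

imp system/manager FILE=export_ts.dmp FULL=Y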

Migrating from Version 6 to Oracle7

Import can read export files created by Oracle Version 6. You can use this to migrate by exporting the Version 6 database and then recreating the database under Oracle7. Some data type changes occurred after Version 6; for example, the old Version 6 CHAR data type became VARCHAR2, and Import handles this conversion for you. If you are wondering who is going to change all your table creation scripts from CHAR to VARCHAR2, try the INDEXFILE parameter explained earlier. Oracle7 also provides vastly expanded integrity constraints, and its SQL syntax differs slightly from Version 6; all the existing constraints are exported, but only the NOT NULL constraints are imported and enabled.

Importing Tables with LONG and LONG RAW Data Types

Did you know that it is possible to export a table with extremely large columns (LONG data types can be up to 2GB in length) only to discover that it cannot be imported? It hasn't happened to me, but Oracle reports that it can happen. The reason is that Export can break up the LONG data, but Import must read the data into contiguous memory; if you do not have enough memory, the import fails.

Summary

Import and Export not only provide data recovery, but they have quite a few database management uses as well. I am sure you will find yourself using Import and Export often to keep your database tuned well by reducing fragmentation and enabling yourself to change the storage parameters of tables and their locations or owners. Whatever task you choose for Import and Export, you will find them to be a stable pair of utilities that are easy to use.
