Data movement utilities

Db2 provides several data movement utilities to transfer data between databases and external files. They are as follows:

 

The Db2 Export utility

The Db2 Import utility

The Db2 Load utility

The Db2 Ingest utility

The Db2 Export utility:

The Export utility extracts data from a Db2 table or view and writes it to an external file in the DEL or PC/IXF format. Such files can then be used to load data into other databases, or by other software applications such as spreadsheets and word processors, to analyze the exported data.

The basic syntax for the EXPORT command is as shown:

EXPORT TO [FileName] OF [DEL | WSF | IXF]

<LOBS TO [LOBPath]>

<LOBFILE [LOBFileName]>

<XML TO [XMLPath]>

<XMLFILE [XMLFileName]>

<MODIFIED BY [Modifiers]>

<METHOD N ([ColumnNames])>

<MESSAGES [MsgFileName]>

[SELECTStatement]

  


Where:

 FileName: Identifies the name and location of the external file to which to export or copy data

LOBPath: Identifies one or more locations to store LOB data

LOBFileName: Identifies one or more base names to which LOB data values are to be written

XMLPath: Identifies one or more locations to store XML documents

XMLFileName: Identifies one or more base names to use to name the files to which XML documents are to be written

Modifiers: Identifies one or more options to use to override the default behavior of the Export utility

ColumnNames: Identifies one or more column names to write to the external file to which data is to be exported

MsgFileName: Identifies the name and location of an external file to which messages produced by the Export utility are to be written as the export operation is performed

SELECTStatement: Identifies a SELECT SQL statement that, when executed, will retrieve the data to copy to an external file

A simple EXPORT command is as shown here:

EXPORT TO employee.del OF DEL MESSAGES employee.msg SELECT * FROM employee;
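
The MODIFIED BY clause accepts file type modifiers that override the default formatting of the output file. As a minimal sketch, the coldel modifier below changes the DEL column delimiter to a tab character using the documented 0xJJ hexadecimal notation; the DEPARTMENT table and its columns are assumed here (as in the Db2 SAMPLE database):

EXPORT TO /ds/data/department.del OF DEL
    MODIFIED BY coldel0x09
    MESSAGES /ds/msg/department.msg
    SELECT deptno, deptname, mgrno FROM department;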

If you want to export data stored in a table named EMP_PHOTO to a PC/IXF-formatted file, writing all large object (LOB) values stored in the PICTURE column to individual files, you can do so by executing the following EXPORT command:

EXPORT TO /ds/data/EMP_PHOTO.ixf OF IXF

    LOBS TO /ds/lob

    LOBFILE e_picture

    MODIFIED BY lobsinfile

    MESSAGES /ds/msg/emp_photo.msg

    SELECT * FROM emp_photo;

     -- Output would look something like below

    Number of rows exported: 8

The authorization required to invoke the EXPORT command is data access administration (DATAACCESS) authority, or CONTROL or SELECT privilege on each participating table or view.
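
For example, if a user only needs to export data from a single table and does not hold DATAACCESS authority, granting SELECT privilege on that table is sufficient. A minimal sketch, assuming a hypothetical user DEVUSER:

GRANT SELECT ON TABLE emp_photo TO USER devuser;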

The supported file formats are DEL, PC/IXF, and Work Sheet File Format (WSF).

 The Db2 Import utility:

If you want to read data directly from an external file and insert it into a specific table or an updatable view, you can use the IMPORT utility. The external file can be in one of the following formats: DEL, ASC, PC/IXF, or WSF.

The basic syntax for the IMPORT command is as shown here:

IMPORT FROM [FileName] OF [DEL | ASC | WSF | IXF]

    <LOBS FROM [LOBPath]>

    <XML FROM [XMLPath]>

    <MODIFIED BY [Modifiers]>

    <Method>

    <XML PARSE [STRIP | PRESERVE] WHITESPACE>

    <XMLVALIDATE USING [XDS | SCHEMA [SchemaID]]>

    <ALLOW NO ACCESS | ALLOW WRITE ACCESS>

    <COMMITCOUNT [CommitCount] | COMMITCOUNT AUTOMATIC>

    <RESTARTCOUNT | SKIPCOUNT [RestartCount]>

    <WARNINGCOUNT [WarningCount]>

    <NOTIMEOUT>

    <MESSAGES [MsgFileName]>

    [CREATE | INSERT | INSERT_UPDATE | REPLACE | REPLACE_CREATE]

        INTO [TableName] <([ColumnNames])>

        <IN [TSName] <INDEX IN [TSName]> <LONG IN [TSName]>>

Where:

 FileName: Identifies the name and location of the external file from which to import data

LOBPath: Identifies one or more locations where the LOB data to be imported is stored

XMLPath: Identifies one or more locations where the XML documents to be imported are stored

Modifiers: Identifies one or more options to use to override the default behavior of the Import utility

Method: Identifies the method to use to read data values from an external file

SchemaID: Identifies the XML schema identifier against which to validate XML documents being imported

CommitCount: Identifies the number of rows to copy to the table/updatable view specified before a commit operation is performed

RestartCount: Identifies the number of rows of data to skip in the external file specified

WarningCount: Identifies the number of warnings to allow before the import operation is terminated

MsgFileName: Identifies the name and location of an external file to which to write messages produced by the Import utility as the import operation is performed

TableName: Identifies the name of the table or updatable view to which data is to be copied

ColumnNames: Identifies one or more specific columns to which to copy the data

TSName: Identifies the table space to store the table and its regular data, indexes, and long data/large object data if the table specified is to be created

A simple IMPORT command and a sample output look like the following:

 

    IMPORT FROM EMP.ixf OF IXF

    LOBS FROM /data

    MODIFIED BY lobsinfile

    MESSAGES /data/import.msg

    INSERT INTO abc.emp;

   

    -- Output would look something like below

   

    Number of rows read         = 8

    Number of rows skipped      = 0

    Number of rows inserted     = 8

    Number of rows updated      = 0

    Number of rows rejected     = 0

    Number of rows committed    = 8

There are five data copy control options available with the IMPORT command, and these control how data is copied into the target table. The options are as follows:

 

CREATE: The target table is created along with all of its associated indexes, and data is then imported into the new table. This option can also control the table space in which the new table is created. However, it can only be used when importing data from PC/IXF-formatted files, because the PC/IXF file format stores the table and associated index definitions along with the actual data.

INSERT: Data is inserted into the target table; the target table must already exist before the insert operation.

INSERT_UPDATE: Rows are inserted into the target table if they are not already present; if an imported row has a primary key value that matches an existing record, that record is updated instead (see the example after this list).

REPLACE: Any existing data is deleted from the target table, and then the new data is inserted. For this option to work, the table must already exist in the database.

REPLACE_CREATE: Any existing data is deleted from the target table if it already exists, and then the new data is inserted. However, if the target table does not exist, it will be created (only if the file format is PC/IXF) along with all the associated indexes. Data is then imported into the new table.
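
The following is a minimal INSERT_UPDATE sketch, assuming a DEPARTMENT table with a primary key on DEPTNO and a previously exported file department.ixf; rows with matching key values are updated, all others are inserted, and COMMITCOUNT commits the work after every 1,000 rows:

    IMPORT FROM department.ixf OF IXF
    COMMITCOUNT 1000
    MESSAGES department.msg
    INSERT_UPDATE INTO department;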

The authorizations required to invoke the IMPORT command based on the data copy control options are as stated below:

 

IMPORT with INSERT: DATAACCESS authority or CONTROL privilege on each participating table, view, or nickname, or INSERT and SELECT privilege on each participating table or view (see the sketch after this list)

IMPORT with INSERT_UPDATE: DATAACCESS authority or CONTROL privilege on each participating table, view, or nickname, or INSERT, SELECT, UPDATE, and DELETE privilege on each participating table or view

IMPORT with REPLACE_CREATE: DATAACCESS authority or CONTROL privilege on each participating table or view, or INSERT, SELECT, and DELETE privilege on each participating table or view. If the table does not exist, DBADM and CREATETAB authority are also required to create the new table and copy data into it.
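
For example, to allow a hypothetical user DEVUSER to run IMPORT with the INSERT option against the ABC.EMP table used above, without granting any broader authority, the table-level privileges listed above are enough:

GRANT SELECT, INSERT ON TABLE abc.emp TO USER devuser;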

The supported file formats are DEL, ASC, PC/IXF, and WSF.

 

The Db2 Load utility:

The Load utility also copies data into a database table from external files in the DEL, ASC, or PC/IXF format, or from the result of an SQL query by using the CURSOR file type.

 

Differences between Import and Load utilities

The Import utility supports copying data into tables, views, and nicknames, whereas the Load utility supports copying data only into tables.

Triggers and constraints are activated when using the Import utility, whereas they are not activated when using the Load utility (see the sketch after this list).

Both Import and Load utilities support XML and LOB data types.
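
Because the Load utility does not activate constraint checking during the load, a target table that has check or referential constraints is typically left in Set Integrity Pending state once the load completes, and the constraints must be validated afterwards. A minimal sketch of that follow-up statement, using the EMP_PHOTO table from the load examples later in this section:

SET INTEGRITY FOR emp_photo IMMEDIATE CHECKED;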

The basic syntax for the LOAD command is as shown:

 

    LOAD <CLIENT> FROM [FileName OF [DEL | ASC | IXF] |

        PipeName | Device | CursorName OF CURSOR]

    <LOBS FROM [LOBPath]>

    <MODIFIED BY [Modifiers]>

    <Method>

    <SAVECOUNT [SaveCount]>

    <ROWCOUNT [RowCount]>

    <WARNINGCOUNT [WarningCount]>

    <MESSAGES [MsgFileName]>

    <TEMPFILES PATH [TempFilesPath]>

    [INSERT | REPLACE | RESTART | TERMINATE]

    INTO [TableName] < ([ColumnNames])>

    <FOR EXCEPTION [ExTableName]>

    <STATISTICS [NO | USE PROFILE]>

    <NONRECOVERABLE | COPY YES TO [CopyLocation or TSM]>

    <WITHOUT PROMPTING>

    <DATA BUFFER [Size]>

    <INDEXING MODE [AUTOSELECT | REBUILD | INCREMENTAL |

        DEFERRED]>

    <ALLOW NO ACCESS | ALLOW READ ACCESS <USE [TmpTSName]>>

    <SET INTEGRITY PENDING CASCADE [IMMEDIATE | DEFERRED]>

Let's explore some of the terms from the preceding syntax:

 

FileName: Identifies the name and location of one or more external files from which to copy data

PipeName: Identifies the name of one or more named pipes in Unix from which to copy data

Device: Identifies the name of one or more devices from which to copy data

CursorName: Identifies the name of one or more cursors from which to copy data

LOBPath: Identifies one or more locations where the LOB data to be loaded is stored

Modifiers: Identifies one or more options to use to override the default behavior of the Load utility

Method: Identifies the method to use to extract data values from the external file(s) specified and map them to one or more columns of the target table

SaveCount: Identifies the number of rows to copy to the target table before the Load utility will establish a new consistency point

RowCount: Identifies the number of rows to load from the external file(s) into a table

WarningCount: Identifies the number of warning conditions the Load utility should ignore before terminating the load operation

MsgFileName: Identifies the name and location of an external file to which to write messages that the Load utility produces as the load operation is performed

TempFilesPath: Identifies the location to store temporary files that the Load utility might need

TableName: Identifies the name to assign to the table into which to load data (this cannot be the name of a system catalog table)

ColumnName: Identifies one or more specific columns (by name) into which to load data

ExTableName: Identifies the name to assign to the table to which to copy all rows that violate unique index or primary key constraints defined for the target table specified

CopyLocation: Identifies the directory or device that is to store a backup copy of all data loaded into the target table

Size: Identifies the number of 4 KB pages to use as buffered storage space for transferring data within the Load utility

TmpTSName: Identifies the system temporary table space in which to build shadow copies of indexes before they are copied to the appropriate regular table space for final storage during the Index Copy phase of a load operation

A simple LOAD command and a sample output look like the following:

 

    LOAD FROM /ds/data/EMP_PHOTO.ixf OF IXF

    LOBS FROM /ds/lob

    MODIFIED BY lobsinfile

    MESSAGES /ds/log/load.msg

    REPLACE INTO emp_photo

    STATISTICS USE PROFILE;

   

    -- Output would look something like below

    SQL3501W  The table space(s) in which the table resides will not be placed in backup pending state since forward recovery is disabled for the database.

    SQL3519W  Begin Load Consistency Point. Input record count = "0".

    SQL3520W  Load Consistency Point was successful.

    SQL3110N  The utility has completed processing.  "8" rows were read from the input file.

    SQL3519W  Begin Load Consistency Point. Input record count = "8".

    SQL3520W  Load Consistency Point was successful.

    SQL3515W  The utility has finished the "LOAD" phase at time "05/21/2018 08:36:54.443136".

    SQL3107W  At least one warning message was encountered during LOAD processing.

     Number of rows read         = 8

    Number of rows skipped      = 0

    Number of rows loaded       = 8

    Number of rows rejected     = 0

    Number of rows deleted      = 0

    Number of rows committed    = 8
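
As the syntax shows, the Load utility can also read its input from an SQL query through a declared cursor (the CURSOR file type) instead of from a file. A minimal sketch, assuming a hypothetical staging table abc.emp_stage whose columns match abc.emp:

    -- Declare a cursor over the source query, then load from it
    DECLARE emp_cur CURSOR FOR SELECT * FROM abc.emp_stage;

    LOAD FROM emp_cur OF CURSOR
    MESSAGES /ds/log/load_cursor.msg
    INSERT INTO abc.emp;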

 

The Load utility supports writing any row that violates a unique index or a primary key index to an exception table; however, the exception table must exist before the Load utility is run. The required exception table can be created using the following CREATE TABLE statement:

     CREATE TABLE emp_photo_exception LIKE emp_photo IN TS_DATA;

You can also add two optional columns to the exception table to record the insert timestamp and the reason (message) for each violation, using the ALTER TABLE statement:

     ALTER TABLE emp_photo_exception

    ADD COLUMN rec_ts TIMESTAMP

    ADD COLUMN rec_log_msg CLOB (32K);

The basic syntax to use the exception table during a load operation is as follows:

 

    LOAD FROM /ds/data/EMP_PHOTO.ixf OF IXF

    LOBS FROM /ds/lob

    MODIFIED BY lobsinfile

    MESSAGES /ds/log/load.msg

    REPLACE INTO emp_photo

    FOR EXCEPTION emp_photo_exception

    STATISTICS USE PROFILE;
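
After the load completes, any rows written to the exception table, along with the timestamp and message columns added earlier, can be reviewed with an ordinary query (EMPNO is assumed to be the key column of EMP_PHOTO, as in the Db2 SAMPLE database):

    SELECT empno, rec_ts, rec_log_msg FROM emp_photo_exception;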

There are four data-copy control options available with the LOAD command, and these control how data is copied into the target table. The options are as follows:

 

INSERT: Data is inserted into an existing target table. A LOAD INSERT operation into a column-organized table maintains table statistics by default.

REPLACE: Any existing data is deleted from the target table, and then the new data is loaded. As with a LOAD INSERT operation, a LOAD REPLACE operation into a column-organized table maintains table statistics by default.

If you specify KEEPDICTIONARY, an existing compression dictionary is preserved across the LOAD REPLACE operation. Conversely, if you specify RESETDICTIONARY, the LOAD REPLACE operation builds a new dictionary for the table data object (see the sketch after this list).

RESTART: Restarts an interrupted load operation. The load automatically continues from the last consistency point in the Load, Build, or Delete phase. This option is not supported for column-organized tables or RANDOM distribution tables.

TERMINATE: Terminates a previously interrupted load operation. If the previous load operation used the INSERT option, it is rolled back to the point in time at which it started; if it used the REPLACE option, the data in the target table is truncated.
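
A minimal sketch of the RESETDICTIONARY placement, reusing the earlier EMP_PHOTO load:

    LOAD FROM /ds/data/EMP_PHOTO.ixf OF IXF
    LOBS FROM /ds/lob
    MODIFIED BY lobsinfile
    MESSAGES /ds/log/load.msg
    REPLACE RESETDICTIONARY INTO emp_photo
    STATISTICS USE PROFILE;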
