Data Pump comes in two different forms: the executables expdp and impdp in the $ORACLE_HOME/bin directory, covered on this page, and the built-in package DBMS_DATAPUMP, linked at page bottom.
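For orientation, a minimal sketch of the package form follows; the schema name UWCLASS and the DATA_PUMP_DIR directory object are assumptions carried over from the demos below, not requirements of the API.

DECLARE
 h  NUMBER;        -- Data Pump job handle
 js VARCHAR2(30);  -- final job state returned by WAIT_FOR_JOB
BEGIN
  -- open a schema-mode export job
  h := dbms_datapump.open(operation => 'EXPORT', job_mode => 'SCHEMA');
  -- write the dump file to an existing directory object
  dbms_datapump.add_file(handle => h, filename => 'pkgdemo.dmp', directory => 'DATA_PUMP_DIR');
  -- restrict the job to a single schema
  dbms_datapump.metadata_filter(handle => h, name => 'SCHEMA_EXPR', value => 'IN (''UWCLASS'')');
  dbms_datapump.start_job(h);
  dbms_datapump.wait_for_job(h, js);
  dbms_output.put_line('Job state: ' || js);
END;
/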
Data Dictionary Objects
AMGT$DATAPUMP
CDB_DATAPUMP_JOBS
CDB_DATAPUMP_SESSIONS
DATAPUMP_DDL_TRANSFORM_PARAMS
DATAPUMP_DIR_OBJS
DATAPUMP_OBJECT_CONNECT
DATAPUMP_PATHMAP
DATAPUMP_PATHS
DATAPUMP_PATHS_VERSION
DATAPUMP_REMAP_OBJECTS
DATAPUMP_TABLE_DATA
DATA_PUMP_DIR
DATA_PUMP_XPL_TABLE$
DBA_DATAPUMP_JOBS
DBA_DATAPUMP_SESSIONS
DBA_DV_DATAPUMP_AUTH
DBMS_DATAPUMP
DBMS_DATAPUMP_DV_LIB
DBMS_DATAPUMP_INT
DBMS_DATAPUMP_UTL
DBMS_STREAMS_DATAPUMP
DBMS_STREAMS_DATAPUMP_UTIL
GV$DATAPUMP_JOB
GV$DATAPUMP_SESSION
GV_$DATAPUMP_JOB
GV_$DATAPUMP_SESSION
KU$_DATAPUMP_MASTER_10_1
KU$_DATAPUMP_MASTER_11_1
KU$_DATAPUMP_MASTER_11_1_0_7
KU$_DATAPUMP_MASTER_11_2
KU$_DATAPUMP_MASTER_12_0
KU$_DATAPUMP_MASTER_12_2
OLS$DATAPUMP
ORACLE_DATAPUMP
PSTDY_DATAPUMP_SUPPORT
SQL$TEXT_DATAPUMP
SQL$TEXT_DATAPUMP_TBL
SQL$_DATAPUMP
SQL$_DATAPUMP_TBL
SQLOBJ$AUXDATA_DATAPUMP
SQLOBJ$AUXDATA_DATAPUMP_TBL
SQLOBJ$DATA_DATAPUMP
SQLOBJ$DATA_DATAPUMP_TBL
SQLOBJ$PLAN_DATAPUMP
SQLOBJ$PLAN_DATAPUMP_TBL
SQLOBJ$_DATAPUMP
SQLOBJ$_DATAPUMP_TBL
TSDP$DATAPUMP
USER_DATAPUMP_JOBS
V$DATAPUMP_JOB
V$DATAPUMP_SESSION
V_$DATAPUMP_JOB
V_$DATAPUMP_SESSION
Data Move Methods
Conventional Path: When there are conflicting table attributes, Oracle Data Pump uses the conventional path to move data.
Data File Copy: The fastest method of moving data is to copy the database data files to the target database without interpreting or altering the data.
Direct Path: After data file copying, direct path is the fastest method of moving data. In this method, the SQL layer of the database is bypassed and rows are moved to and from the dump file with only minimal interpretation.
External Table: If data file copying is not selected, and the data cannot be moved using direct path, external tables may be a viable option.
Network Link: When the Import NETWORK_LINK parameter is used to specify a network link for an import operation, the direct path method is used by default. Review supported database link types.
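On releases that document the ACCESS_METHOD parameter (treat availability on your version as an assumption), the data move method can be forced rather than auto-selected; the servers table here is illustrative:

expdp uwclass/uwclass@pdbdev TABLES=servers DIRECTORY=data_pump_dir DUMPFILE=am.dmp ACCESS_METHOD=DIRECT_PATH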
File Move Methods
Transportable Tablespaces: The TRANSPORT_TABLESPACES parameter is used to specify a transportable tablespace export. Only metadata for the specified tablespaces is exported.
TRANSPORTABLE=ALWAYS: The TRANSPORTABLE=ALWAYS parameter is supplied on a table mode export (specified with the TABLES parameter), a full mode export (specified with the FULL parameter), or a full mode network import (specified with the FULL and NETWORK_LINK parameters).
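A sketch of the transportable flow (the tablespace name uwdata is an assumption; the tablespace must be read-only while its metadata and data files are captured, which is why READ WRITE cleanup steps appear later on this page):

ALTER TABLESPACE uwdata READ ONLY;

expdp uwclass/uwclass@pdbdev TRANSPORT_TABLESPACES=uwdata TRANSPORT_FULL_CHECK=yes DIRECTORY=data_pump_dir DUMPFILE=tts.dmp

ALTER TABLESPACE uwdata READ WRITE;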
Export Modes
Full: Use the FULL parameter. The entire database is unloaded. The EXP_FULL_DATABASE role is required.
Schema: Use the SCHEMAS parameter. This is the default export mode. If you have the EXP_FULL_DATABASE role, then you can specify a list of schemas and optionally include the schema definitions themselves, as well as system privilege grants to those schemas. If you do not have the EXP_FULL_DATABASE role, you can export only your own schema.
Table: Use the TABLES parameter. A specified set of tables, partitions, and their dependent objects are unloaded. You must have the EXP_FULL_DATABASE role to specify tables that are not in your own schema. All specified tables must reside in a single schema. Note that type definitions for columns are not exported in table mode. It is expected that the type definitions already exist in the target instance at import time.
Tablespace: Use the TABLESPACES parameter. Only tables contained in a specified set of tablespaces are unloaded. If a table is unloaded, its data, metadata, and dependent objects are also unloaded. In tablespace mode, if any part of a table resides in the specified set, then that table and all of its dependent objects are exported. Privileged users get all tables; nonprivileged users get only the tables in their own schemas.
Transportable Tablespace: Use the TRANSPORT_TABLESPACES parameter. In transportable tablespace mode, only the metadata for tables (and their dependent objects) within a specified set of tablespaces is unloaded. The EXP_FULL_DATABASE role is required, and the degree of parallelism must be 1.
Legacy: Data Pump enters legacy mode once it determines a parameter unique to original Export is present, either on the command line or in a script. As Data Pump processes the parameter, the analogous Data Pump Export parameter is displayed.
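One sketch command line per export mode (connect strings, dump file names, and the servers table and uwdata tablespace are illustrative):

expdp uwclass/uwclass@pdbdev FULL=yes DIRECTORY=data_pump_dir DUMPFILE=full.dmp
expdp uwclass/uwclass@pdbdev SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=schema.dmp
expdp uwclass/uwclass@pdbdev TABLES=servers DIRECTORY=data_pump_dir DUMPFILE=table.dmp
expdp uwclass/uwclass@pdbdev TABLESPACES=uwdata DIRECTORY=data_pump_dir DUMPFILE=tbs.dmp
expdp uwclass/uwclass@pdbdev TRANSPORT_TABLESPACES=uwdata DIRECTORY=data_pump_dir DUMPFILE=tts.dmp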
Import Modes
Full: A full import is specified using the FULL parameter. In full import mode, the entire content of the source (dump file set or another database) is loaded into the target database. This is the default for file-based imports. You must have the DATAPUMP_IMP_FULL_DATABASE role if the source is another database. Cross-schema references are not imported for nonprivileged users. For example, a trigger defined on a table within the importing user's schema, but residing in another user's schema, is not imported.
Schema: Schema import is specified using the SCHEMAS parameter. In a schema import, only objects owned by the specified schemas are loaded. The source can be a full, table, tablespace, or schema-mode export dump file set or another database. If you have the DATAPUMP_IMP_FULL_DATABASE role, then a list of schemas can be specified and the schemas themselves (including system privilege grants) are created in the database in addition to the objects contained within those schemas. Cross-schema references are not imported for nonprivileged users unless the other schema is remapped to the current schema. For example, a trigger defined on a table within the importing user's schema, but residing in another user's schema, is not imported.
Table: Table-mode import is specified using the TABLES parameter. In table mode, only the specified set of tables, partitions, and their dependent objects are loaded. The source can be a full, schema, tablespace, or table-mode export dump file set or another database. You must have the DATAPUMP_IMP_FULL_DATABASE role to specify tables that are not in your own schema. Use the transportable option during a table-mode import by specifying the TRANSPORTABLE=ALWAYS parameter with the TABLES parameter. This requires use of the NETWORK_LINK parameter as well.
Tablespace: Tablespace-mode import is specified using the TABLESPACES parameter. In tablespace mode, all objects contained within the specified set of tablespaces are loaded, along with their dependent objects. The source can be a full, schema, tablespace, or table-mode export dump file set or another database. For unprivileged users, objects not remapped to the current schema will not be processed.
Transportable Tablespace: Transportable tablespace import is specified using the TRANSPORT_TABLESPACES parameter. In transportable tablespace mode, the metadata from a transportable tablespace export dump file set or from another database is loaded. The data files, specified by the TRANSPORT_DATAFILES parameter, must be made available from the source system for use in the target database, typically by copying them over to the target system. Encrypted columns are not supported in transportable tablespace mode. This mode requires the DATAPUMP_IMP_FULL_DATABASE role, and the degree of parallelism must be 1.
Legacy: Data Pump enters legacy mode once it determines a parameter unique to original Import is present, either on the command line or in a script. As Data Pump processes the parameter, the analogous Data Pump Import parameter is displayed.
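And the import-side equivalents, again as sketches (the data file path in the transportable example is hypothetical):

impdp uwclass/uwclass@pdbdev FULL=yes DIRECTORY=data_pump_dir DUMPFILE=full.dmp
impdp uwclass/uwclass@pdbdev SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=schema.dmp
impdp uwclass/uwclass@pdbdev TABLES=servers DIRECTORY=data_pump_dir DUMPFILE=table.dmp
impdp uwclass/uwclass@pdbdev TABLESPACES=uwdata DIRECTORY=data_pump_dir DUMPFILE=tbs.dmp
impdp uwclass/uwclass@pdbdev TRANSPORT_TABLESPACES=uwdata TRANSPORT_DATAFILES='/u01/oradata/uwdata01.dbf' DIRECTORY=data_pump_dir DUMPFILE=tts.dmp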
Exporting Schemas
Demo Setup
conn / as sysdba
desc dba_directories
col owner format a10
col directory_path format a70
SELECT * FROM dba_directories;
CREATE OR REPLACE DIRECTORY data_pump_dir AS 'c:\temp';
-- default is $ORACLE_BASE/admin/<database_name>/dpdump
GRANT exp_full_database TO uwclass;
Note: Verify that the environment variables ORACLE_HOME and ORACLE_SID are set properly in your shell. If they are not, set them at the command line before proceeding.
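For example, in a bash shell (the paths and SID shown are illustrative only):

export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
export ORACLE_SID=orcl
export PATH=$ORACLE_HOME/bin:$PATH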
Basic Export Types
Note: After each export, run SELECT table_name FROM user_tables in the schema that performed the export (uwclass or abc in the examples that follow).
You will find master tables with names such as SYS_EXPORT_FULL_01 and SYS_EXPORT_SCHEMA_01. Examine their contents.
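A minimal schema-mode export that produces such a table (KEEP_MASTER retains the job's master table after successful completion so that it can be examined; dump file names are arbitrary):

expdp uwclass/uwclass@pdbdev SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=basic.dmp LOGFILE=basic.log KEEP_MASTER=yes

conn uwclass/uwclass@pdbdev

SELECT table_name FROM user_tables WHERE table_name LIKE 'SYS_EXPORT%';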
ALTER TABLESPACE users READ WRITE;
ALTER TABLESPACE example READ WRITE;
Legacy Mode Mappings
Original Export parameters and how Data Pump Export handles them:
BUFFER
This parameter is ignored because Data Pump does not make use of conventional mode.
COMPRESS
In original Export, the COMPRESS parameter affected how the initial extent was managed. Setting COMPRESS=n caused original Export to use current storage parameters for the initial and next extent.
The Data Pump COMPRESSION parameter is used to specify how data is compressed in the dump file, and is not related to the original Export COMPRESS parameter.
CONSISTENT
If original Export used CONSISTENT=y, then Data Pump Export determines the current time and uses FLASHBACK_TIME. Always use this option.
CONSTRAINTS
If original Export used CONSTRAINTS=n, then Data Pump Export uses EXCLUDE=CONSTRAINTS.
The default behavior is to include constraints as part of the export.
DIRECT
This parameter is ignored. Data Pump automatically chooses the best export method.
FEEDBACK
The Data Pump STATUS=30 command is used. Note that this is not a direct mapping because the STATUS command returns the status of the export job, as well as the rows being processed.
In original Export, feedback was given after a certain number of rows, as specified with the FEEDBACK command. In Data Pump, the status is given every so many seconds, as specified by STATUS.
FILE
Data Pump attempts to determine the path that was specified or defaulted to for the FILE parameter,
and also to determine whether a directory object exists to which the schema has read and write access.
GRANTS
If original Export used GRANTS=n, then Data Pump uses EXCLUDE=GRANT.
If original Export used GRANTS=y, then the parameter is ignored and does not need to be remapped because that is the Data Pump default behavior.
INDEXES
If original Export used INDEXES=n, then Data Pump uses the EXCLUDE=INDEX parameter.
If original Export used INDEXES=y, then Data Pump uses the INCLUDE=INDEX parameter.
LOG
Data Pump attempts to determine the path that was specified or defaulted to for the LOG parameter,
and also to determine whether a directory object exists to which the schema has read and write access.
OBJECT_CONSISTENT
This parameter is ignored because Data Pump processing ensures that each object is in a consistent state when being exported.
OWNER
The Data Pump SCHEMAS parameter is used.
RECORDLENGTH
This parameter is ignored because Data Pump automatically takes care of buffer sizing.
RESUMABLE
This parameter is ignored because Data Pump automatically provides this functionality.
RESUMABLE_NAME
This parameter is ignored because Data Pump automatically provides this functionality.
RESUMABLE_TIMEOUT
This parameter is ignored because Data Pump automatically provides this functionality.
ROWS
If original Export used ROWS=y, then Data Pump Export uses the CONTENT=ALL parameter.
If original Export used ROWS=n, then Data Pump Export uses the CONTENT=METADATA_ONLY parameter.
STATISTICS
This parameter is ignored because statistics are always saved for tables as part of a Data Pump operation.
TABLESPACES
If original Export also specified TRANSPORT_TABLESPACE=n, then Data Pump ignores the TABLESPACES parameter.
If original Export also specified TRANSPORT_TABLESPACE=y, then Data Pump takes the names listed for the TABLESPACES parameter and uses them.
TRANSPORT_TABLESPACE
If original Export used TRANSPORT_TABLESPACE=n (the default), then Data Pump uses the TABLESPACES parameter.
If original Export used TRANSPORT_TABLESPACE=y, then Data Pump uses the TRANSPORT_TABLESPACES parameter and only the metadata is exported.
TRIGGERS
If original Export used TRIGGERS=n, then Data Pump Export uses the EXCLUDE=TRIGGER parameter
TTS_FULL_CHECK
If original Export used TTS_FULL_CHECK=y, then Data Pump uses the TRANSPORT_FULL_CHECK parameter. If original Export used TTS_FULL_CHECK=n, then the parameter is ignored.
VOLSIZE
When the VOLSIZE parameter is used, it means the location specified for the dump file is a tape device. The Data Pump Export dump file format does not support tape devices, so the operation terminates with an error.
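A sketch of legacy mode being triggered: the OWNER, FILE, and LOG parameters below are original Export syntax, which Data Pump remaps to SCHEMAS, DUMPFILE, and LOGFILE as described above.

expdp uwclass/uwclass@pdbdev OWNER=uwclass FILE=legacy.dmp LOG=legacy.log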
ENCRYPTION_MODE
Default: The default mode depends on which other encryption-related parameters are used. If only the ENCRYPTION parameter is specified, then the default mode is TRANSPARENT.
If the ENCRYPTION_PASSWORD parameter is specified and the Oracle Encryption Wallet is open, then the default is DUAL. If the ENCRYPTION_PASSWORD parameter is specified and the Oracle Encryption Wallet is closed, then the default is PASSWORD.
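As an illustration, a password-encrypted export can request the mode explicitly (the password value is a placeholder, and dump file encryption is a separately licensed feature on some releases):

expdp uwclass/uwclass@pdbdev SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=enc.dmp ENCRYPTION=all ENCRYPTION_MODE=password ENCRYPTION_PASSWORD=placeholder_pwd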
-- exclude all (nonreferential) constraints, except for NOT NULL constraints and any constraints needed for successful table creation and loading
expdp uwclass/uwclass@pdbdev SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo16.dmp EXCLUDE=constraint
-- exclude object grants on all object types and system priv grants
expdp uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo18.dmp EXCLUDE=grant
-- excludes the definitions of users
expdp uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo19.dmp EXCLUDE=user
-- to exclude a specific user and all objects of that user, specify a filter such as the following
-- (where hr is the schema name of the user you want to exclude):
expdp uwclass/uwclass@pdbdev FULL=yes DIRECTORY=data_pump_dir DUMPFILE=demo21.dmp EXCLUDE=SCHEMA:\"='HR'\"
ALTER TABLESPACE users READ WRITE;
ALTER TABLESPACE example READ WRITE;
Importing Schemas
Legacy Mode Mappings
Original Import parameters and how Data Pump Import handles them:
BUFFER
This parameter is ignored because Data Pump does not make use of conventional mode.
CHARSET
This parameter is desupported and will cause the Data Pump Import operation to abort.
COMMIT
This parameter is ignored. Data Pump Import automatically performs a commit after each table is processed.
COMPILE
This parameter is ignored. Data Pump Import compiles procedures after they are created.
CONSTRAINTS
If original Import used CONSTRAINTS=n, then Data Pump Import uses the EXCLUDE=CONSTRAINT parameter.
If original Import used CONSTRAINTS=y, the parameter is ignored and does not need to be remapped because that is the Data Pump Import default behavior.
DATAFILES
The Data Pump Import TRANSPORT_DATAFILES parameter is used.
DESTROY
If original Import used DESTROY=y, then Data Pump Import uses the REUSE_DATAFILES=yes parameter.
If original Import used DESTROY=n, the parameter is ignored and does not need to be remapped because that is the Data Pump Import default behavior.
FEEDBACK
The Data Pump Import STATUS=30 command is used. Note that this is not a direct mapping because the STATUS command returns the status of the import job, as well as the rows being processed. In original Import,
feedback was given after a certain number of rows, as specified with the FEEDBACK command. In Data Pump Import, the status is given every so many seconds, as specified by STATUS.
FILE
Data Pump attempts to determine the path that was specified or defaulted to for the FILE parameter,
and also to determine whether a directory object exists to which the schema has read and write access.
FILESIZE
This parameter is ignored because the information is already contained in the Data Pump dump file set.
FROMUSER
The Data Pump SCHEMAS parameter is used. If FROMUSER was used without TOUSER also being used,
then import schemas that have the IMP_FULL_DATABASE role cause Data Pump Import to attempt to create the schema and then import that schema's objects.
Import schemas that do not have the IMP_FULL_DATABASE role can only import their own schema from the dump file set.
GRANTS
If original Import used GRANTS=n, then Data Pump uses the EXCLUDE=OBJECT_GRANT parameter.
If original Import used GRANTS=y, the parameter is ignored and does not need to be remapped because that is the Data Pump Import default behavior.
IGNORE
If original Import used IGNORE=y, then Data Pump Import uses the TABLE_EXISTS_ACTION=APPEND parameter.
This causes the processing of table data to continue. If original Import used IGNORE=n, the parameter is ignored and does not need to be remapped because that is the Data Pump Import default behavior.
INDEXES
If original Import used INDEXES=n, then Data Pump uses the EXCLUDE=INDEX parameter.
If original Import used INDEXES=y, the parameter is ignored and does not need to be remapped because that is the Data Pump default behavior.
INDEXFILE
The Data Pump Import SQLFILE=[directory_object:]file_name and INCLUDE=INDEX parameters are used.
The same method and attempts made when looking for a directory object described for the FILE parameter also take place for the INDEXFILE parameter.
If no directory object was specified on the original Import, then Data Pump uses the directory object specified with the DIRECTORY parameter.
LOG
Data Pump attempts to determine the path that was specified or defaulted to for the LOG parameter, and also to determine whether a directory object exists to which the schema has read and write access.
RECORDLENGTH
This parameter is ignored because Data Pump automatically takes care of buffer sizing.
RESUMABLE
This parameter is ignored because Data Pump automatically provides this functionality.
RESUMABLE_NAME
This parameter is ignored because Data Pump automatically provides this functionality.
RESUMABLE_TIMEOUT
This parameter is ignored because Data Pump automatically provides this functionality.
ROWS
If original Import used ROWS=n, then Data Pump uses the CONTENT=METADATA_ONLY parameter. If original Import used ROWS=y, then Data Pump uses the CONTENT=ALL parameter.
SHOW
If SHOW=y is specified, the Data Pump Import SQLFILE=[directory_object:]file_name parameter is used to write the DDL for the import operation to a file.
Only the DDL (not the entire contents of the dump file) is written to the specified file. (Note that the output is not shown on the screen as it was in original Import.)
STATISTICS
This parameter is ignored because statistics are always saved for tables as part of a Data Pump operation.
STREAMS_CONFIGURATION
This parameter is ignored because Data Pump Import automatically determines it; it does not need to be specified.
STREAMS_INSTANTIATION
This parameter is ignored because Data Pump Import automatically determines it; it does not need to be specified.
TABLESPACES
If original Import also specified TRANSPORT_TABLESPACE=n (the default), then Data Pump ignores the TABLESPACES parameter.
If original Import also specified TRANSPORT_TABLESPACE=y, then Data Pump takes the names supplied for this TABLESPACES parameter and applies them to the Data Pump TRANSPORT_TABLESPACES parameter.
TOID_NOVALIDATE
This parameter is ignored. OIDs are no longer used for type validation.
TOUSER
The REMAP_SCHEMA parameter is used. There may be more objects imported than with original Import. Data Pump may create the target schema if it does not already exist.
TRANSPORT_TABLESPACE
If original Import used TRANSPORT_TABLESPACE=n, then Data Pump Import ignores this parameter.
TTS_OWNERS
This parameter is ignored because this information is automatically stored in the Data Pump dump file set.
VOLSIZE
When the VOLSIZE parameter is used, it means the location specified for the dump file is a tape device. The Data Pump dump file format does not support tape devices. The operation terminates with an error.
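Correspondingly, original Import syntax on the impdp command line triggers the mappings above; FROMUSER/TOUSER become REMAP_SCHEMA (the hr source schema is a placeholder):

impdp uwclass/uwclass@pdbdev FROMUSER=hr TOUSER=uwclass FILE=legacy.dmp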
-- exclude all (nonreferential) constraints, except for NOT NULL constraints and any constraints
-- required for successful table creation and loading
impdp uwclass/uwclass@pdbdev SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo15.dmp EXCLUDE=constraint
-- exclude object grants on all object types and system priv grants
impdp uwclass/uwclass@pdbdev SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo17.dmp EXCLUDE=grant
-- excludes the definitions of users
impdp uwclass/uwclass@pdbdev SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo18.dmp EXCLUDE=user
-- to exclude a specific user and all objects of that user, specify a filter such as the following
-- (where hr is the schema name of the user you want to exclude):
impdp uwclass/uwclass@pdbdev FULL=yes DIRECTORY=data_pump_dir DUMPFILE=demo20.dmp EXCLUDE=SCHEMA:\"='HR'\"
FLASHBACK_SCN import
FLASHBACK_SCN=<scn_value>
conn / as sysdba
SELECT dbms_flashback.get_system_change_number FROM dual;
CREATE TABLE holder (
holder_name VARCHAR2(30) NOT NULL,
card_number NUMBER(16) NOT NULL);
CREATE TABLE activity (
vendor VARCHAR2(30) NOT NULL,
card_number NUMBER(16) NOT NULL,
sales_total NUMBER(10,2) NOT NULL);
INSERT INTO holder VALUES ('Morgan', 4124555566661234);
INSERT INTO holder VALUES ('Kyte', 3776555566665678);
INSERT INTO holder VALUES ('Norgaard', 5058555566669012);
INSERT INTO activity VALUES ('Amazon.com', 4124555566661234, 100.00);
INSERT INTO activity VALUES ('eBay.com', 4124555566661234, 59.50);
INSERT INTO activity VALUES ('Oracle', 3776555566665678, 50.50);
INSERT INTO activity VALUES ('Miracle', 5058555566669012, 42.42);
CREATE OR REPLACE PACKAGE hidedata AUTHID CURRENT_USER IS
FUNCTION newcc(oldno IN NUMBER) RETURN NUMBER;
END hidedata;
/
CREATE OR REPLACE PACKAGE BODY hidedata IS
TYPE cc_list IS TABLE OF NUMBER INDEX BY VARCHAR2(16);
cc_remaps cc_list;
cc_seed NUMBER(15) := 000000000000010;
FUNCTION newcc(oldno IN NUMBER) RETURN NUMBER IS
BEGIN
IF NOT cc_remaps.EXISTS(oldno) THEN
cc_seed := cc_seed+10;
cc_remaps(oldno) := ROUND(oldno, -15)+cc_seed+MOD(oldno,10);
END IF;
RETURN cc_remaps(oldno);
END newcc;
END hidedata;
/
col card_number format 9999999999999999
SELECT vendor, hidedata.newcc(card_number) CARD_NUMBER, sales_total
FROM activity;
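A hedged sketch of how the pieces combine: a network-mode import of the two tables as of the SCN captured above, remapping card numbers through hidedata.newcc. The database link name src_link is an assumption, and the <scn_value> placeholder must be replaced with the value returned by dbms_flashback.get_system_change_number (FLASHBACK_SCN on import requires NETWORK_LINK).

impdp uwclass/uwclass@pdbdev TABLES=holder,activity NETWORK_LINK=src_link FLASHBACK_SCN=<scn_value> DIRECTORY=data_pump_dir LOGFILE=fbscn.log REMAP_DATA=uwclass.holder.card_number:uwclass.hidedata.newcc REMAP_DATA=uwclass.activity.card_number:uwclass.hidedata.newcc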