Tuesday, February 3, 2009

Expdp fails with ORA-31693, ORA-06502, ORA-31605

Problem Description
$expdp parfile=pfile_maxim_history_sel.par directory=d

Export: Release 10.2.0.1.0 - 64bit Production on Monday, 05 January, 2009 17:23:52

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Username: system
Password:

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01": system/******** parfile=pfile_maxim_history_sel.par directory=d
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 964 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/AUDIT_OBJ
Processing object type TABLE_EXPORT/TABLE/TRIGGER
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
ORA-31693: Table data object "MAXIM"."HISTORY" failed to load/unload and is being skipped due to error:
ORA-06502: PL/SQL: numeric or value error
ORA-31605: the following was returned from LpxXSLSetTextVar in routine kuxslSetParam:
LPX-314: an internal failure occurred
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
/db1/oracle/dump/expdat.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 17:24:11

My parameter file was below,
$ cat pfile_maxim_history_sel.par
tables=maxim.history
query=maxim.history:'"where cdate between '01-JAN-09' and '03-JAN-09'"'

Cause of the Problem
In this Data Pump export, the errors ORA-31693, ORA-06502, ORA-31605 and LPX-314 were raised because of incorrect quoting of the QUERY parameter in the parameter file: the double-quoted (") WHERE clause is wrapped in an extra pair of single quotes ('), which Data Pump cannot parse.

Solution of the Problem
Remove the surrounding single quotes (') from the QUERY parameter in the parameter file.
$ cat pfile_maxim_history_sel.par
tables=maxim.history
query=maxim.history:"where cdate between '01-JAN-09' and '03-JAN-09'"

Now rerun the Data Pump export operation.
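Note that the correct quoting depends on where QUERY is specified. Inside a parameter file no shell is involved, so the double quotes can be written literally, as above. If QUERY is passed directly on the expdp command line instead, the shell strips unescaped quotes, so each quote must be escaped with a backslash. The sketch below uses echo standing in for expdp (so it runs without a database) to show exactly what the shell would deliver to the program:

```shell
# 'echo' stands in for expdp here, purely to make the shell's quote
# processing visible: each \" and \' survives as a literal quote character.
echo query=maxim.history:\"where cdate between \'01-JAN-09\' and \'03-JAN-09\'\"
# prints: query=maxim.history:"where cdate between '01-JAN-09' and '03-JAN-09'"
```

Because the parameter file bypasses the shell entirely, it is usually the less error-prone place to put a QUERY clause.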

Improving Index creation speed in Oracle

Creating an index on a table with a very large number of rows can be time consuming. For example, if you are asked to build an index over a billion rows, the build might take more than six hours, and you will want to make it faster.

By supplying a few options to the CREATE INDEX statement you can improve index creation speed dramatically.

1)PARALLEL Option:
To build an index, Oracle must first collect the key/ROWID pairs with a full-table scan. With the PARALLEL option supplied to the CREATE INDEX statement, Oracle scans the table in parallel, based on the number of CPUs, table partitioning and disk configuration, and thus completes the work faster.

On a server that has 6 CPUs you might specify parallel 5, as below, leaving one CPU free for other work.
create index table_1_I on table_1(id,code) parallel 5;

2)NOLOGGING Option: With the NOLOGGING option, index creation generates only minimal redo instead of logging every change, so you avoid producing a large volume of redo and achieve higher performance. Keep in mind that because the operation is not fully logged, the index cannot be rebuilt by media recovery, so take a backup afterwards.
create index table_1_I on table_1(id,code) parallel 5 nologging;

3)COMPRESS Option: The COMPRESS option enables key compression, which eliminates repeated occurrences of key column values and may substantially reduce storage. You can follow COMPRESS with an integer to specify the prefix length (the number of prefix columns to compress).
For unique indexes, the valid range of prefix length values is from 1 to the number of key columns minus 1. The default prefix length is the number of key columns minus 1.

For nonunique indexes, the valid range of prefix length values is from 1 to the number of key columns. The default prefix length is the number of key columns.

You can use as,
create index table_1_I on table_1(id,code) parallel 5 nologging compress;
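To control the prefix length explicitly rather than accept the default, give the integer after COMPRESS. Using the same hypothetical table and index names as above, this compresses only the leading id column:

```sql
-- Compress only the first key column (id); code values are stored uncompressed.
-- Useful when the leading column repeats heavily but the trailing one does not.
create index table_1_I on table_1(id, code) parallel 5 nologging compress 1;
```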

4)Index in a bigger block size: You can create the index in a tablespace that uses a larger block size than the database default. This suits DSS environments in particular, and it can improve performance both while creating the index and while scanning it.

You can do this by first creating a tablespace with a 32K block size and then creating the index in it. Note that the instance must already have a 32K buffer cache configured (DB_32K_CACHE_SIZE) before such a tablespace can be created.

create tablespace ts_32k
datafile '/u01/32k_file.dbf'
blocksize 32k;


create index table_1_I on table_1(id,code) parallel 5 nologging compress tablespace ts_32k;

ORA-01779: cannot modify a column which maps to a non key-preserved table

Problem Description
I wanted to update a table column based on another table's data, but the statement failed with
ORA-01779: cannot modify a column which maps to a non key-preserved table.

SQL> update
(select t1.code col1, t2.code col2 from table_1 t1
JOIN table_2 t2 ON t1.id=t2.id)
set col1=col2;
set col1=col2
*
ERROR at line 4:
ORA-01779: cannot modify a column which maps to a non key-preserved table

Cause, Description and Solution of the Problem
Let's look at both tables' data.

SQL> select * from table_1;

ID CODE
---------- --------------------
1 First Row
2 Rows to be updated

SQL> select * from table_2;

ID CODE
---------- --------------------
2 Second Row

From the above update statement, the SELECT part returns the following row:


SQL> select t1.code col1, t2.code col2 from table_1 t1
JOIN table_2 t2 ON t1.id=t2.id;

COL1 COL2
-------------------- --------------------
Rows to be updated Second Row

In this case the col1 value will be replaced by the col2 value. Now consider what happens if the source table's id column is not unique:

SQL> insert into table_2 values(2,'Test Row');

1 row created.

SQL> select t1.code col1, t2.code col2 from table_1 t1
JOIN table_2 t2 ON t1.id=t2.id;


COL1 COL2
-------------------- --------------------
Rows to be updated Second Row
Rows to be updated Test Row

Now, which value of col2 should replace the col1 data "Rows to be updated": "Second Row" or "Test Row"? It is ambiguous. To update col1 through a join view like this, the id column of the table supplying col2 must be unique (making that table key-preserved), so that no such ambiguity can arise.

SQL> rollback;

Rollback complete.

Adding a unique constraint solves the problem.
SQL> alter table table_2 add constraint table_2_uk unique(id);

Table altered.

Now id is unique, so the ambiguous situation cannot occur.

SQL> select t1.code col1, t2.code col2 from table_1 t1
JOIN table_2 t2 ON t1.id=t2.id;

COL1 COL2
-------------------- --------------------
Rows to be updated Second Row

So the update works fine.

SQL> update
(select t1.code col1, t2.code col2 from table_1 t1
JOIN table_2 t2 ON t1.id=t2.id)
set col1=col2;

1 row updated.

SQL> select * from table_1;

ID CODE
---------- --------------------
1 First Row
2 Second Row

ORA-12838: cannot read/modify an object after modifying it in parallel

Problem Description
SQL> INSERT /*+APPEND NOLOGGING */ INTO STATION_RATE(ID,COUNTRY,CODE,PURCHASE_RATE,STATION_ID) SELECT
ID.NEXTVAL, COUNTRY,CODE,PURCHASE_RATE, 4 from STATION_RATE_DUMMY;


7561 rows created.

SQL> INSERT /*+APPEND NOLOGGING */ INTO STATION_RATE(ID,COUNTRY,CODE,PURCHASE_RATE,STATION_ID) SELECT
ID.NEXTVAL, COUNTRY,CODE,PURCHASE_RATE, 5 from STATION_RATE_DUMMY;

INSERT /*+APPEND NOLOGGING */ INTO STATION_RATE(ID,COUNTRY,CODE,PURCHASE_RATE,STATION_ID) SELECT
*
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel

Cause of the Problem
Within a single transaction, once a table has been modified in parallel or loaded with a direct-path insert, Oracle does not allow that table to be read or modified again until the transaction ends. Any such attempt raises ORA-12838.

Solution of the Problem
Break the transaction in two, or rewrite it. The simplest fix is to issue a COMMIT after the direct-path load or parallel modification, before touching the table again.

SQL> commit;

Commit complete.

SQL> INSERT /*+APPEND NOLOGGING */ INTO STATION_RATE(ID,COUNTRY,CODE,PURCHASE_RATE,STATION_ID) SELECT
ID.NEXTVAL, COUNTRY,CODE,PURCHASE_RATE, 5 from STATION_RATE_DUMMY;


7561 rows created.

Because the data was just loaded with a direct-path insert, we cannot read from the table until we commit.

SQL> select count(*) from station_rate;
select count(*) from station_rate
*
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel


SQL> commit;

Commit complete.

SQL> select count(*) from station_rate;

COUNT(*)
----------
53228
