Thursday, January 29, 2009

ORA-02070: database does not support in this context

Problem Description
I wanted to compare the columns of table HISTORY between two databases using a database link as below, and it resulted in error ORA-02070.
SQL> select * from cols where table_name='HISTORY' minus select * from cols@tiger1.net where table_name='HISTORY';
select * from cols where table_name='HISTORY' minus select * from cols@tiger1.net where table_name='HISTORY'
*
ERROR at line 1:
ORA-02070: database TIGER1.NET does not support operator NLS_CHARSET_DECL_LEN
in this context

I found that COLS contains a LONG column, and hence the error above. If I query only the LONG column DATA_DEFAULT, the same ORA-02070 is returned, but with a different message.

SQL> select data_default from cols minus select data_default from cols@tiger1.net;
select data_default from cols minus select data_default from cols@tiger1.net
*
ERROR at line 1:
ORA-02070: database does not support in this context

Experiment and cause of the Error
The reason behind the error is the LONG datatype of the DATA_DEFAULT column. Whenever you access a LONG column of COLS over a database link, the above error returns. However, as an experiment I created a table with a LONG column and did not get the above error. I conclude that only system-defined tables/views/synonyms throw the above error, while user-defined tables/views/synonyms throw ORA-00997.

For the experiment,
1)Create table t with only a LONG column in both databases and then try a MINUS operation over the database link.

SQL> create table t(a long);

Table created.

SQL> select * from t minus select * from t@tiger1.net;
select * from t minus select * from t@tiger1.net
*
ERROR at line 1:
ORA-00997: illegal use of LONG datatype
ORA-02063: preceding line from TIGER1.NET

So, in order to avoid the above errors, simply do not perform any such operation on a LONG column through a database link.
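If you only need to compare the column definitions of HISTORY, one workaround (a sketch that simply leaves the LONG column DATA_DEFAULT out of the comparison; add or remove columns as you need) is to select only the non-LONG columns of cols on both sides:

select table_name, column_name, data_type, data_length
from cols
where table_name='HISTORY'
minus
select table_name, column_name, data_type, data_length
from cols@tiger1.net
where table_name='HISTORY';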

Interact with the executing data pump jobs in oracle

Oracle Data Pump export jobs run on the server machine. Sometimes you start a Data Pump job from a remote machine, and sometimes directly on the server. If you close your window, the Data Pump export operation still runs in the background on the server. You may wish to control a job that is running in the background, to know its status (for example, which objects it is currently processing), or to kill it.

In order to do that, first determine which jobs are currently running and check their state by querying the dba_datapump_jobs view.

SQL> select job_name,state from dba_datapump_jobs;

JOB_NAME STATE
------------------------------ ------------------------------
SYS_EXPORT_SCHEMA_01 EXECUTING

You may also wish to see other columns of the view, such as owner_name, operation, etc.

Now, in order to enter the interactive mode of an executing job, just ATTACH to it as below,

SQL> $expdp maximsg/a attach=SYS_EXPORT_SCHEMA_01

Export: Release 10.2.0.1.0 - Production on Thursday, 01 January, 2009 11:43:49

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Produc
tion
With the Partitioning, OLAP and Data Mining options

Job: SYS_EXPORT_SCHEMA_01
Owner: MAXIMSG
Operation: EXPORT
Creator Privs: FALSE
GUID: BF0614AD15254C7B964B78333B7D9E1A
Start Time: Thursday, 01 January, 2009 11:38:30
Mode: SCHEMA
Instance: tiger
Max Parallelism: 1
EXPORT Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND maximsg/******** dumpfile=d:part1.dmp,e:part2.dmp,dat
a_pump_dir%U.dmp
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: E:\oracle\product\10.2.0\admin\tiger\dpdump\data_pump_dir%u.dmp
Dump File: D:\PART1.DMP
bytes written: 704,512
Dump File: E:\PART2.DMP
bytes written: 4,096
Dump File: E:\ORACLE\PRODUCT\10.2.0\ADMIN\TIGER\DPDUMP\DATA_PUMP_DIR01.DMP
bytes written: 4,096

Worker 1 Status:
State: EXECUTING
Object Schema: MAXIMSG
Object Name: ACCOUNTING
Object Type: SCHEMA_EXPORT/TABLE/TABLE_DATA
Completed Objects: 1
Total Objects: 171
Completed Rows: 8,169,569
Worker Parallelism: 1

Now you have reached interactive mode. To return to the command-line (logging) mode, just type continue_client.
Export> continue_client

In order to kill the job that is executing,
Export> kill_job
Are you sure you wish to stop this job ([yes]/no): yes

In this way, through the ATTACH parameter, you can interact with a running job. Similarly you can interact with a Data Pump import operation using impdp.
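Besides continue_client and kill_job, the same interactive prompt also lets you pause and later resume a job. A minimal sketch (the job name is the one from the example above; yours will differ):

Export> stop_job=immediate

Then, at a later time, re-attach and resume it:

$expdp maximsg/a attach=SYS_EXPORT_SCHEMA_01
Export> start_job
Export> status
Export> continue_client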

ORA-39095: Dump file space has been exhausted

Problem Description
I ran my Data Pump export operation with a combination of multiple dumpfile arguments and the filesize parameter, but it failed with ORA-39095 as below.

maximsg@TIGER> $expdp maximsg/a dumpfile=d:parti1.dmp,e:parti2.dmp,c:resit%U.dmp filesize=188417 tablespaces=users

Export: Release 10.2.0.1.0 - Production on Thursday, 01 January, 2009 12:23:30

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "MAXIMSG"."SYS_EXPORT_TABLESPACE_05": maximsg/******** dumpfile=d:parti1.dmp,e:parti2.dmp,c:resit%U.dmp filesize=188417 tablespace
s=users
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 20.12 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
ORA-39095: Dump file space has been exhausted: Unable to allocate 188416 bytes
Job "MAXIMSG"."SYS_EXPORT_TABLESPACE_05" stopped due to fatal error at 12:23:47

A variant of the above is shown below.
>expdp maximsg/a@tiger tablespaces=users filesize=200K directory=d dumpfile=b.dmp

Export: Release 10.2.0.1.0 - Production on Saturday, 03 January, 2009 22:29:34

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "MAXIMSG"."SYS_EXPORT_TABLESPACE_08": maximsg/********@tiger tablespaces=users filesize=200K directory=d dumpfile=b.d
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 20.12 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
ORA-39095: Dump file space has been exhausted: Unable to allocate 208896 bytes
Job "MAXIMSG"."SYS_EXPORT_TABLESPACE_08" stopped due to fatal error at 10:30:17

Note that in both the first and second cases the same error is returned, but the number of bytes reported as unable to allocate differs: 188416 bytes in the first case and 208896 bytes in the second.

Cause of the Error
There are many causes for which ORA-39095 can be raised.
1)If the sum of the dump file sizes specified in the Data Pump parameters is less than the total size of the dump that the export produces, ORA-39095 returns. In the second case, for example, the filesize is specified as 200K and only one dumpfile is specified, so the maximum dump file space that can be produced is no more than 200K. If the dump to be produced is larger than 200K, the above error returns.

Note that the %U specification for the dump file can expand up to 99 files.
If 99 files have been generated before the export completes, ORA-39095 is returned again. If %U had been specified in the second case, the maximum dump file space would be 99*200K = 19800K; if the dump to be produced is larger than 19800K, the error would still return.


2)In the first case the specified filesize is simply too small, and hence the error returns. FILESIZE was defined as 188417 bytes (about 184K), and the error message reports that it was unable to allocate 188416 bytes. The solution is to allow at least one extra block of space; rounding FILESIZE up to 188K solved the problem, as shown below.


maximsg@TIGER> $expdp maximsg/a dumpfile=d:part1.dmp,e:part2.dmp,c:rest%U.dmp filesize=188K tablespaces=users

Export: Release 10.2.0.1.0 - Production on Thursday, 01 January, 2009 12:25:09

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "MAXIMSG"."SYS_EXPORT_TABLESPACE_06": maximsg/******** dumpfile=d:part1.dmp,e:part2.dmp,c:rest%U.dmp filesize=188K tablespaces=use
rs
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 20.12 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "CR_2"."OUT_CDR_TEST" 310.9 KB 2166 rows
. . exported "CR_2"."OUT_CDR" 484.4 KB 2606 rows
. . exported "CR_2"."USER_IP" 126 KB 3403 rows
. . exported "CR_2"."OUT_CDR_TEMP" 33.72 KB 361 rows
. . exported "CR_2"."USERS" 42.57 KB 230 rows
. . exported "SCOTT"."DEPT" 5.656 KB 4 rows
. . exported "SCOTT"."EMP" 7.851 KB 15 rows
. . exported "SCOTT"."SALGRADE" 5.585 KB 5 rows
. . exported "SCOTT"."BONUS" 0 KB 0 rows
Master table "MAXIMSG"."SYS_EXPORT_TABLESPACE_06" successfully loaded/unloaded
******************************************************************************
Dump file set for MAXIMSG.SYS_EXPORT_TABLESPACE_06 is:
D:\PART1.DMP
E:\PART2.DMP
C:\REST01.DMP
C:\REST02.DMP
C:\REST03.DMP
C:\REST04.DMP
C:\REST05.DMP
Job "MAXIMSG"."SYS_EXPORT_TABLESPACE_06" successfully completed at 12:25:32


Solution of the Error
1)Increase the number of dump files so that the dump produced by the export operation is smaller than the total dump file space specified in expdp.
With the 'wildcard' specification (%U) the dump file can expand up to 99 files.

2)Use a bigger filesize value.

3)Another option is to use,
dumpfile=dmpdir1:part1_%U.dmp, dmpdir2:part2_%U.dmp, dmpdir3:part3_%U.dmp
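Combining these ideas, a minimal sketch of a safer command line for the export above (the directory objects dmpdir1 and dmpdir2 and the 500M size are assumptions; adjust them to your environment):

expdp maximsg/a tablespaces=users filesize=500M dumpfile=dmpdir1:users_%U.dmp,dmpdir2:users_%U.dmp logfile=dmpdir1:users_exp.log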

ORA-00054: resource busy and acquire with NOWAIT specified

Problem Description
In my production database (Oracle 10.2), while I was adding a column to one of my transaction tables, it failed with an ORA-00054 error as below.

SQL> alter table student add b number;
alter table student add b number
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified

Description of the Problem
Let's try to reproduce the problem in our development environment. I have opened two sessions connected to the database under the same schema. In one session, I have created a table and inserted data into it.
SQL> create table a (a number);

Table created.

SQL> insert into a values(1);

1 row created.

I have not yet committed the data in session 1. Now, in the other session, whenever I try any DDL (such as ALTER TABLE or DROP TABLE), ORA-00054 is produced.

In another session,
SQL> alter table a add b number;
alter table a add b number
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified


SQL> drop table a;

drop table a
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified

SQL> lock table a in exclusive mode nowait;

lock table a in exclusive mode nowait
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified


Cause of the Problem
Whenever you try to make any structural change to a table, Oracle tries to lock the table exclusively with the NOWAIT option (this is the behavior in 10.2, while in 11g you can change the wait timeout). If Oracle fails to lock the table exclusively, ORA-00054 occurs.
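For reference, in 11g the wait timeout mentioned above can be set so that the DDL waits for the lock instead of failing immediately. A minimal sketch (the 30-second value is only an example):

-- 11g and later: let DDL wait up to 30 seconds for the exclusive lock
alter session set ddl_lock_timeout=30;
alter table student add b number;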

Solution of the Problem
In 10.2 you are limited to a few choices to solve the problem. To avoid it,

-Re-run the DDL at a later time when the database becomes idle.
or,

-Kill the sessions that are preventing the exclusive lock.
or,

-Prevent end users from connecting to the database and then run the DDL.

There are several views you can use to see locking information about the table.
1)DBA_BLOCKERS: Displays a session if it is not waiting for a locked object but is holding a lock on an object for which another session is waiting. In our scenario this view will not help.

2)DBA_DDL_LOCKS: It lists all DDL locks held in the database and all outstanding requests for a DDL lock.

3)DBA_DML_LOCKS: It lists all DML locks held in the database and all outstanding requests for a DML lock.
If you query it, in the mode_held column you will see the row exclusive lock.
SQL> select mode_held from dba_dml_locks where owner='MAXIMSG';

MODE_HELD
-------------
Row-X (SX)

4)DBA_LOCK: It lists all locks or latches held in the database, and all outstanding requests for a lock or latch.

5)DBA_LOCK_INTERNAL: It displays a row for each lock or latch that is being held, and one row for each outstanding request for a lock or latch.

6)DBA_LOCKS is a synonym for DBA_LOCK.

7)DBA_WAITERS: Shows all the sessions that are waiting for a lock.
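For instance, if another session were actually blocked waiting for a lock, a quick check against DBA_WAITERS (a sketch; the result depends entirely on the sessions currently connected) could be:

select waiting_session, holding_session, lock_type, mode_held, mode_requested
from dba_waiters;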

In order to see the locked objects, query,

SQL> set linesize 130
SQL> set pages 100
SQL> col username format a20
SQL> col sess_id format a10
SQL> col object format a25
SQL> col mode_held format a10
SQL> select oracle_username || ' (' || s.osuser || ')' username
2 , s.sid || ',' || s.serial# sess_id
3 , owner || '.' || object_name object
4 , object_type
5 , decode( l.block
6 , 0, 'Not Blocking'
7 , 1, 'Blocking'
8 , 2, 'Global') status
9 , decode(v.locked_mode
10 , 0, 'None'
11 , 1, 'Null'
12 , 2, 'Row-S (SS)'
13 , 3, 'Row-X (SX)'
14 , 4, 'Share'
15 , 5, 'S/Row-X (SSX)'
16 , 6, 'Exclusive', TO_CHAR(lmode)) mode_held
17 from v$locked_object v
18 , dba_objects d
19 , v$lock l
20 , v$session s
21 where v.object_id = d.object_id
22 and v.object_id = l.id1
23 and v.session_id = s.sid
24 order by oracle_username
25 , session_id
26 /

USERNAME SESS_ID OBJECT OBJECT_TYPE STATUS MODE_HELD
-------------------- ---------- ------------------------- ------------------- ------------ ----------
MAXIMSG (A) 142,232 MAXIMSG.A TABLE Not Blocking Row-X (SX)
OMS (DBA2\ershad) 143,280 OMS.T TABLE Not Blocking Row-X (SX)
OMS (DBA2\ershad) 143,280 OMS.T1 TABLE Not Blocking Row-X (SX)

Here we see three locked objects are listed. In our case the object is A, and the session with SID=142 and SERIAL#=232 is preventing us from locking the table in exclusive mode. So, to achieve our task, we can kill that session as below.

SQL> alter system kill session '142, 232';

System altered.

SQL> alter table a add b number;

Table altered.

Now, in the first session, whenever you try to do anything an error will be generated saying that the session has been killed. In the first session:
SQL> commit;
commit
*
ERROR at line 1:
ORA-00028: your session has been killed

Updating a table based on another table

Many times we need to update column(s) of a table based on the data in another table's column(s). There are several ways to do the task.

Let's show an example by creating the tables and entering values into them.

Create table table_1(id number, code varchar2(20));
insert into table_1 values(1,'First Row');
insert into table_1 values(2, 'Rows to be updated');
Create table table_2(id number, code varchar2(20));
insert into table_2 values(2,'Second Row');

After the above statements, let's look at the data in the tables.

SQL> select * from table_1;

ID CODE
---------- --------------------
1 First Row
2 Rows to be updated

SQL> select * from table_2;

ID CODE
---------- --------------------
2 Second Row

Now my requirement is to update table_1 based on the data in table_2's id column. If a corresponding id exists in table_1, then that row's code will be updated.

Method 01:

SQL> update table_1 set code=
(select t2.code from table_2 t2 JOIN table_1 t1 ON t1.id=t2.id)
where table_1.id in(select id from table_2);


1 row updated.

SQL> select * from table_1;

ID CODE
---------- --------------------
1 First Row
2 Second Row

Method 02:

SQL> update table_1 t1 set code=
(select t2.code from table_2 t2 JOIN table_1 t1 ON t2.id=t1.id)
where exists
(select t2.code from table_2 t2 where t1.id=t2.id);


1 row updated.

SQL> select * from table_1;


ID CODE
---------- --------------------
1 First Row
2 Second Row


Method 03:

In order to apply method 03 you need a primary or unique key column in the source table, i.e. in the table from which we are fetching data for the update. It is needed because without this constraint the join could return multiple rows per key, which would create an ambiguous situation.

So, I am adding a unique constraint on table_2.

SQL> alter table table_2 add constraint table_2_UK UNIQUE (id);

Table altered.

SQL> update
(select t1.code col1, t2.code col2 from table_1 t1
JOIN table_2 t2 ON t1.id=t2.id)
set col1=col2;


1 row updated.

SQL> select * from table_1;

ID CODE
---------- --------------------
1 First Row
2 Second Row


In most cases method 03 will perform better than the other methods.
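As a side note, the same update can also be written with a MERGE statement. A minimal sketch for the tables above (only matching rows are updated; no insert branch is used):

merge into table_1 t1
using table_2 t2
on (t1.id=t2.id)
when matched then
update set t1.code=t2.code;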

Formatting SQL*Plus Reports Part 2

Computing a Summary of a Column
With a combination of the BREAK and COMPUTE commands you can perform several calculations on SQL reports in SQL*Plus.
Remember that the COMPUTE command has no effect without a corresponding BREAK command.

Below is the list of SQL*Plus COMPUTE functions and their effects.

1)SUM: Sum of the values in the column.

2)MINIMUM: Minimum value in the column.

3)MAXIMUM: Maximum value in the column.

4)AVG: Average of the values in the column.

5)STD: Standard deviation of the values in the column.

6)VARIANCE: Variance of the values in the column.

7)COUNT: Number of non-null values in the column.

8)NUMBER: Number of rows in the column.

Let's look at our data,

SQL> select dept_id,name,salary from emp;

DEPT_ID NAME SALARY
---------- ------------ ----------
10 SHAIK 60000
10 ABDUL 22000
10 KHALEEL 40000
20 IBRAHIM 35000
20 RAVI 60000
30 KASARAPU 25000

6 rows selected.

Now we wish to compute the total of SALARY by department. To do that use,

SQL> break on dept_id
SQL> compute sum of salary on dept_id
SQL> /


DEPT_ID NAME SALARY
---------- ------------ ----------
10 SHAIK 60000
ABDUL 22000
KHALEEL 40000

********** ----------
sum 122000
20 IBRAHIM 35000
RAVI 60000

********** ----------
sum 95000
30 KASARAPU 25000
********** ----------
sum 25000

6 rows selected.


Note that the word 'sum' appears at every break. If you do not want to print the word 'sum', do as follows,

SQL> COLUMN DUMMY NOPRINT;
SQL> COMPUTE SUM OF SALARY ON DUMMY;
SQL> BREAK ON DUMMY SKIP 1;
SQL> select dept_id DUMMY,dept_id,name,salary from emp;


DEPT_ID NAME SALARY
---------- ------------ ----------
10 SHAIK 60000
10 ABDUL 22000
10 KHALEEL 40000
----------
122000

20 IBRAHIM 35000
20 RAVI 60000

----------
95000
30 KASARAPU 25000

----------
25000
6 rows selected.

To compute the sum of salaries only at the end of the report,

SQL> COLUMN DUMMY NOPRINT;
SQL> COMPUTE SUM OF SALARY ON DUMMY;
SQL> BREAK ON DUMMY;
SQL> SELECT NULL DUMMY,DEPT_ID,NAME,SALARY FROM EMP;


DEPT_ID NAME SALARY
---------- ------------ ----------
10 SHAIK 60000
10 ABDUL 22000
10 KHALEEL 40000
20 IBRAHIM 35000
20 RAVI 60000
30 KASARAPU 25000

----------
242000

6 rows selected.


To calculate the grand total of salary and give it a label, do,

SQL> BREAK ON REPORT
SQL> COMPUTE SUM LABEL TOTAL OF SALARY ON REPORT
SQL> select dept_id,name,salary from emp;


DEPT_ID NAME SALARY
---------- ------------ ----------
10 SHAIK 60000
10 ABDUL 22000
10 KHALEEL 40000
20 IBRAHIM 35000
20 RAVI 60000
30 KASARAPU 25000

----------
TOTAL 242000

6 rows selected.

To compute both the average and the sum of salaries for a department, do,

SQL> BREAK ON DEPT_ID
SQL> COMPUTE AVG SUM OF SALARY ON DEPT_ID
SQL> select dept_id,name,salary from emp where dept_id=10;


DEPT_ID NAME SALARY
---------- ------------ ----------
10 SHAIK 60000
ABDUL 22000
KHALEEL 40000

********** ----------
avg 40666.6667
sum 122000


To see the current COMPUTE settings,
SQL> compute
COMPUTE sum LABEL 'sum' OF SALARY ON DUMMY
COMPUTE sum LABEL 'TOTAL' OF SALARY ON REPORT
COMPUTE avg LABEL 'avg' sum LABEL 'sum' OF SALARY ON DEPT_ID
To remove all COMPUTE definitions,
SQL> clear compute
computes cleared
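The BREAK definitions set above also persist for the rest of the SQL*Plus session, so it is worth clearing them as well once the report is done. The bare BREAK command lists the current definition and CLEAR BREAKS removes it, for example:

SQL> break
SQL> clear breaks
breaks cleared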

Controlling the Autotrace Report in sql*plus

In SQL*Plus, with the AUTOTRACE command you can see the execution plan of successful SQL DML statements (that is, SELECT, DELETE, UPDATE and INSERT), which is useful for monitoring and tuning the performance of these statements.

You can control the report by setting the AUTOTRACE system variable.

Following are the available AUTOTRACE settings.

1)SET AUTOTRACE OFF: No AUTOTRACE report is generated. This is the default. Result of a query will be displayed on the console as usual.

2)SET AUTOTRACE ON EXPLAIN: The AUTOTRACE report shows the optimizer execution path along with the query output, but does not show statistics.

3)SET AUTOTRACE ON STATISTICS: The AUTOTRACE report shows the SQL statement execution statistics along with the query output but does not display the execution plan.

4)SET AUTOTRACE ON: The AUTOTRACE report includes both the optimizer execution path and the SQL statement execution statistics along with query output.

5)SET AUTOTRACE TRACEONLY: The report displays both the optimizer execution path and the execution statistics but does not display the query output. If STATISTICS is enabled, the query data is still fetched, but not printed.
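For example, a minimal sketch of switching these settings on and off around a statement (the EMP table from the earlier report example is used; the plan output is omitted here):

SQL> set autotrace traceonly explain
SQL> select dept_id,name,salary from emp where dept_id=10;
SQL> set autotrace off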

Understanding Execution Plan Statistics

Below is the output of an execution plan statistics.
SQL> set autot trace statistics
SQL> select * from tab;


107 rows selected.


Statistics
----------------------------------------------------------
8 recursive calls
0 db block gets
898 consistent gets
0 physical reads
0 redo size
3459 bytes sent via SQL*Net to client
458 bytes received via SQL*Net from client
9 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
107 rows processed

You may sometimes want to know what these fields indicate. Below are the details of these fields.

1)recursive calls: To process SQL statements, Oracle Database maintains internal tables. Sometimes processing a SQL statement requires a change to these internal tables and sometimes it does not. When Oracle Database needs to make a change to these tables, it internally generates a SQL statement, which in turn generates a recursive call.


2)db block gets: Number of times a CURRENT block was requested.

3)consistent gets: Number of times a consistent read was requested for a block. These are the logical reads; they indicate how many blocks need to be accessed to process the query.

4)physical reads: Total number of data blocks read from disk. This number equals the value of "physical reads direct" + all reads into buffer cache.

5)redo size: Total amount of redo generated, in bytes, while processing the query.

6)bytes sent through SQL*Net to client: Total number of bytes sent to the client from the foreground processes.

7)bytes received through SQL*Net from client: Total number of bytes received from the client over Oracle Net.

8)SQL*Net round-trips to/from client: Total number of Oracle Net messages sent to and received from the client.

9)sorts (memory): Number of sort operations that were performed completely in memory and did not require any disk writes.

10)sorts (disk): Number of sort operations that required at least one disk write.

11)rows processed: Number of rows processed during the operation.

Expdp fails with ORA-39125, ORA-04031

Problem Description
I was performing a Data Pump export operation and it failed with ORA-39125 and ORA-04031 as below.
SQL> $expdp full=y dumpfile=fulldb.dmp directory=d userid=system/a

Export: Release 10.2.0.1.0 - Production on Tuesday, 16 December, 2008 17:07:44

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYSTEM"."SYS_EXPORT_FULL_02": full=y dumpfile=fulldb.dmp directory=d userid=system/********
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
ORA-39125: Worker unexpected fatal error in KUPW$WORKER.GET_TABLE_DATA_OBJECTS while calling DBMS_METADATA.FETCH_XML_CLOB []
ORA-04031: unable to allocate 28 bytes of shared memory ("shared pool","SELECT /*+rule*/ SYS_XMLGEN(...","sql area","ub1[]: qkexrXformVal")

ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 6235

----- PL/SQL Call Stack -----
object line object
handle number name
64E077B4 14916 package body SYS.KUPW$WORKER
64E077B4 6300 package body SYS.KUPW$WORKER
64E077B4 9120 package body SYS.KUPW$WORKER
64E077B4 1880 package body SYS.KUPW$WORKER
64E077B4 6861 package body SYS.KUPW$WORKER
64E077B4 1262 package body SYS.KUPW$WORKER
64CB4398 2 anonymous block

Job "SYSTEM"."SYS_EXPORT_FULL_02" stopped due to fatal error at 17:08:03

Cause of the Error
The ORA-04031 error returns whenever Oracle attempts to allocate a large piece of contiguous memory in the shared pool but fails. Oracle first flushes all objects that are not currently in use from the pool, and the resulting free memory chunks are merged. If there is still not a single chunk large enough to satisfy the request, ORA-04031 is returned.

In 10g, automatic shared memory management allows the DBA to reserve a pool of shared memory that is used to size the shared pool, the buffer cache, the java pool and the large pool with the single parameter SGA_TARGET. So simply increasing SGA_TARGET will likely solve the problem.

Solution
Let's see the sga_target value.

SQL> show parameter sga_t

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_target big integer 100M

And SGA_MAX_SIZE value,
SQL> show parameter sga_max
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_max_size big integer 100M

We see these two parameter settings are low, so we increase them and restart the database. The setting of SGA_TARGET is bounded by SGA_MAX_SIZE, and changing SGA_MAX_SIZE requires bouncing the database for it to take effect.

SQL> alter system set sga_max_size=300M scope=spfile;

System altered.

SQL> alter system set sga_target=300M scope=spfile;

System altered.

SQL> startup force
ORACLE instance started.

Total System Global Area 314572800 bytes
Fixed Size 1248768 bytes
Variable Size 96469504 bytes
Database Buffers 209715200 bytes
Redo Buffers 7139328 bytes
Database mounted.
Database opened.

SQL> show parameter sga_target

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_target big integer 300M

Now the export operation runs smoothly, as the shared pool has enough memory for the operation.
SQL> $expdp full=y dumpfile=fulldb.dmp directory=d userid=system/a
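As a side check, before and after resizing you can see how much free memory each pool reports with a query like the following (a sketch; the figures depend entirely on your instance):

SQL> select pool, name, bytes from v$sgastat where name='free memory';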

Troubleshoot ORA-02292, ORA-02449 and ORA-02266

SQL> delete from task;
delete from task
*
ERROR at line 1:
ORA-02292: integrity constraint (OMS.TASKUSER_TASK) violated - child record
found


SQL> drop table task;
drop table task
*
ERROR at line 1:
ORA-02449: unique/primary keys in table referenced by foreign keys


SQL> truncate table task;
truncate table task
*
ERROR at line 1:
ORA-02266: unique/primary keys in table referenced by enabled foreign keys

To get the list of child tables, along with the constraint names, that reference the TASK table, issue,

SQL> set lines 140
SQL> SELECT p.table_name "Parent Table", c.table_name "Child Table",
p.constraint_name "Parent Constraint", c.constraint_name "Child Constraint"
FROM user_constraints p
JOIN user_constraints c ON(p.constraint_name=c.r_constraint_name)
WHERE (p.constraint_type = 'P' OR p.constraint_type = 'U')
AND c.constraint_type = 'R'
AND p.table_name = UPPER('&table_name');
Enter value for table_name: task
old 7: AND p.table_name = UPPER('&table_name')
new 7: AND p.table_name = UPPER('task')

Parent Table Child Table Parent Constraint Child Constraint
------------------------------ ------------------------------ ------------------------------ ------------------------------
TASK TASK_REF TASK_PK TASKREF_TASK
TASK TASK TASK_PK TASK_TASK_FK
TASK TASK_SCHEDULE TASK_PK TASKSCHEDULE_TASK
TASK TASK_USER TASK_PK TASKUSER_TASK
TASK TASK_ATTACHED_FILEINFO TASK_PK TASK_ATTACH_FILE_FK



Let's start disabling the constraints on the child tables one by one and try to delete rows from the table TASK.

SQL> ALTER TABLE TASK_USER DISABLE CONSTRAINT TASKUSER_TASK;

Table altered.

SQL> delete from task;
delete from task
*
ERROR at line 1:
ORA-02292: integrity constraint (OMS.TASKSCHEDULE_TASK) violated - child record found


SQL> ALTER TABLE task_schedule DISABLE CONSTRAINT TASKSCHEDULE_TASK;


Table altered.

SQL> delete from task;
delete from task
*
ERROR at line 1:
ORA-02292: integrity constraint (OMS.TASK_ATTACH_FILE_FK) violated - child record found


SQL> ALTER TABLE TASK_ATTACHED_FILEINFO DISABLE CONSTRAINT TASK_ATTACH_FILE_FK;

Table altered.

SQL> delete from task;

1289 rows deleted.

And now the delete succeeds. In order to get a list of the parent tables that a given child table references, issue,

SQL> SELECT c.table_name "Child Table", p.table_name "Parent Table",
p.constraint_name "Parent Constraint", c.constraint_name "Child Constraint"
FROM user_constraints p JOIN user_constraints c
ON (p.constraint_name = c.r_constraint_name)
WHERE (p.constraint_type = 'P' OR p.constraint_type = 'U')
AND c.constraint_type = 'R'
AND c.table_name = UPPER('&table_name');
Enter value for table_name: task
old 7: AND c.table_name = UPPER('&table_name')
new 7: AND c.table_name = UPPER('task')

Child Table Parent Table Parent Constraint Child Constraint
------------------------------ ------------------------------ ------------------------------ ------------------------------
TASK TASK TASK_PK TASK_TASK_FK
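Once the cleanup is finished, remember to re-enable the foreign keys that were disabled above, for example (a sketch using the same constraint names; the enables will only succeed once any orphaned child rows have been removed as well):

SQL> ALTER TABLE TASK_USER ENABLE CONSTRAINT TASKUSER_TASK;
SQL> ALTER TABLE TASK_SCHEDULE ENABLE CONSTRAINT TASKSCHEDULE_TASK;
SQL> ALTER TABLE TASK_ATTACHED_FILEINFO ENABLE CONSTRAINT TASK_ATTACH_FILE_FK;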

ORA-02297: cannot disable constraint - dependencies exist

Problem Description
Whenever you try to disable a constraint on a table, it may fail with the error ORA-02297: cannot disable constraint - dependencies exist, as shown below.

SQL> alter table transaction disable constraint TRANSACTION_PK;
alter table transaction disable constraint TRANSACTION_PK
*
ERROR at line 1:
ORA-02297: cannot disable constraint (OMS.TRANSACTION_PK) - dependencies exist

Cause of the Problem
The DISABLE CONSTRAINT command fails because the table is a parent table: foreign key constraints on other tables depend on the constraint you are trying to disable.

Solution of the Problem
Two solutions exist for this problem.
1)Find the foreign key constraints that reference the table, disable them, and then disable the table's own constraint.

The following query lists the dependent tables and their constraint names. Disable the child constraints first and then the parent constraint.

SQL> SELECT p.table_name "Parent Table", c.table_name "Child Table",
2 p.constraint_name "Parent Constraint", c.constraint_name "Child Constraint"
3 FROM user_constraints p
4 JOIN user_constraints c ON(p.constraint_name=c.r_constraint_name)
5 WHERE (p.constraint_type = 'P' OR p.constraint_type = 'U')
6 AND c.constraint_type = 'R'
7 AND p.table_name = UPPER('&table_name');

Enter value for table_name: transaction
old 7: AND p.table_name = UPPER('&table_name')
new 7: AND p.table_name = UPPER('transaction')

Parent Table Child Table Parent Constraint Child Constraint
------------------------------ ------------------------------ ------------------------------ ------------------------------
TRANSACTION USER_SALARY_RECORD TRANSACTION_PK SYS_C005564
TRANSACTION TRANSACTION_DETAIL TRANSACTION_PK TRNSCTN_DTL_TRNSCTN_FK

SQL> alter table USER_SALARY_RECORD disable constraint SYS_C005564;
Table altered.

SQL> alter table TRANSACTION_DETAIL disable constraint TRNSCTN_DTL_TRNSCTN_FK;
Table altered.

SQL> alter table transaction disable constraint TRANSACTION_PK;
Table altered.

2)Disable the constraint with the CASCADE option, which disables the dependent foreign key constraints along with it.
SQL> alter table transaction disable constraint TRANSACTION_PK cascade;
Table altered.
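
Keep in mind that there is no ENABLE ... CASCADE option; after a DISABLE ... CASCADE the parent constraint and each dependent foreign key must be re-enabled individually, for example using the constraints above:

SQL> alter table transaction enable constraint TRANSACTION_PK;
SQL> alter table USER_SALARY_RECORD enable constraint SYS_C005564;
SQL> alter table TRANSACTION_DETAIL enable constraint TRNSCTN_DTL_TRNSCTN_FK;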

Tuesday, January 27, 2009

Use of bind variables in Sql*plus

Bind variables are variables that are declared inside PL/SQL, or placeholders in a SQL statement that must be supplied with a valid value.

You can also create bind variables in SQL*Plus. A bind variable created in SQL*Plus can be used just like a declared variable in your PL/SQL subprograms, and its value can then be accessed from SQL*Plus.

Creating Bind Variables
To create a bind variable in SQL*Plus, use the VARIABLE command. To create a bind variable named bind_var of datatype NUMBER, use:
variable bind_var NUMBER

Displaying Bind Variables

To display all bind variables in the session, issue the VARIABLE command with no arguments.
SQL> variable
variable bind_var
datatype NUMBER

To display the value of a particular bind variable, use PRINT:
SQL> print bind_var
BIND_VAR
----------

Using Bind Variables
To reference a bind variable, prefix its name with a colon (:). To change the value of a bind variable you must do so inside a PL/SQL block. To set bind_var to 10, use:
begin
:bind_var:=10;
end;
/
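
After the block runs, the bind variable holds the new value. You can display it with PRINT or reference it in a SQL statement; the output below is only illustrative. You can also skip the explicit block and use the EXECUTE shortcut, e.g. exec :bind_var := 10

SQL> print bind_var

  BIND_VAR
----------
        10

SQL> select :bind_var result from dual;

    RESULT
----------
        10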

Formatting SQL*Plus Reports

Sometimes output displayed in SQL*Plus is hard to read, for example when a column is rendered far wider than its data and the report does not line up nicely. In this post I will show how to produce better-looking output in SQL*Plus.

A column can be formatted with the COLUMN command. For character data the format is specified as A followed by a width; A7, for example, makes the column seven characters wide. For numeric data the format is a mask such as 99999, where the number of 9s decides how many digits are displayed.
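
For instance, a numeric format mask with a group separator might look like this (the column alias amount is made up for the example; the output is indicative):

SQL> COLUMN amount FORMAT 999,999.99
SQL> select 12345.678 amount from dual;

     AMOUNT
-----------
  12,345.68
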
Changing Column Headings
To change the headings of columns a, b and c, use commands like those below. The vertical bar (|) splits the heading across two lines.

SQL> COLUMN A HEADING 'FIRST|COLUMN'
SQL> COLUMN C HEADING 'THIRD|COLUMN'
SQL> COLUMN B HEADING 'SECOND|COLUMN' FORMAT A7
SQL> select a,b,c from test1;


FIRST SECOND THIRD
COLUMN COLUMN COLUMN
---------- ------- ------
1 Hi Hi2

Setting the Underline Character
In the output above, the underline character beneath the headings is -. To change it to =, use:

SQL> SET UNDERLINE =
SQL> /


FIRST SECOND THIRD
COLUMN COLUMN COLUMN
========== ======= ======
1 Hi Hi2

Default Display of columns
1)A NUMBER column's width equals the width of the heading or the width of the FORMAT plus one space for the sign, whichever is greater. If you do not explicitly use FORMAT, then the column's width will always be at least the value of SET NUMWIDTH.

By default, NUMWIDTH is set to 10.

SQL> select 22/7 "This is the pi value" from dual;

This is the pi value
--------------------
3.14285714

SQL> show numwidth

numwidth 10
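
Widening NUMWIDTH lets unformatted numbers show more significant digits. A quick sketch; the exact digits displayed may differ slightly by version:

SQL> set numwidth 20
SQL> select 22/7 "This is the pi value" from dual;

This is the pi value
--------------------
3.142857142857142857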

2)The default width of character columns is the width of the column in the database. The column width of a LONG, CLOB, NCLOB or XMLType column defaults to the value of SET LONGCHUNKSIZE or SET LONG, whichever is smaller.

For example, if LONG is set to 80 and LONGCHUNKSIZE is set to 90, only the first 80 characters of a CLOB column are shown in SQL*Plus by default.
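
To see more of a LONG or CLOB column, raise both settings. The defaults shown here are the usual ones:

SQL> show long
long 80
SQL> set long 4000
SQL> set longchunksize 4000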

3)The default width and format of unformatted DATE columns in SQL*Plus is derived from the NLS_DATE_FORMAT parameter; if that format cannot be determined, the default width is A9.
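
For example, you can change NLS_DATE_FORMAT at session level and the displayed width follows the new mask (the date below is just a sample value):

SQL> alter session set nls_date_format='YYYY-MM-DD HH24:MI:SS';

Session altered.

SQL> select sysdate from dual;

SYSDATE
-------------------
2009-01-27 14:05:33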

Listing and Resetting Column Display Attributes
1)To list the current display attributes for a given column, use the COLUMN command followed by the column name.

COLUMN column_name

2)To list the current display attributes for all columns, enter just the COLUMN command.

COLUMN

3)To reset the display attributes for a column to their default values, use the CLEAR clause of the COLUMN command as below.

COLUMN column_name CLEAR

4)To reset all column display attributes to their default values enter,

CLEAR COLUMNS


Suppressing and Restoring Column Display Attributes

Suppose you have formatted a column with the COLUMN command and now want to use its default display attributes again, without losing the attributes you defined. You can achieve that with:
COLUMN column_name OFF

To restore the attributes you defined through COLUMN, use the ON clause:

COLUMN column_name ON

Printing a Line of Characters after Wrapped Column Values
RECSEP determines when the line of characters is printed:
-set RECSEP to EACH to print it after every row,
-set it to WRAPPED to print it only after wrapped column values,
-set it to OFF to suppress it. The default setting of RECSEP is WRAPPED.

RECSEPCHAR sets the character the line is made of.

To print a line of dashes after each wrapped column value, enter:

SET RECSEP WRAPPED
SET RECSEPCHAR "-"


Clarifying Your Report with Spacing and Summary Lines
With the BREAK command you can suppress duplicate values in a column, typically a column named in the ORDER BY clause. If the same value appears in consecutive rows and you BREAK on that column, it is printed only once, as shown below.

Note that I did not use an ORDER BY clause here. You should ORDER BY the column you break on; if you do not, a break occurs every time the column value changes, even when the same value reappears later.

SQL> select station_id,uname from users where station_id!=1 and rownum<=10;

STATION_ID UNAME
---------- --------------------------------------------------
4 a.sadat
14 parvez
14 reza
14 voyeger
40 support
71 accounts5
71 bill5
71 chinmoy
71 crash5
71 mubeen5

10 rows selected.

SQL> break on station_id skip 2
SQL> select station_id,uname from users where station_id!=1 and rownum<=10;


STATION_ID UNAME
---------- --------------------------------------------------
4 a.sadat


14 parvez
reza
voyeger


40 support


71 accounts5
bill5
chinmoy
crash5
mubeen5



10 rows selected.

SQL> break on station_id skip page
SQL> /


STATION_ID UNAME
---------- --------------------------------------------------
4 a.sadat

STATION_ID UNAME
---------- --------------------------------------------------
14 parvez
reza
voyeger

STATION_ID UNAME
---------- --------------------------------------------------
40 support

STATION_ID UNAME
---------- --------------------------------------------------
71 accounts5
bill5
chinmoy
crash5
mubeen5

10 rows selected.

To insert n blank lines, use the BREAK command in the following form:

BREAK ON break_column SKIP n

where n is the number of blank lines to insert (1, 2, ...).
To skip a page, use the command in this form:

BREAK ON break_column SKIP PAGE


You may wish to insert blank lines or a blank page after every row.
To skip n lines after every row, use BREAK in the following form:

BREAK ON ROW SKIP n

To skip a page after every row, use

BREAK ON ROW SKIP PAGE


You can list your current break definition by entering the BREAK command with no clauses:

BREAK


You can remove the current break definition by entering the CLEAR command with the BREAKS clause:

CLEAR BREAKS
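
The section heading also mentions summary lines; these come from the COMPUTE command, which works together with BREAK. A small sketch that counts the users per station, reusing the columns above (output omitted):

SQL> BREAK ON station_id SKIP 1
SQL> COMPUTE COUNT LABEL 'users' OF uname ON station_id
SQL> select station_id,uname from users where station_id!=1 and rownum<=10 order by station_id;

A COMPUTE definition stays in effect until you enter CLEAR COMPUTES.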
