Thursday, January 8, 2009

Configure Shared Storage in Oracle RAC installation

An Oracle RAC database is a shared-everything database: all datafiles, control files, server parameter files (SPFILEs), and redo log files in an Oracle RAC environment must reside on shared disks so that all of the cluster database instances can access them.

On that shared storage you can use any of the following file storage options for the Oracle RAC database:

1) Automatic Storage Management (ASM), which is what Oracle recommends.

2) Oracle Cluster File System (OCFS), available for the Linux and Windows platforms, or a third-party cluster file system that is certified for Oracle RAC.

3) A network file system. This is not supported on AIX, POWER, or IBM zSeries-based Linux.

4) Raw devices.

In this post I will show how to configure raw devices for an Oracle RAC installation.

Before going into the details, it is good to know how a RAC database differs from a single-instance database.

Oracle RAC databases differ architecturally from single-instance Oracle databases in that each Oracle RAC database instance also has:
At least one additional thread of redo per instance
An instance-specific undo tablespace
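In an SPFILE or PFILE, these per-instance differences show up as SID-prefixed parameters, for example (the instance names racdb1 and racdb2 here are hypothetical):

```
racdb1.thread=1
racdb2.thread=2
racdb1.undo_tablespace='UNDOTBS1'
racdb2.undo_tablespace='UNDOTBS2'
```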

To configure raw storage devices as shared storage for a RAC database, first consider the Oracle Clusterware part. Oracle Clusterware has two storage components: the Oracle Cluster Registry (OCR) and the voting disk.

Before you install Oracle Clusterware, you need to configure five raw partitions, each 256 MB in size: one for the Oracle Cluster Registry (OCR), one for a duplicate OCR file on a different disk (referred to as the OCR mirror), and three for voting disks.

If you also plan to use raw devices for the database files, you will need to create additional raw partitions for each tablespace, online redo log file, and control file, as well as for the SPFILE and the password file.

Below is the procedure for configuring the raw devices.

Step 1:
Make sure you have purchased a storage box that is shared between both nodes.
To see the shared disks available on your system, run the following as the root user on racnode-1:
# /sbin/fdisk -l
It gave me output like the following (output for the local disk trimmed):
Disk /dev/sdb: 161.0 GB, 161061273600 bytes
255 heads, 63 sectors/track, 19581 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 161.0 GB, 161061273600 bytes
255 heads, 63 sectors/track, 19581 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 161.0 GB, 161061273600 bytes
255 heads, 63 sectors/track, 19581 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 161.0 GB, 161061273600 bytes
255 heads, 63 sectors/track, 19581 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sde doesn't contain a valid partition table

Disk /dev/sdf: 91.2 GB, 91268055040 bytes
255 heads, 63 sectors/track, 11096 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdf doesn't contain a valid partition table

Since my storage box presents the five disks sdb, sdc, sdd, sde, and sdf, all five are shown.
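With several disks attached, it helps to filter that output down to just the disks fdisk flags as having no partition table yet. Here is a small sketch, run against saved sample lines rather than a live system:

```shell
# Sample warning lines as printed by `fdisk -l` for blank disks.
sample="Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd doesn't contain a valid partition table"

# Print only the device name (field 2) from each such warning line.
blank_disks=$(echo "$sample" | awk '/valid partition table/ {print $2}')
echo "$blank_disks"
```

On a live node the same filter can be applied directly: /sbin/fdisk -l 2>/dev/null | awk '/valid partition table/ {print $2}'.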

Step 2:
Now, as the root user on racnode-1, create two raw partitions of 256 MB each for the OCR and its mirror, and three 256 MB partitions for the Oracle Clusterware voting disks. I use sdb for the first OCR partition:
# /sbin/fdisk /dev/sdb
Inside fdisk:

Use the p command to list the partition table of the device.

Use the n command to create a partition.

After you have created the required partitions on this device, use the w command to write the modified partition table to the device.

Here is the session that creates the first 256 MB raw partition on the sdb disk; my input appears after each prompt (p, n, p, 1, Enter, +256M, w).
[root@racnode-1 ~]# /sbin/fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.


The number of cylinders for this disk is set to 19581.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): p

Disk /dev/sdb: 161.0 GB, 161061273600 bytes
255 heads, 63 sectors/track, 19581 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-19581, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-19581, default 19581): +256M

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@racnode-1 ~]#
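The interactive session above can also be scripted. The answer sequence (n, p, 1, Enter, +256M, w) can be generated with printf and piped into fdisk; the sketch below only builds and shows the sequence, since actually piping it repartitions a real disk:

```shell
# Build the fdisk answer sequence: new primary partition 1,
# default first cylinder (the empty argument), size +256M, then write.
answers=$(printf '%s\n' n p 1 '' +256M w)
echo "$answers"
```

To actually apply it: printf '%s\n' n p 1 '' +256M w | /sbin/fdisk /dev/sdb. This is destructive, so double-check the device name first.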
Similarly, create a 256 MB partition on each of the remaining disks. I created a 256 MB raw partition on /dev/sdc for the OCR mirror, and one each on /dev/sdd, /dev/sde, and /dev/sdf for the voting disks.

That is,
#fdisk /dev/sdc

#fdisk /dev/sdd

#fdisk /dev/sde

#fdisk /dev/sdf
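The four remaining disks can be handled in one loop. The sketch below is a dry run that only prints the command it would execute for each disk:

```shell
# Dry run: print the scripted-fdisk command for each remaining disk.
cmds=$(for disk in /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
    echo "printf '%s\\n' n p 1 '' +256M w | /sbin/fdisk $disk"
done)
echo "$cmds"
```

Remove the echo (running the printed command instead of printing it) to repartition for real.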


After you have configured all five disks, your fdisk -l output will look like the listing below.
[root@racnode-1 ~]# fdisk -l

Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 1275 10241406 83 Linux
/dev/sda2 1276 3723 19663560 83 Linux
/dev/sda3 3724 6145 19454715 83 Linux
/dev/sda4 6146 9729 28788480 5 Extended
/dev/sda5 6146 8440 18434556 83 Linux
/dev/sda6 8441 9205 6144831 83 Linux
/dev/sda7 9206 9727 4192933+ 82 Linux swap / Solaris

Disk /dev/sdb: 161.0 GB, 161061273600 bytes
255 heads, 63 sectors/track, 19581 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 32 257008+ 83 Linux

Disk /dev/sdc: 161.0 GB, 161061273600 bytes
255 heads, 63 sectors/track, 19581 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdc1 1 32 257008+ 83 Linux

Disk /dev/sdd: 161.0 GB, 161061273600 bytes
255 heads, 63 sectors/track, 19581 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdd1 1 32 257008+ 83 Linux

Disk /dev/sde: 161.0 GB, 161061273600 bytes
255 heads, 63 sectors/track, 19581 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sde1 1 32 257008+ 83 Linux

Disk /dev/sdf: 91.2 GB, 91268055040 bytes
255 heads, 63 sectors/track, 11096 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdf1 1 32 257008+ 83 Linux
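As a sanity check, the partition sizes in that listing can be verified with shell arithmetic: fdisk reports 8225280 bytes per cylinder (16065 sectors of 512 bytes), and each new partition spans cylinders 1 through 32. Because fdisk rounds +256M up to a whole cylinder, the result comes out slightly above 256 MB:

```shell
# Values taken from the fdisk listing above.
bytes_per_cyl=$((16065 * 512))       # 8225280 bytes per cylinder
part_bytes=$((32 * bytes_per_cyl))   # each partition spans 32 cylinders
part_mb=$((part_bytes / 1000000))    # decimal MB, as fdisk's +256M counts
echo "$part_bytes bytes = ~$part_mb MB"
```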

Step 3:
As the root user on racnode-1, edit the /etc/sysconfig/rawdevices file and add the mappings for the raw devices used by Oracle Clusterware. Each line binds a raw device to the block device partition that backs it:
[root@racnode-1 ~]# vi /etc/sysconfig/rawdevices
# OCR devices
/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdc1
# Voting disk devices
/dev/raw/raw3 /dev/sdd1
/dev/raw/raw4 /dev/sde1
/dev/raw/raw5 /dev/sdf1
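Each non-comment line in that file must have exactly two fields: the raw device followed by the block device it binds to. A quick sketch that validates the format, run here against an inline copy of the mappings rather than the real file:

```shell
# Inline copy of the mappings (the real file is /etc/sysconfig/rawdevices).
mappings='# OCR devices
/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdc1
# Voting disk devices
/dev/raw/raw3 /dev/sdd1
/dev/raw/raw4 /dev/sde1
/dev/raw/raw5 /dev/sdf1'

# Count lines that are neither comments nor blank but lack exactly two fields.
bad=$(echo "$mappings" | awk '!/^#/ && NF && NF != 2 {n++} END {print n+0}')
echo "malformed lines: $bad"
```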


Step 4: If you are using RHEL 4, then as the root user on racnode-1, start the rawdevices service so that the mappings take effect at the operating system level:
[root@racnode-1 ~]# service rawdevices start

If you are using RHEL 5, the rawdevices service no longer exists and the same command fails:
[root@racnode-1 ~]# service rawdevices start
rawdevices: unrecognized service
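On RHEL 5 the same bindings are expressed as udev rules instead, for example in a file such as /etc/udev/rules.d/60-raw.rules. A sketch (device and rule-file names assumed; check your distribution's documentation for the exact syntax):

```
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sde1", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sdf1", RUN+="/bin/raw /dev/raw/raw5 %N"
```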


Step 5:
On racnode-2, inform the operating system of the partition table changes. This is done with the partprobe program, which requests that the kernel re-read the partition tables:

# /sbin/partprobe /dev/sdb
# /sbin/partprobe /dev/sdc
# /sbin/partprobe /dev/sdd
# /sbin/partprobe /dev/sde
# /sbin/partprobe /dev/sdf

This forces the operating system on the other node in the cluster to refresh its picture of the shared disk partitions.

Step 6:

On RHEL 4, make the same entries in /etc/sysconfig/rawdevices on each remaining node, then start the service there as well:
# service rawdevices start

Step 7:
As the root user, on each node in the cluster, enter commands similar to the following to set the owner, group, and permissions on the newly created device files:

# OCR and OCR mirror (raw1, raw2): owned by root
chown root:oinstall /dev/raw/raw1
chown root:oinstall /dev/raw/raw2
# Voting disks (raw3-raw5): owned by the oracle user
chown oracle:oinstall /dev/raw/raw3
chown oracle:oinstall /dev/raw/raw4
chown oracle:oinstall /dev/raw/raw5
chmod 640 /dev/raw/raw1
chmod 640 /dev/raw/raw2
chmod 640 /dev/raw/raw3
chmod 640 /dev/raw/raw4
chmod 640 /dev/raw/raw5
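The same scheme can be expressed as a loop, which also makes the intent explicit: raw1 and raw2 (the OCR and its mirror) belong to root, while raw3 through raw5 (the voting disks) belong to oracle. The sketch below prints the commands instead of running them:

```shell
# Dry run: print the chown/chmod command for each raw device.
cmds=$(for n in 1 2 3 4 5; do
    dev="/dev/raw/raw$n"
    # OCR devices (raw1, raw2) are owned by root; voting disks by oracle.
    if [ "$n" -le 2 ]; then owner=root; else owner=oracle; fi
    echo "chown $owner:oinstall $dev && chmod 640 $dev"
done)
echo "$cmds"
```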
