Sunday, March 2, 2014

Create and Configure Shared Disk for Oracle 11g Grid Infrastructure

I will cover two sections (topics) in this document:

Section 1: Copy the 11g Grid Infrastructure zip file to node1, unzip it, and install the cvuqdisk package on both nodes.

Purpose: I will install 11g Grid Infrastructure by placing its installation files on node1 as a staging area and installing from there. The cvuqdisk package, which ships with those installation files, will be installed on both nodes; it is required for the Cluster Verification Utility (CVU) to function properly.
 

Section 2: Create and Configure Shared Disk for Oracle 11g Grid Infrastructure.

Purpose: Oracle ASM and RAC database files will reside on this shared disk so that all cluster instances can access them.

Prerequisites:

a. Two 64-bit Oracle Linux virtual machines (nodes) stood up on ESXi 5.5.

I downloaded (b) and installed (c & d) on my Windows 7 laptop, where vSphere Client 5.5 is installed:
b. Grid Infrastructure file (p10404530_112030_Linux-x86-64_3of7.zip) from Oracle.
c. VMware vSphere Command-Line Interface "VMware-vSphere-CLI-5.5.0-1384587.exe".
d. WinSCP "winscp440setup.exe" (Windows Secure Copy tool from winscp.net).
 

Section 1: Copy the 11g Grid Infrastructure file and install the cvuqdisk package.

Copy the 11g Grid Infrastructure file from my laptop to the node1.babulab VM using WinSCP.

1) Make sure both nodes are up. In node1.babulab, log in as "oracle" and create a folder "/home/oracle/sw".
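A minimal way to do that from the shell:

$ mkdir -p /home/oracle/sw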
 

2) Start WinSCP and connect to node1.babulab as "oracle".



Navigate to the source and destination folders and copy (drag and drop) the Grid Infrastructure zip file from the laptop to node1.
 



After the copy is complete, click Session in the menu and choose Disconnect.
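If you prefer a command line over the WinSCP GUI, any SCP client on the laptop can do the same copy. A hypothetical example using pscp from PuTTY, assuming the zip was downloaded to C:\downloads:

C:\>pscp C:\downloads\p10404530_112030_Linux-x86-64_3of7.zip oracle@node1.babulab:/home/oracle/sw/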

3) Log into node1.babulab as "oracle", navigate to the folder "/home/oracle/sw", extract the zip file, and delete it:

$ cd /home/oracle/sw
$ unzip p10404530_112030_Linux-x86-64_3of7.zip
$ rm p10404530_112030_Linux-x86-64_3of7.zip
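To confirm the extraction succeeded, the zip should have expanded into a "grid" directory under the staging area:

$ ls -ld /home/oracle/sw/grid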

As "root" user, install the cvuqdisk package for Linux:-

$ cd grid/rpm
$ ls -l cvuqdisk*
$ su root
# rpm -Uvh cvuqdisk-1.0.9-1.rpm
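One thing to keep in mind, from the Oracle Grid Infrastructure installation guide: cvuqdisk uses the group named in the CVUQDISK_GRP environment variable as its owner group and falls back to oinstall when the variable is not set, so export that variable before running rpm if your setup uses a different group. A quick check that the package is registered:

# rpm -q cvuqdisk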

In node2.babulab, log in as "oracle" and create a folder "sw" in "/home/oracle"

From node1.babulab as "oracle" user, copy "cvuqdisk-1.0.9-1.rpm" to node2.babulab 

$ scp cvuqdisk-1.0.9-1.rpm oracle@node2.babulab:/home/oracle/sw 

In node2.babulab, log in as "oracle", navigate to "/home/oracle/sw", switch to "root", and install the cvuqdisk package for Linux:

$ cd /home/oracle/sw
$ su root
# rpm -Uvh cvuqdisk-1.0.9-1.rpm
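With cvuqdisk now on both nodes, you can optionally run a quick pre-installation check with the Cluster Verification Utility from the staging area on node1 as the "oracle" user (a rough sketch; adjust the node names to match your environment):

$ cd /home/oracle/sw/grid
$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose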

Section 2: Create and Configure Shared Disk:

Create Shared Disk

4) In the ESXi datastore, create a folder called "shared" for the shared disk location.




5) Shut down both virtual machines (nodes).

6) In the command prompt on the laptop, navigate to where the VMware vSphere CLI is installed and run the commands below to create the shared disk:

C:\>cd C:\Program Files (x86)\VMware\VMware vSphere CLI\bin
C:\Program Files (x86)\VMware\VMware vSphere CLI\bin>vmkfstools.pl -server 192.168.1.70 -c 10G -d eagerzeroedthick -a lsilogic /vmfs/volumes/datastore1/shared/shared_disk.vmdk
Enter username: root
Enter password:
Attempting to create virtual disk [datastore1] shared/shared_disk.vmdk
Successfully created virtual disk [datastore1] shared/shared_disk.vmdk
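To double-check that the file landed in the datastore, the same CLI directory has vifs.pl, which can list the folder (a hedged example; like vmkfstools.pl, it will prompt for the host credentials):

C:\Program Files (x86)\VMware\VMware vSphere CLI\bin>vifs.pl -server 192.168.1.70 -dir "[datastore1] shared"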


Add the shared disk

7) Now add the shared disk to node1.babulab. In the vSphere Client, right-click the VM and choose "Edit Settings...".


8) Click the "Add...."


9) Select "Hard Disk.".


10) Select "Use an existing virtual disk."


11) Click Browse and navigate to the newly created "shared_disk.vmdk" file.



12) Click Next.


13) Under "Virtual Device Node," select "SCSI (1:0)". This will create a new disk controller. Select "Independent" and "Persistent.".


14) Click Finish.


15) Important: On the following screen, DO NOT click "OK" yet. Select the new SCSI controller and set its SCSI Bus Sharing to "Physical". Now go ahead and click "OK" for the changes to take effect.


16) Repeat the above steps (7 - 15) for node2.babulab 

17) Start up both nodes

18) In node1.babulab, log in as "root" and check whether the newly added shared disk is present:

[root@node1 ~]# ls -l /dev/sd*

You will see the newly added shared disk as "/dev/sdb".
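To confirm this is the 10G disk created earlier, you can also list it with fdisk before partitioning:

[root@node1 ~]# fdisk -l /dev/sdb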

 
19) Run the following to partition the shared disk and label it for ASM:

[root@node1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xd79b0867.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-1305, default 1305):

Using default value 1305

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

[root@node1 ~]# ls -l /dev/sd*
brw-rw---- 1 root disk 8,  0 Feb 19 22:58 /dev/sda
brw-rw---- 1 root disk 8,  1 Feb 19 22:58 /dev/sda1
brw-rw---- 1 root disk 8,  2 Feb 19 22:58 /dev/sda2
brw-rw---- 1 root disk 8, 16 Feb 19 23:09 /dev/sdb
brw-rw---- 1 root disk 8, 17 Feb 19 23:09 /dev/sdb1

[root@node1 ~]# oracleasm createdisk SHARED_DISK1 /dev/sdb1
Writing disk header: done
Instantiating disk: done

[root@node1 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...

[root@node1 ~]# oracleasm listdisks
SHARED_DISK1
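Optionally, querydisk can confirm the label was stamped onto the partition (the exact output wording varies by ASMLib version):

[root@node1 ~]# oracleasm querydisk SHARED_DISK1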


Run the commands below in node2.babulab as the "root" user:

# oracleasm scandisks
# oracleasm listdisks
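After scandisks, node2 should also report SHARED_DISK1. As a final check, the labeled disk is presented by ASMLib under /dev/oracleasm/disks, which you can list on either node:

# ls -l /dev/oracleasm/disks/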

 

2 comments:

Ali said...

Hello Babu,

Section 1 is where I am running into a problem: my desktop, on which I have the vSphere Client installed, cannot access node1. I have tried pinging it, to no avail. Your help would be much appreciated.

babumani said...

Hi Ali, the ping test I have shown here from one node to another and vice-versa is to make sure the network/RAC interconnect component is configured correctly. Why would you want to ping a node from vsphere client? Pinging from vsphere to RAC nodes may not succeed if they both are in different network. If they are in the same network, check if pinging is not blocked by your firewall or gateway.