Oracle Database 11gR2: Installing Grid Infrastructure

Synopsis.
Oracle Database 11g Release 2 makes it much simpler for a single-instance Oracle database to configure and take advantage of many of the grid computing features that, in previous releases, were available only in a Real Application Clusters (RAC) clustered database environment. This article – the first in this series – demonstrates how to install and configure a new Oracle 11g Release 2 (11gR2) Grid Infrastructure home as the basis for the majority of these grid computing features.

It’s been a few months since I summarized the incredible array of new features that Oracle has introduced as part of Oracle Database 11g Release 2, and in that span of time I’ve been experimenting with those features as I’ve built a new infrastructure for experimentation. Among the most intriguing new features is the consolidation of Automatic Storage Management (ASM) and Oracle Clusterware into a pragmatic and sensible arrangement called the Oracle Grid Infrastructure (GI). As I’ll demonstrate in this article, the venerable Oracle Universal Installer (OUI) utility gets a welcome update in this release, but first I’ll need to perform quite a bit of system administration work before I can invoke it and explore its new features.

First … A Word About The (Computing) Environment.
I’ve made some long-desired changes to my home office’s personal computing
infrastructure so that I can manage my workload effectively and efficiently
with my favorite virtualization environment, VMWare:

  • I’ve upgraded to Oracle Enterprise Linux (OEL) 5
    Update 2 (kernel 2.6.18-92.el5) for my base computing platform, a home-grown
    gaming server with 4GB of memory running an AMD Opteron dual-core processor.
  • I’ve also finally moved up to VMWare Workstation Version
    7.0.0 for all my VMWare endeavors, and though I still occasionally long for the
    freedom of VMWare Server 2.0a (as in free!), I’ve found that Workstation is
    just as stable and that it works extremely well with OEL as both its host and
    guest OS.

Setting Up For Oracle 11gR2 Grid Infrastructure

I’m going to implement my 11gR2 Grid Infrastructure by following a series of Oracle best practices that I’ve encountered over the years and gleaned through a thorough reading of Oracle’s technical documentation. I’ll be using raw disk partitions to configure all of the ASM disks that will eventually make up the various ASM disk groups needed for my demonstrations.

Creating the Required Raw Partitions. The Oracle 11gR2 Grid Infrastructure leverages ASM to store multiple copies of the Oracle Cluster Registry (OCR) file, multiple Voting Disks, and of course the disks that will make up the ASM disk groups themselves. Since the maximum number of logical partitions that can be created within any one extended partition is 12, I’ve created two VMWare virtual disks sized at 18.5 GB and 11.0 GB, respectively. Here’s the fdisk output showing the extended and logical partitions I created on each virtual disk:


[root@11gR2Base ~]# fdisk -l /dev/sde

Disk /dev/sde: 19.3 GB, 19327352832 bytes
255 heads, 63 sectors/track, 2349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1        2349    18868311    5  Extended
/dev/sde5               1         281     2257069+  83  Linux
/dev/sde6             282         562     2257101   83  Linux
/dev/sde7             563         843     2257101   83  Linux
/dev/sde8             844        1124     2257101   83  Linux
/dev/sde9            1125        1405     2257101   83  Linux
/dev/sde10           1406        1686     2257101   83  Linux
/dev/sde11           1687        1967     2257101   83  Linux
/dev/sde12           1968        2248     2257101   83  Linux

[root@11gR2Base ~]# fdisk -l /dev/sdf

Disk /dev/sdf: 12.0 GB, 12079595520 bytes
255 heads, 63 sectors/track, 1468 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1               1        1468    11791678+   5  Extended
/dev/sdf5               1         281     2257069+  83  Linux
/dev/sdf6             282         562     2257101   83  Linux
/dev/sdf7             563         843     2257101   83  Linux
/dev/sdf8             844        1124     2257101   83  Linux
/dev/sdf9            1125        1405     2257101   83  Linux
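
For reference, I created these partitions interactively with fdisk; the listing above simply shows the end result. If you prefer a scripted approach, a roughly equivalent sketch using parted would look something like the following – the sizes approximate my layout, but treat the commands as an illustration rather than the exact session I ran:

# Sketch only: carve /dev/sde into one extended partition plus eight ~2.2 GB logicals.
# WARNING: mklabel wipes the existing partition table; adjust device names and sizes to suit.
parted -s /dev/sde mklabel msdos
parted -s /dev/sde mkpart extended 1MB 100%
start=2
for i in $(seq 1 8); do
  end=$(( start + 2200 ))
  parted -s /dev/sde mkpart logical ${start}MB ${end}MB   # one ~2.2 GB logical partition
  start=$(( end + 1 ))                                    # leave a 1 MB gap for the next EBR
done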

Assigning Raw Partitions to Block Device Endpoints.
Oracle has recommended for some time that block devices are a much better choice than traditional raw devices for ASM storage, especially since I’ve occasionally heard rumors that support for raw devices allocated through the /etc/sysconfig/rawdevices configuration file may be reduced or disappear entirely in a future release.

For this and all future Oracle 11gR2 features demonstrations, I’ve configured a special service, losetup, that will construct, configure and allocate virtual block devices during server startup. For the losetup script to work properly, however, note that I also needed to increase the default number of loopback devices from eight to 16; I did this by adding the following line to the /etc/modprobe.conf system configuration file and then rebooting the server to make sure it took effect:


options loop max_loop=16
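
One quick way to verify that the change actually took effect after the reboot is simply to count the loop device nodes; if the max_loop setting was honored, the command below (my own sanity check, not part of the original configuration) should report 16:

#> ls /dev/loop* | wc -l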

Listing 1.1 shows the losetup script I used to complete the assignment of raw partitions to virtual block devices. After I copied the script to file /etc/init.d/losetup, I then registered the new service (as the root user) via chkconfig:


#> chmod 775 /etc/init.d/losetup
#> chkconfig --add losetup
#> chkconfig losetup on
#> chkconfig --list losetup
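
Listing 1.1 itself isn’t reproduced here, but the essence of the script is simple: bind each raw partition to a loop device with losetup, publish a friendlier /dev/xvd* symbolic link that points to it, and make the oracle user the owner. The stripped-down sketch below illustrates the idea – the partition-to-device mappings shown are just a subset, and the details are my assumptions rather than a copy of Listing 1.1:

#!/bin/bash
# losetup - present raw disk partitions as virtual block devices for ASM
# chkconfig: 345 99 01
# description: Binds raw partitions to loop devices and publishes /dev/xvd* symlinks.

# Illustrative subset of the mappings; the real listing covers all thirteen partitions.
PARTS=(/dev/sde5 /dev/sde6 /dev/sde7)
NAMES=(xvdb xvdc xvdd)

case "$1" in
  start)
    for idx in "${!PARTS[@]}"; do
      loopdev=/dev/loop$(( idx + 1 ))
      losetup ${loopdev} ${PARTS[$idx]}       # bind the raw partition to a loop device
      ln -sf ${loopdev} /dev/${NAMES[$idx]}   # expose it under a friendlier name
      chown oracle:dba ${loopdev}             # let the Oracle software owner use it
    done
    ;;
  stop)
    for idx in "${!PARTS[@]}"; do
      losetup -d /dev/loop$(( idx + 1 )) 2>/dev/null   # release the loop devices
    done
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac

Note the chkconfig header comments near the top of the script; without the “# chkconfig:” and “# description:” lines, chkconfig --add will refuse to register the service.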

After rebooting the server, here’s the result of implementing the losetup script – the successful allocation of block devices, as shown below:


[root@11gR2Base ~]# ls -la /dev/xv*
lrwxrwxrwx 1 root root 10 Feb 7 20:49 /dev/xvdb -> /dev/loop1
lrwxrwxrwx 1 root root 10 Feb 7 20:49 /dev/xvdc -> /dev/loop2
lrwxrwxrwx 1 root root 10 Feb 7 20:49 /dev/xvdd -> /dev/loop3
lrwxrwxrwx 1 root root 10 Feb 7 20:49 /dev/xvde -> /dev/loop4
lrwxrwxrwx 1 root root 10 Feb 7 20:49 /dev/xvdf -> /dev/loop5
lrwxrwxrwx 1 root root 10 Feb 7 20:49 /dev/xvdg -> /dev/loop6
lrwxrwxrwx 1 root root 10 Feb 7 20:49 /dev/xvdh -> /dev/loop7
lrwxrwxrwx 1 root root 10 Feb 7 20:49 /dev/xvdi -> /dev/loop8
lrwxrwxrwx 1 root root 10 Feb 7 20:49 /dev/xvdj -> /dev/loop9
lrwxrwxrwx 1 root root 11 Feb 7 20:49 /dev/xvdk -> /dev/loop10
lrwxrwxrwx 1 root root 11 Feb 7 20:49 /dev/xvdl -> /dev/loop11
lrwxrwxrwx 1 root root 11 Feb 7 20:49 /dev/xvdm -> /dev/loop12
lrwxrwxrwx 1 root root 11 Feb 7 20:49 /dev/xvdn -> /dev/loop13
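
If you want to double-check that each symbolic link really does lead back to one of the raw partitions, the loop bindings themselves can also be inspected (an optional check on my part, shown here without its output):

#> losetup -a            # list every active loop device and the partition behind it
#> losetup /dev/loop1    # or query a single loop device's binding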

Configuring and Implementing ASMLIB

To keep my ASM configuration simple to manage, I’ll also use the Oracle ASM disk management drivers that ASMLIB provides to “stamp” each target block device before actually creating ASM disks and disk groups. First, I’ll confirm that the oracleasm drivers appropriate to my OS kernel version have indeed been installed:


[root@11gR2Base ~]# rpm -qa | grep oracleasm
oracleasm-2.6.18-92.el5xen-2.0.4-1.el5
oracleasm-2.6.18-92.el5-2.0.4-1.el5
oracleasm-2.6.18-92.el5debug-2.0.4-1.el5
oracleasm-support-2.0.4-1.el5

Excellent! My system administrator took care of this when she installed Oracle Enterprise Linux 5 Update 2; otherwise, I’d have had to remind her to download the appropriate oracleasm drivers and install them on my server. However, it was still necessary to make sure that the oracleasm kernel module matching my running kernel was linked and available, and that took a little extra manipulation, as shown in the output below:


[root@11gR2Base ~]# /usr/lib/oracleasm/oracleasm_debug_link 2.6.18-92.el5 $(uname -r)
oracleasm_debug_link: Target exists
[root@11gR2Base ~]# ls -l /lib/modules/$(uname -r)/kernel/drivers/addon/oracleasm
total 576
-rw-r--r-- 1 root root 579514 May 23 2008 oracleasm.ko

[root@11gR2Base ~]# /etc/init.d/oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: [ OK ]
Loading module "oracleasm": [ OK ]
Mounting ASMlib driver filesystem: [ OK ]
Scanning system for ASM disks: [ OK ]

[root@11gR2Base ~]# /etc/init.d/oracleasm status
Checking if ASM is loaded: [ OK ]
Checking if /dev/oracleasm is mounted: [ OK ]
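
Had those packages not already been in place, the remedy would have been to download the oracleasm kernel driver matching the running kernel, along with the oracleasm-support and oracleasmlib packages, from Oracle’s web site and install them with rpm. A sketch of that step is shown below – the file names, versions and <arch> placeholder are assumptions based on my kernel, not a transcript of anything I actually ran:

#> rpm -Uvh oracleasm-support-2.0.4-1.el5.<arch>.rpm \
            oracleasmlib-2.0.4-1.el5.<arch>.rpm \
            oracleasm-$(uname -r)-2.0.4-1.el5.<arch>.rpm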

“Stamping” Candidate Disks With ASMLIB.
Now that ASMLIB is configured properly, it’s time to apply ASMLIB “stamps” to each virtual device via the createdisk command, as shown below. This makes it much simpler to configure and manage ASM disks without having to rely on complex device path naming conventions:


[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ASMDISK1 /dev/xvdb
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ASMDISK2 /dev/xvdc
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ASMDISK3 /dev/xvdd
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ASMDISK4 /dev/xvde
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ASMDISK5 /dev/xvdf
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ASMDISK6 /dev/xvdg
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ASMDISK7 /dev/xvdh
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ASMDISK8 /dev/xvdi
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ACFDISK1 /dev/xvdj
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ACFDISK2 /dev/xvdk
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ACFDISK3 /dev/xvdl
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ACFDISK4 /dev/xvdm
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ACFDISK5 /dev/xvdn

Finally, I’ll invoke ASMLIB’s listdisks command to confirm that all disks have been correctly “stamped” and are now ready for use in concert with my upcoming Grid Infrastructure installation:


[root@11gR2Base ~]# /etc/init.d/oracleasm listdisks
ACFDISK1
ACFDISK2
ACFDISK3
ACFDISK4
ACFDISK5
ASMDISK1
ASMDISK2
ASMDISK3
ASMDISK4
ASMDISK5
ASMDISK6
ASMDISK7
ASMDISK8
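
Beyond listdisks, each stamp can also be verified individually: ASMLIB’s querydisk command reports whether a given label or device is a valid ASM disk. A couple of optional spot checks (shown without their output) follow:

#> /etc/init.d/oracleasm querydisk ASMDISK1
#> /etc/init.d/oracleasm querydisk /dev/xvdb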

Jim Czuprynski
Jim Czuprynski has accumulated over 30 years of experience during his information technology career. He filled diverse roles at several Fortune 1000 companies in those three decades – mainframe programmer, applications developer, business analyst, and project manager – before becoming an Oracle database administrator in 2001. He currently holds OCP certification for Oracle 9i, 10g and 11g. Jim teaches the core Oracle University database administration courses on behalf of Oracle and its Education Partners throughout the United States and Canada, and has instructed several hundred Oracle DBAs since 2005. He was selected as Oracle Education Partner Instructor of the Year in 2009. Jim resides in Bartlett, Illinois, USA with his wife Ruth, whose career as a project manager and software quality assurance manager for a multinational insurance company makes for interesting marital discussions. He enjoys cross-country skiing, biking, bird watching, and writing about his life experiences in the field of information technology.
