Oracle RAC Administration - Part 12: RAC Essentials
January 4, 2007
In 2006, we looked at RAC database installations on Windows 2003 Server and Linux (both RHEL and CentOS 4.2). Red Hat will soon release version 5 (based on the 2.6.18 kernel) and we will test it against our new Oracle RAC. We will also take the opportunity to test Oracle RAC on Oracle Enterprise Linux.
Red Hat didn't falter from its course and booked a good win in 2006, while Oracle's Enterprise Linux still has to gain some momentum. We will continue to monitor developments and test our systems on all possible platforms, so you can train with us to become a professional DBA.
RAC essentials examined
In our last article, we went through some RAC-specific parameters; what we could not cover there will be taken up right away. So let's get started with oifcfg.
OIFCFG (Oracle Interface Configuration):
We will cover the following oifcfg topics:
1. Configuring NICs (network interfaces)
2. The syntax of the oifcfg command-line tool
What you can achieve with OIFCFG is very useful to your RAC. You can activate and deactivate NICs, allocate and deallocate them to components, direct components to use new or altered NICs when you need to replace or add cards, and retrieve component configuration information. Do you know where the information that you get, or that the OUI (Oracle Universal Installer) uses, concerning the status of your NICs comes from? You guessed it: it's our good ol' OIFCFG command-line tool sweating it out in the background!
Configuring NICs (Network Interface Cards) with OIFCFG
A NIC is the heart of it all in a typical RAC. I say typical because someday, on a mainframe with robust virtualization, you may not need to rely on NICs at all, and with faster bandwidths (we already have 10Gbps) syncing across nodes may become less of a headache than it is today. A typical NIC is uniquely identified by its name, address, subnet mask and type; the type indicates the purpose for which the network is configured. In our RAC, the supported interface types are public (client communication) and cluster_interconnect (inter-node traffic).
You can store a NIC as a global interface or as a node-specific interface. The global interface is recommended (meaning all nodes have the same interface connected to the same subnet), but a node-specific interface can be used when some nodes in the RAC use different sets of NICs and subnets. Should a NIC be configured as both a global and a node-specific interface, the node-specific configuration takes precedence over the global one. A NIC specification takes the form if_name/subnet:if_type.
As an example, a high speed cluster interconnect interface named eth1 on the 10.0.0.0 subnet (host address 10.0.0.25) would be specified as:
eth1/10.0.0.0:cluster_interconnect
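Note that oifcfg expects the subnet address, not a host IP. A quick Python sketch (illustrative, not part of the original article) of deriving the 10.0.0.0 subnet from the host address 10.0.0.25, assuming a /24 netmask:

```python
import ipaddress

# Derive the subnet that oifcfg expects from a host address and prefix
# length (a /24 netmask is assumed here; adjust to your actual network).
iface_ip = "10.0.0.25"
network = ipaddress.ip_network(f"{iface_ip}/24", strict=False)
subnet = str(network.network_address)
print(subnet)  # 10.0.0.0

# The resulting oifcfg interface specification: if_name/subnet:if_type
spec = f"eth1/{subnet}:cluster_interconnect"
print(spec)  # eth1/10.0.0.0:cluster_interconnect
```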
Syntax for the OIFCFG Tool
You can list the NICs, and the subnets of all the NICs, available to the local node with the iflist command:
oifcfg iflist
eth0  172.22.202.255
eth1  10.0.0.255
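When scripting health checks against that two-column output, it helps to parse it into name/subnet pairs. A minimal sketch; the sample output mirrors the listing above:

```python
# Parse oifcfg iflist output (interface name, subnet) into a dict.
sample_output = """eth0  172.22.202.255
eth1  10.0.0.255"""

interfaces = {}
for line in sample_output.splitlines():
    name, subnet = line.split()
    interfaces[name] = subnet

print(interfaces)  # {'eth0': '172.22.202.255', 'eth1': '10.0.0.255'}
```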
It is also possible to extract specific interface information with the getif command:
oifcfg getif [ [-global | -node nodename] [-if if_name[/subnet]] [-type if_type] ]
If you wish to store a new interface, use the setif command:
oifcfg setif -global eth1/172.16.22.0:cluster_interconnect
For a high speed interconnect in a two-node RAC, you could create an nds0 interface with the following command, given that 172.16.22.0 is the subnet address for the high speed interconnect on racnode1 and racnode2:
oifcfg setif -global nds0/172.16.22.0:cluster_interconnect
Use the delif command to delete an existing interface:
oifcfg delif -global eth0/172.22.22.0
Whereas this deletes all of the global NICs:
oifcfg delif -global
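The replace-a-NIC workflow mentioned earlier boils down to a setif for the new interface followed by a delif for the old one. A minimal sketch that assembles that command sequence (the interface names and subnets below are illustrative, not from the article):

```python
def replace_interface_cmds(old_if, old_subnet, new_if, new_subnet, if_type):
    """Build the oifcfg commands to swap one global interface for another.

    Register the new interface first so the cluster configuration never
    has a gap, then drop the old one.
    """
    return [
        f"oifcfg setif -global {new_if}/{new_subnet}:{if_type}",
        f"oifcfg delif -global {old_if}/{old_subnet}",
    ]

cmds = replace_interface_cmds("eth1", "10.0.0.0", "eth2", "10.0.1.0",
                              "cluster_interconnect")
for c in cmds:
    print(c)
```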
Cache Fusion : A quick look
The manual defines Cache Fusion as "a diskless cache coherency mechanism in Real Application Clusters that provides copies of blocks directly from a holding instance's memory cache to a requesting instance's memory cache."
Cache Fusion is the cynosure of the whole RAC. It ships blocks between the SGAs of the nodes in a cluster over the high speed interconnects, so a block never has to pass via the disk to be reread into the buffer cache of another instance. When a block is read into an instance's cache, a lock resource is assigned to it, ensuring that the other nodes are aware the block is in use. If another node (say node2) requests a copy of that block, it is shipped across the cluster interconnect directly to the SGA of the requesting node; if the block has changed in memory, a consistent-read (CR) copy is shipped instead. The overhead of writing to disk, which we all know so well as DBAs, is thereby evaded, saving cost by eliminating the extra I/O needed to sync the buffer caches across the RAC. It is therefore of critical importance that you set up your interconnects properly.
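As a purely illustrative toy model (a conceptual sketch, not Oracle internals), the "ship the block over the interconnect instead of rereading it from disk" decision can be expressed like this:

```python
# Toy model of Cache Fusion block shipping: a requesting node receives a
# block from a holding node's cache over the interconnect, falling back
# to a physical disk read only when no cache holds the block.
disk = {"block42": "v1"}

node1_cache = {"block42": "v2"}  # node1 has changed the block in memory
node2_cache = {}

def read_block(block_id, local_cache, remote_caches, disk):
    if block_id in local_cache:
        return local_cache[block_id], "local cache"
    for cache in remote_caches:
        if block_id in cache:
            # Ship a copy across the interconnect; no disk I/O needed.
            local_cache[block_id] = cache[block_id]
            return local_cache[block_id], "interconnect"
    # No instance holds the block in memory: read it from disk.
    local_cache[block_id] = disk[block_id]
    return local_cache[block_id], "disk"

value, source = read_block("block42", node2_cache, [node1_cache], disk)
print(value, source)  # v2 interconnect
```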
In the next article, we will continue our RAC DBA administration sessions. We will examine what cache coherency is and what the essential components are for a robust RAC operation.