Posted Jan 4, 2007

Oracle RAC Administration - Part 12: RAC Essentials

By Tarry Singh

Brief intro

In the year 2006, we looked at RAC database installations on Windows 2003 Server and Linux (both RHEL and CentOS 4.2). Red Hat will soon be releasing version 5 (based on the 2.6.18 kernel) and we will test it against our new Oracle RAC. We will also take the opportunity to test Oracle RAC on Oracle Enterprise Linux.

Red Hat didn't falter from its course and booked a good win in 2006, while Oracle's Enterprise Linux has yet to gain momentum. We will continue to monitor developments and test our systems on all possible platforms so you get to train with us to become a professional DBA.

RAC essentials examined

In our last article, we went through some RAC-specific parameters; what we could not cover there, we will take up right away. So let's get started with oifcfg.

OIFCFG (Oracle Interface Configuration):

In covering oifcfg, we will:

1.  Configure NICs (network interfaces)

2.  Check the syntax of the oifcfg command-line tool

What you can achieve with OIFCFG is very useful to your RAC. You can activate and deactivate NICs, allocate and deallocate them to components, direct components to use new or altered NICs when you need to replace or add cards, and retrieve component configuration information. Do you know where the information that you get, or that the OUI (Oracle Universal Installer) uses, concerning the status of your NICs comes from? Well, you guessed it right; it's our good ol' OIFCFG command-line tool sweating it out in the background!

Configuring NICs (Network Interface Cards) with OIFCFG

A NIC is the heart of it all, at least in a typical RAC. I say typical because someday, on a mainframe with robust virtualization, you may not need to rely on NICs at all, and with ever-greater bandwidths (we already have 10Gbps) keeping nodes in sync may become less of a headache than it is today. A typical NIC is uniquely identified by its name, address, subnet mask and type; the type indicates the purpose for which the network is configured. In our RAC, the following interfaces are supported:

  • Public network card: This NIC is used for external communication, such as Oracle Net Services traffic and the VIP (virtual IP) addresses.
  • Private network card: Also known as the high speed interconnect for the cluster; this private interface is used for the cluster interconnect to carry inter-node, or Cache Fusion (we will cover it briefly below), traffic.

You can store a NIC as either a global interface or a node-specific interface. A global interface (meaning all nodes have the same interface connected to the same subnet) is what is recommended, but a node-specific interface is used when some nodes across the RAC use different sets of NICs and subnets. Should a NIC be configured as both a global and a node-specific interface, the node-specific configuration takes precedence over the global one. A typical NIC specification looks like this:

interface_name/subnet:interface_type

As an example, a high speed cluster interconnect interface named eth1 with the address 10.0.0.25 would be specified as:

eth1/10.0.0.25:cluster_interconnect
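For comparison, a public interface follows the same pattern. Assuming the eth0 card and an illustrative subnet of 172.22.202.0 (these values are not taken from the setup above), its specification would read:

eth0/172.22.202.0:public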
 

Syntax for the OIFCFG Tool

Running the oifcfg -help command will display the entire online help for OIFCFG. The regular flags of OIFCFG commands are:

  • nodename: the name of a node in your RAC cluster. (You can list your node names with the olsnodes command; see the example after this list.)
  • if_name: name of the NIC to be configured.
  • subnet: subnet address of the NIC.
  • if_type: type of interface: public or private (cluster_interconnect).
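
As a quick, illustrative sketch, and assuming the two-node cluster (racnode1 and racnode2) referred to later in this article, listing the node names would look something like this:

olsnodes
racnode1
racnode2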

Running the iflist keyword lists the NICs available to the local node, along with their subnets, as this example illustrates:

oifcfg iflist
eth0     172.22.202.255
eth1     10.0.0.255

It is also possible to extract specific OIFCFG information with a getif command; see the syntax illustrated below:

oifcfg getif [ [-global | -node nodename] [-if if_name[/subnet]] [-type if_type] ]
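
For illustration, running getif with no arguments lists every stored interface. With the global eth0/eth1 layout assumed in this article (the subnets here are illustrative), the output would resemble the following, showing the name, subnet, scope and type of each interface:

oifcfg getif
eth0  172.22.202.0  global  public
eth1  10.0.0.0  global  cluster_interconnect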

If you wish to store a new interface, use the setif keyword. For example, to store the interface eth1, with the subnet 172.16.33.0, as a global interface for the high speed interconnect, you would use this command:

oifcfg setif -global eth1/172.16.33.0:cluster_interconnect

For a high speed interconnect (HSI) in a two-node RAC, you could typically create an nds0 interface using the following command, given that 172.16.33.0 is the subnet address for the high speed interconnect on racnode1 and racnode2:

oifcfg setif -global nds0/172.16.33.0:cluster_interconnect
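
If one node deviates from the global layout, you can instead store a node-specific definition, which, as noted earlier, takes precedence over the global one. The node name, card name and subnet in this sketch are purely illustrative:

oifcfg setif -node racnode2 eth2/192.168.1.0:cluster_interconnect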

Use the OIFCFG delif command to delete the stored configuration for global or node-specific interfaces. You do so by providing the interface name and, optionally, its subnet on the command line. Depending on whether you provide the -node or -global option, the delif keyword deletes either the given interface or all of the global and node-specific interfaces on all of the nodes in the cluster. In this example, the command deletes the global interface eth0 for subnet 172.22.22.0:

oifcfg delif -global eth0/172.22.22.0

Whereas this deletes all of the global NICs:

oifcfg delif -global
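
Similarly, you can remove only a node-specific definition by naming the node; the node name and interface in this sketch are again illustrative:

oifcfg delif -node racnode2 eth2/192.168.1.0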

Cache Fusion: A quick look

The manual says: “A diskless cache coherency mechanism in Real Application Clusters that provides copies of blocks directly from a holding instance's memory cache to a requesting instance's memory cache.”

Cache Fusion is the cynosure of the whole RAC. It enables blocks to be shipped between the SGAs of the nodes in a cluster over the high speed interconnect, avoiding the round trip via disk and the reread into the buffer cache of another instance. When a block is read into an instance's cache, a lock resource is assigned to it, ensuring that other nodes are aware that the block is in use. If node2 requests a copy of that block, it is shipped via the cluster interconnect directly to the SGA of the requesting node; if the block has changed in memory, a CR (consistent read) copy is shipped to the requesting node instead. This means that the overhead of having to write to disk (which we all know so well as DBAs) is avoided, saving cost by reducing the extra I/O needed to keep the buffer caches in sync across the RAC. It is therefore of critical importance that you set up your interconnects properly.
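
A quick sanity check, using the getif syntax shown earlier, is to confirm which interface is registered as the cluster interconnect. With the global eth1 configuration assumed above, this would return something like:

oifcfg getif -type cluster_interconnect
eth1  10.0.0.0  global  cluster_interconnect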

Conclusion:

In the next article, we will continue our RAC DBA administration sessions. We will look at what cache coherency is and what the essential components are for robust RAC operation.

» See All Articles by Columnist Tarry Singh


