Oracle RAC: How shared storage works on VMware – Part 3

Brief intro

In our previous article,
we looked at the clustering possibilities on the ESX Server. In this article,
we will take a detailed look at building clusters across several physical ESX hosts,
as well as at clustering between ESX hosts and physical machines.

Clustering Oracle RAC Virtual Machines across ESX hosts

Here we will create and customize the
first node, create and customize the second node, and then add the shared disks and
the network configuration. The first step, creating the virtual machine, is very
similar to the procedure in the last article (see Diagram 1, creating a VM).

  • You will create a VM, add two network adapters, and add local storage for booting
    and swapping. I personally add two local disks on SCSI0, my first controller: a
    typical 10G disk for boot (I customize the layout and put everything under /) and a
    4G disk for swap. Then go ahead and install the operating system (Windows, Linux or
    Unix/Solaris). A sketch of the resulting disk and NIC entries follows this list.
  • The second node is created much
    the same way as the first one.
  • Adding shared storage: Here we add remote disks on a new SCSI controller, normally
    SCSI1, and then configure the IP addresses. You can configure the addresses in
    advance as well; just watch out for Solaris 10.3, where my NICs vanished after I
    added the shared storage. In that case it might be wise to configure the IP
    addresses at the very end. The disks must point to SAN LUNs, and the RDMs (Raw
    Device Mappings) must be in physical compatibility mode.
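To make the target layout concrete, here is a minimal sketch of what the relevant entries in the first node’s .vmx file look like after this step. The disk file names and the network names (Public, Interconnect) are illustrative assumptions, and the lines beginning with # are annotations rather than part of the file:

    # first SCSI controller: local boot (10G) and swap (4G) disks
    scsi0.present = "TRUE"
    scsi0.virtualDev = "lsilogic"
    scsi0:0.present = "TRUE"
    scsi0:0.fileName = "VM01.vmdk"
    scsi0:1.present = "TRUE"
    scsi0:1.fileName = "VM01_swap.vmdk"
    # two NICs: one for the public network, one for the RAC interconnect
    ethernet0.present = "TRUE"
    ethernet0.networkName = "Public"
    ethernet1.present = "TRUE"
    ethernet1.networkName = "Interconnect"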

To recap, we will
quickly go through the steps of creating the first node:

  • I normally create resource pools and set all the shares (CPU, memory) to Normal.
    Please refer to the ESX administration manual for more details; resource pools are
    an excellent means of allocating resources to your VMs. If you don’t want resource
    pools, you can just select your ESX host and start creating your VM on it.
  • Datastore: Choose a local datastore. You can create the disks in advance and then
    allocate the *.vmdk files to the newly created VM, or you can create them via the
    “create VM” wizard.
  • Guest Operating System (OS): I normally have a separate LUN reserved for VM
    templates and ISO images. Here I select the OS I need to install. If you already
    have VirtualCenter, you can create VMs from ready-made VM templates. Should you
    not have the templates, just go ahead and install the OS.
  • CPUs: Choose as many vCPUs as you
    need.
  • Memory: This depends on how much you have on your ESX server. I prefer to keep it
    to 2G per VM, but you can also try 1G per VM; just make sure to tune your Oracle
    memory management manually.
  • Network Interface Cards (NICs):
    Create two of them.
  • Click OK and create your VM.
  • After creation, perform the necessary post-creation tasks, such as configuring
    your floppy, CD drive, etc.
  • Installing the guest OS: We pointed our CD-ROM drive to the ISO image of our OS;
    now it’s time to install it. Select the VM and, in the CD drive settings, check the
    “Connected” box. Now power on the virtual machine and continue with the
    installation. For Linux, we have already detailed the installation of RHEL as well
    as Oracle Enterprise Linux; follow those steps carefully as documented and you will
    have a successful installation. (The corresponding CD-ROM configuration entries are
    sketched after this list.)
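For reference, pointing the CD-ROM at an ISO on a datastore and marking it connected corresponds to .vmx entries along these lines. The datastore and ISO file names are illustrative assumptions, and the # lines are annotations:

    # CD-ROM backed by an ISO image, connected at power-on
    ide1:0.present = "TRUE"
    ide1:0.deviceType = "cdrom-image"
    ide1:0.fileName = "/vmfs/volumes/templates_isos/rhel-disc1.iso"
    ide1:0.startConnected = "TRUE"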

Now let’s see what we need to do to
create the second node. The best way is to clone the machine you just created; why
go to all the trouble of building machines from scratch? If you are going to test an
8-node RAC on your ESX hosts, you might as well do it smartly. That is what makes
VMware ESX Server such an exciting platform to work on.

  • Shut down node 1 and power it off.
  • In your VI client, select VM01 (we will call the nodes VM01 and VM02 for the sake
    of simplicity) and right-click to clone it.
  • Name/Location: Choose VM02 and the
    Datastore where both nodes should be stored as per your capacity planning.
  • Host or Cluster: Choose the second
    ESX host (let’s call it ESXRAC02) for cluster setup.
  • Resource pool: Select a resource pool if you have created one, or just add the
    machine directly to the new ESX host, ESXRAC02.
  • Datastore: Choose a local datastore. In any case, make sure that you have enough
    space to accommodate the new VM.
  • Do not customize
  • Select OK to create the clone

Note: Many users may not be able to clone,
or may not find cloning the right way to go. I have a tip for those users: did you
know that you can use the free VMware Converter to clone your VM on your ESX host or
VMware Server/Workstation? Try it. In another article, we will throw in some
screenshots. If you prefer working from the service console instead, a command-line
sketch follows.
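A rough command-line equivalent is to copy the disks with vmkfstools, copy and adapt the .vmx file, and register it. This is only a sketch; the datastore names (local01, local02) and paths are assumptions, so adjust them to your environment:

    # create the target directory and copy node 1's boot and swap disks into it
    mkdir /vmfs/volumes/local02/VM02
    vmkfstools -i /vmfs/volumes/local01/VM01/VM01.vmdk /vmfs/volumes/local02/VM02/VM02.vmdk
    vmkfstools -i /vmfs/volumes/local01/VM01/VM01_swap.vmdk /vmfs/volumes/local02/VM02/VM02_swap.vmdk
    # copy VM01.vmx to VM02.vmx, edit the display name and disk paths, then register it
    vmware-cmd -s register /vmfs/volumes/local02/VM02/VM02.vmx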

Adding Shared Storage

To add shared storage please use the
following steps:

  • Select the first node, VM01 (we will repeat the procedure for VM02 at the end of
    this list)
  • Click Add, choose Hard Disk, and then click Next
  • For the disk type, choose Mapped SAN LUN
  • In the LUN choices, pick an unformatted LUN (make sure you do all of this together
    with your SAN administrator) and click Next
  • When selecting the datastore, select the local datastore and click Next. (The RDM
    mapping file is also stored in this location.)
  • Select “physical compatibility mode” and click Next. (You will notice in the Edit
    Settings tab that a new SCSI controller, SCSI 1, is created when you place your
    virtual hard disk on SCSI (1:0).) Add all the shared disks to it: ocr.vmdk,
    asmfile.vmdk, votingdisk.vmdk, asm01.vmdk, asm02.vmdk, etc.
  • Select a new virtual device node and choose SCSI (1:0), SCSI (1:1), SCSI (1:2),
    etc. for all your RAC disks.
  • Click Finish to create all the
    disks.
  • Select this new SCSI1 controller and change the controller type to LSI Logic
  • In the same panel, change the SCSI Bus Sharing to Physical (as shown in the
    screenshot below).
  • Click OK. (As mentioned in my previous article, when starting my VM I once got a
    few dialog boxes asking me to change the controller type to LSI Logic; I
    acknowledged all the messages.)
  • Follow the same steps for the VM02 node. All you have to do in this case is point
    to the “existing vmdk” files created in the steps above. Allocate SCSI (1:0),
    SCSI (1:1), etc. to the disks in the same manner as you did for VM01, and you now
    have a fully configured multi-ESX-host Oracle RAC cluster, ready to be installed
    with the RAC software. (The sketch after this list shows what the resulting
    shared-controller entries look like.)
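For reference, here is a minimal sketch of the shared-controller and RDM entries both nodes should end up with in their .vmx files, together with the vmkfstools command that creates a pass-through (physical compatibility) RDM pointer by hand. The datastore name, the vmhba device path and the disk names are illustrative assumptions, and the # lines are annotations:

    # second SCSI controller, shared across ESX hosts
    scsi1.present = "TRUE"
    scsi1.virtualDev = "lsilogic"
    scsi1.sharedBus = "physical"
    # one RDM pointer file per shared disk, attached to SCSI (1:x)
    scsi1:0.present = "TRUE"
    scsi1:0.fileName = "ocr.vmdk"
    scsi1:1.present = "TRUE"
    scsi1:1.fileName = "votingdisk.vmdk"

    # manual equivalent of "Mapped SAN LUN" in physical compatibility mode
    vmkfstools -z /vmfs/devices/disks/vmhba1:0:10:0 /vmfs/volumes/local01/VM01/ocr.vmdk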

After the multi-host setup, your
configuration will look like this (diagram courtesy of the VMware documentation):

Here, the remote storage is your SAN or
NAS. FC (Fibre Channel) is used for the SAN, and each ESX host will also have
additional HBAs.
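Before you can map those LUNs as RDMs, each ESX host must actually see them through its HBAs. A quick check from the service console of every host might look like this sketch (vmhba1 is an assumed adapter name; confirm the LUN numbering with your SAN administrator):

    # rescan the Fibre Channel adapter for newly presented LUNs
    esxcfg-rescan vmhba1
    # list the paths and LUNs the host can see
    esxcfg-mpath -l
    esxcfg-vmhbadevs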

Conclusion

In the next installment, we will explore
the Shared Storage considerations where one node is still on a physical machine
and the new node is on an ESX host. Are you thinking what I am thinking? Yes
indeed, migrating your Oracle RAC from physical to virtual might just be a
matter of deleting physical nodes while you are adding new nodes on the ESX
host! You can do it all yourself!
