Upgrade Oracle 9i RAC to Oracle 10g RAC - Page 2
January 31, 2006 by Vincent Chan
Step 2: Install Oracle Clusterware Software
Oracle Clusterware requires two files on shared raw devices or on a Cluster File System (CFS): the Oracle Cluster Registry (OCR) and the voting disk. The Shared Configuration file and the quorum file in Oracle 9i RAC have been renamed to the Oracle Cluster Registry and the voting disk, respectively. These files must be accessible to all nodes in the cluster.
To avoid a single point of failure, the OCR and voting disk can now be multiplexed by Oracle Clusterware. You can create up to two OCR files and up to three voting disks.
2a. Oracle Clusterware pre-installation checks
Cluster Verification Utility (CVU) reduces the complexity and time it takes to install RAC. The tool scans all the required components in the cluster environment to ensure all criteria are met for a successful installation. Additional information on CVU can be found at http://download-east.oracle.com/docs/cd/B19306_01/rac.102/b14197/appsupport.htm#BEHIJAJC
Install cvuqdisk RPM prior to running CVU:
rpm -iv /stage/clusterware/rpm/cvuqdisk-1.0.1-1.rpm
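To confirm the package registered correctly before running CVU, you can query the RPM database (the version string shown depends on the media you staged):

```shell
# Verify the cvuqdisk package was installed from the staged media
rpm -q cvuqdisk
```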
To check for shared storage accessibility, run:
/stage/clusterware/cluvfy/runcluvfy.sh comp ssa -n salmon1,salmon2
Upgrade OCFS to the minimum required version if you receive the following warning:
WARNING: OCFS shared storage discovery skipped because OCFS version 1.0.14 or later is required.
The procedure to upgrade OCFS is located at http://oss.oracle.com/projects/ocfs/dist/documentation/How_To_Upgrade.txt
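Before following the upgrade procedure, it is worth checking which OCFS packages are currently installed; a quick sketch (package names vary with the kernel flavor):

```shell
# List the installed OCFS packages and versions;
# upgrade if the version is below the required 1.0.14
rpm -qa | grep -i ocfs
```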
To perform Oracle Clusterware pre-installation checks, run:
/stage/clusterware/cluvfy/runcluvfy.sh stage -pre crsinst -n salmon1,salmon2 -verbose
CVU reports the error below if non-routable IP addresses such as 192.168.*.*, 172.16.*.* through 172.31.*.*, or 10.*.*.* are used for the public interface (eth0).
ERROR: Could not find a suitable set of interfaces for VIPs. Node connectivity check failed.
The error can be safely ignored. As a workaround, invoke the Virtual IP Configuration Assistant (VIPCA) manually during the installation on the second node and a suitable network interface (eth0) will be detected.
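A sketch of that manual workaround, run as root on the second node once root.sh has completed (the CRS home path matches this article's environment; VIPCA is a GUI tool, so a valid X display is assumed):

```shell
# Point VIPCA at a valid X display, then launch it manually as root
# so it can detect eth0 as a suitable public interface
export DISPLAY=:0
/u01/app/oracle/product/crs/bin/vipca
```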
2b. Install Oracle Clusterware software
There are two approaches to installing the Oracle Clusterware:
1. Upgrade the existing Shared Configuration file to Oracle 10g OCR format.
2. Create new OCR files.
With option 1, the OUI automatically upgrades the Oracle 9i Shared Configuration file to the 10g OCR format, and the OCR file locator, /etc/oracle/ocr.loc, points to the location of the upgraded Shared Configuration file (/ocfs/prod1/srvm). After the upgrade, the pointer file /var/opt/oracle/srvConfig.loc is updated to point to /dev/null:
[oracle@salmon1]$ more /var/opt/oracle/srvConfig.loc
srvconfig_loc=/dev/null
During the upgrade, the OUI does not provide the option of specifying multiple OCR and voting disk locations. You can, however, use ocrconfig and crsctl to multiplex the files manually.
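As a sketch of that manual multiplexing (run as root with the CRS stack down; the syntax shown is for 10g Release 2, and the mirror and voting-disk paths here are illustrative, not taken from the upgrade itself):

```shell
# Add an OCR mirror on shared storage
ocrconfig -replace ocrmirror /ocfs/prod1/ocr2

# Add extra voting disks; 10.2 requires -force while CRS is stopped
crsctl add css votedisk /ocfs/prod1/vote2 -force
crsctl add css votedisk /ocfs/prod1/vote3 -force
```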
With option 2, you simply perform a fresh Oracle Clusterware installation. You have to shut down all Oracle 9i RAC processes, including the database instances, listeners, Global Services Daemons, and Oracle Cluster Manager, and rename the Shared Configuration pointer file, srvConfig.loc, to prevent the OUI from detecting the existing Oracle 9i RAC environment.
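A sketch of that shutdown-and-rename sequence; the database name here is an assumption, and the method for stopping Oracle Cluster Manager varies by 9i release and platform:

```shell
# Stop the 9i database and its node applications
srvctl stop database -d prod   # "prod" is a placeholder database name
lsnrctl stop                   # stop the listener on each node
gsdctl stop                    # stop the Global Services Daemon on each node

# Also stop Oracle Cluster Manager on each node (platform-specific)

# Rename the pointer file so the OUI does not detect the 9i cluster
mv /var/opt/oracle/srvConfig.loc /var/opt/oracle/srvConfig.loc.bak
```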
During the installation, you will be prompted with the option of multiplexing the OCR and voting disk.
My preference is to perform a fresh install. In this demonstration, we will use option 2 to install the Oracle 10g Clusterware software.
Mount the Oracle Clusterware CD or download the software from OTN. Launch the OUI on the first node only; during installation, the installer automatically copies the software to the second node.
1. Welcome: Click on "Next"
[oracle@salmon1]$ more /etc/oracle/ocr.loc
ocrconfig_loc=/ocfs/prod1/ocr1
ocrmirrorconfig_loc=/ocfs/prod1/ocr2
local_only=FALSE

[oracle@salmon1]$ srvctl status nodeapps -n salmon1
VIP is running on node: salmon1
GSD is running on node: salmon1
PRKO-2016 : Error in checking condition of listener on node: salmon1
ONS daemon is running on node: salmon1

[oracle@salmon1]$ /u01/app/oracle/product/crs/bin/olsnodes -n
salmon1 1
salmon2 2

[oracle@salmon1]$ ps -ef | egrep "cssd|crsd|evmd"