Using Oracle Database 11gR2's New ASM Features During ASM Migration
April 22, 2010
Synopsis. Oracle Database 11g Release 2 (11gR2) offers several new Automatic Storage Management (ASM) features for managing both Oracle database files and files stored within its new ASM Clustered File System (ACFS). This article illustrates how to upgrade an Oracle 11gR1 database quickly and efficiently to an Oracle 11gR2 database home and then migrate all of its database files so they're resident within ASM-managed storage.
The previous article in this series, Oracle Database 11gR2: Building an ASM Clustered File System (ACFS), explained how to:
This article will illustrate how to:
Upgrading an Existing Oracle Database 11gR1 to Oracle 11gR2
Now that I have a new Oracle 11gR2 database home installed and configured, I'll finally turn my attention to running an Oracle 11gR2 single-instance database within that new Oracle home. To accomplish this, I've selected one of my original "test bed" Oracle 11gR1 databases as the target of a manual upgrade to the latest release of Oracle (11.2.0.1.0).
Preparing to Upgrade Manually. To make this scenario a bit more interesting, I'll also be upgrading my "test bed" database from a 32-bit Oracle Enterprise Linux (OEL) environment to a corresponding 64-bit OEL environment. One preparatory step deserves a quick sketch here: Oracle's Pre-Upgrade Information Tool.
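That tool, utlu112i.sql, ships with the 11gR2 software and is run against the source database before the upgrade begins. The /tmp staging directory and spool file name below are illustrative assumptions, and $ORACLE_HOME must point at the new 11gR2 home when copying the script, since that's where it ships:

$ cp $ORACLE_HOME/rdbms/admin/utlu112i.sql /tmp
$ sqlplus / as sysdba
SQL> SPOOL /tmp/utlu112i_orcl.log
SQL> @/tmp/utlu112i.sql
SQL> SPOOL OFF

Its report flags obsolete initialization parameters, undersized tablespaces, and other issues that should be resolved before catupgrd.sql is run.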
Performing the Manual Upgrade. Now that I'm ready to proceed with the manual upgrade, I spooled my output to a new log file and then ran script catupgrd.sql from the $ORACLE_HOME/rdbms/admin directory; after about 30 minutes, it had successfully completed all of its conversion tasks. I've captured just the very beginning and very end of the spooled output (there were over 110,000 lines!) in Listing 3.2. Once the upgrade process was complete, I ran the utlrp.sql script from the $ORACLE_HOME/rdbms/admin directory to recompile any PL/SQL objects that were still invalid, and my database was ready for action.
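The session itself isn't reproduced in full, but it followed the standard 11gR2 manual-upgrade shape, roughly like the sketch below. The spool file name is an illustrative assumption, and because catupgrd.sql shuts the database down when it completes, utlrp.sql runs after a normal restart:

$ sqlplus / as sysdba
SQL> SPOOL /home/oracle/upgrade_orcl.log
SQL> STARTUP UPGRADE
SQL> @?/rdbms/admin/catupgrd.sql
SQL> SPOOL OFF
SQL> STARTUP
SQL> @?/rdbms/admin/utlrp.sql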
Converting to ASM File System Storage
I also have to tackle converting my newly upgraded Oracle 11gR2 database from its "native" Linux EXT3 file system to Oracle 11gR2 ASM storage. Thankfully, this is easier than ever because 11gR2 has added some excellent new functionality to ASM, as I'll demonstrate shortly; but first, I'll create another new ASM disk group, +FRA, that will become my new Oracle 11gR2 ASM-based Fast Recovery Area. I'll use the asmca utility to complete this prerequisite task:
As Figure 3.1 confirms, there’s no current FRA ASM disk group, so I’ll create one from the available candidate mount points shown in Figure 3.2 below.
Note that I've decided to use external redundancy for this new disk group; in a real-world scenario, of course, I'd want to be sure that these mount points are protected from data loss by some sort of RAID configuration. Figure 3.3 shows the end result of my configuration efforts:
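As an aside, asmca isn't the only route: the same disk group could also be created from a SQL*Plus session connected to the ASM instance. The disk path below is an illustrative assumption, not one of the actual candidate disks shown in Figure 3.2:

SQL> CREATE DISKGROUP FRA EXTERNAL REDUNDANCY
  2  DISK '/dev/oracleasm/disks/ASMDISK05'
  3  ATTRIBUTE 'compatible.asm' = '11.2';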
Another way to confirm the successful creation of this disk group is through the enhanced command set that the new 11gR2 ASMCMD utility provides. Listing 3.3 shows a sample of the output produced when I invoked the lsdg command within an ASMCMD session.
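Invoking the utility is simple; a minimal sketch (run with the environment pointed at the ASM instance, which is an assumption about this particular configuration) looks like this:

[oracle@11gR2Base ~]$ export ORACLE_SID=+ASM
[oracle@11gR2Base ~]$ asmcmd
ASMCMD> lsdg

Among other columns, lsdg reports each disk group's redundancy type, total and free space, and usable free space.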
Now that I’ve got a new ASM disk group as a target for my ASM-based Fast Recovery Area, I’m ready to migrate my database to ASM storage using a quite simple five-step approach:
1.) Migrating the SPFILE. I'll first convert the database's SPFILE so that my database will use ASM storage for any new database files that need to be created during the migration (which, of course, will be all of my database's files). To do this, I'll create a PFILE from the existing SPFILE, and then add a few parameters to it so that Oracle Managed Files (OMF) will point at the appropriate ASM disk groups.
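The PFILE creation itself boils down to a single command; the file name here is the same one referenced later in this step:

SQL> CREATE PFILE='/home/oracle/initorcl.toasm' FROM SPFILE;

These are the OMF-related parameters I'll then add to that PFILE: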
*.db_create_file_dest = "+DATA"
*.db_create_online_log_dest_1 = "+DATA"
*.db_recovery_file_dest = "+FRA"
Once these changes are made, I’ll build a new SPFILE from that PFILE within ASM storage in the new Oracle 11gR2 database home:
SQL> CREATE SPFILE FROM PFILE='/home/oracle/initorcl.toasm';
To confirm everything is working properly, I’ll start up the database instance in NOMOUNT mode. Here’s the result of that STARTUP NOMOUNT command from the perspective of the database’s alert log:
Thu Apr 15 16:48:42 2010
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 3
Using LOG_ARCHIVE_DEST_1 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on.
IMODE=BR
ILAT =27
LICENSE_MAX_USERS = 0
SYS auditing is disabled
NOTE: Volume support enabled
Starting up:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options.
Using parameter settings in client-side pfile /home/oracle/initorcl.toasm on machine 11gR2Base
System parameters with non-default values:
  processes                   = 150
  sga_max_size                = 776M
  shared_pool_size            = 128M
  large_pool_size             = 4M
  java_pool_size              = 4M
  streams_pool_size           = 4M
  sga_target                  = 752M
  control_files               = "/u01/app/oracle/oradata/orcl/control01.ctl"
  control_files               = "/u01/app/oracle/oradata/orcl/control02.ctl"
  control_files               = "/u01/app/oracle/oradata/orcl/control03.ctl"
  db_block_size               = 8192
  db_cache_size               = 376M
  compatible                  = "11.2.0.0.0"
  db_create_file_dest         = "+DATA"
  db_create_online_log_dest_1 = "+DATA"
  db_recovery_file_dest       = "+FRA"
. . .
Note that the database instance is still pointed at the original control files on non-ASM storage; that’s the next thing I’ll tackle during this migration process.
2.) Migrating the Control Files. Since the instance is already started, I'll issue the following SQL*Plus command to prepare for migrating multiplexed versions of the control files, one each to the +DATA and +FRA disk groups:
ALTER SYSTEM SET CONTROL_FILES='+DATA','+FRA' SCOPE=SPFILE SID='*';
This forces the SPFILE to register the new locations for the control files. Next, I'll perform a SHUTDOWN ABORT, restart the instance, and then confirm the new parameters are in place.
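The restart boils down to two commands; note that the instance must come back up in NOMOUNT mode, since the multiplexed control files don't yet exist within ASM:

SQL> SHUTDOWN ABORT
SQL> STARTUP NOMOUNT

Checking the alert log after the restart confirms the new parameter values: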
. . .
Using parameter settings in server-side spfile /u01/app/oracle/acfsmounts/acfs_db1/dbs/spfileorcl.ora
System parameters with non-default values:
  processes                   = 150
  sga_max_size                = 776M
  shared_pool_size            = 128M
  large_pool_size             = 4M
  java_pool_size              = 4M
  streams_pool_size           = 4M
  sga_target                  = 752M
  control_files               = "+DATA"
  control_files               = "+FRA"
  db_block_size               = 8192
  db_cache_size               = 376M
  compatible                  = "11.2.0.0.0"
  db_create_file_dest         = "+DATA"
  db_create_online_log_dest_1 = "+DATA"
  db_recovery_file_dest       = "/u01/app/flash_recovery_area"
  db_recovery_file_dest_size  = 8G
. . .
With the database still in NOMOUNT mode, I’ll now restore a copy of the current control file to the two multiplexed locations via a few simple RMAN commands:
[oracle@11gR2Base ~]$ rman target /

Recovery Manager: Release 11.2.0.1.0 - Production on Thu Apr 15 17:20:33 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (not mounted)

RMAN> RESTORE CONTROLFILE FROM '/u01/app/oracle/oradata/orcl/control01.ctl';

Starting restore at 2010-04-15 17:20:59
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=24 device type=DISK
channel ORA_DISK_1: copied control file copy
output file name=+DATA/orcl/controlfile/current.257.716404861
output file name=+FRA/orcl/controlfile/current.256.716404861
Finished restore at 2010-04-15 17:21:07
Excellent! Once I've issued the ALTER DATABASE MOUNT; command from SQL*Plus to mount these two newly-copied control files (see the sanity check sketched below), it's time to move on to the next migration step.
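A quick query against V$CONTROLFILE confirms that the instance is now reading the ASM-resident copies. The output shown here is what I'd expect given the RMAN output above, rather than a capture from the original session:

SQL> ALTER DATABASE MOUNT;

Database altered.

SQL> SELECT name FROM v$controlfile;

NAME
--------------------------------------------------
+DATA/orcl/controlfile/current.257.716404861
+FRA/orcl/controlfile/current.256.716404861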