Handling Disaster Recovery When Using Oracle Managed Files

Disaster recovery should be on the mind of every Oracle DBA, even though the goal is to never need it. Nonetheless, many enterprises run regular disaster recovery tests to ensure that daily business will not be interrupted should a catastrophic event affect the data center. Before the advent of Oracle Managed Files, a ‘proper’ disaster recovery configuration included generating a controlfile creation script from the primary database; during a failover that script replaces the standby controlfile at the disaster recovery site so the standby can be opened and used as a ‘regular’ database. When Oracle Managed Files (OMFs) are in use, the situation changes. Let’s look at what Oracle Managed Files do and why a script to create the controlfile can’t originate from the primary database.

Oracle Managed Files, for those not familiar with them, are dynamically named based on the tablespace and include a generated value that uniquely identifies each file in the database. Each database installation generates its own unique identifiers, and those identifiers won’t match between the primary and standby databases. Thus a ‘create controlfile’ statement generated from the primary database will not reference the file names generated at the standby, and the controlfile creation will fail. Let’s look at an example using a primary database (prim_db) and its associated standby (stby_db).

PRIM_DB is configured to use Oracle Managed Files to make it easier for the DBA to add datafiles. This is enabled through the db_create_file_dest parameter:


SQL> show parameter create_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest                  string      /oxnard
SQL>
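
Should the parameter not already be set, it can be configured dynamically; a minimal sketch, assuming the /oxnard destination shown above:


SQL> alter system set db_create_file_dest = '/oxnard' scope=both;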

So, for PRIM_DB, all datafiles will be located at /oxnard/PRIM_DB/datafile and will have uniquely generated file names:


...
o1_mf_sysaux_fsf2fb5y_.dbf
o1_mf_system_fsf2dkw2_.dbf
...

The standby database is also configured to use Oracle Managed Files; looking at the file names for SYSTEM and SYSAUX for that database you see:


...
o1_mf_sysaux_g02yt5l0_.dbf
o1_mf_system_g02yqhl6_.dbf
...

As mentioned previously, the unique identifiers will not match between databases. The create controlfile script must therefore be generated at the standby, so that it references file names the standby can actually use. It should not be necessary to copy the init.ora file from the primary to the standby, as only minimal changes are required to run the new primary from the standby location. Those changes are:


db_unique_name
local_listener
log_archive_dest_1
<any site-specific init parameter settings also go here>

Having a second init.ora file, named init_prim.ora, containing these changes will make it easy to start the new primary so the controlfile can be created.
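
As a rough sketch, init_prim.ora might differ from the standby’s existing init.ora in only these entries (the values below are hypothetical and must be adjusted for the DR site):


# hypothetical values for illustration only
db_unique_name = 'stby_db'
local_listener = 'LISTENER_STBY'
log_archive_dest_1 = 'location=/oxnard/stby_db/arch'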

Having a current create controlfile script ready and waiting is simple and straightforward using the ‘alter database backup controlfile to trace’ command; an example is shown below:


set echo on
spool /oxnard/oracle/sql/stby_db_ctlfile.out
alter database backup controlfile to trace as '/home/oracle/recov_inits/stby_db_ccf.txt' reuse resetlogs;
spool off
exit

The ‘create controlfile’ script will generate one CREATE CONTROLFILE statement rather than two, restricting the output to the RESETLOGS option and reusing the existing controlfile name. [Note that a ‘normal’ create controlfile script will generate two statements, a RESETLOGS option and a NORESETLOGS option; in the case of a DR exercise the RESETLOGS option is the one that is used, thus the restriction.] Since this statement is generated from the standby, the file names will be correctly specified and the new, regular controlfile will be created successfully.
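
For illustration, the generated statement might look something like the following (abridged, with hypothetical paths, sizes and limits; the actual output will reflect the standby’s own configuration):


CREATE CONTROLFILE REUSE DATABASE "STBY_DB" RESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXDATAFILES 1024
LOGFILE
  GROUP 1 SIZE 512M BLOCKSIZE 512,
  GROUP 2 SIZE 512M BLOCKSIZE 512
DATAFILE
  '/oxnard/STBY_DB/datafile/o1_mf_system_g02yqhl6_.dbf',
  '/oxnard/STBY_DB/datafile/o1_mf_sysaux_g02yt5l0_.dbf'
CHARACTER SET AL32UTF8
;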

Using OMF also changes how TEMP files are created; since file and path names are no longer required when creating such files, the command is simplified:


SQL> alter tablespace temp add tempfile size [value];

The same command can be executed multiple times to create as many tempfiles as necessary.
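
For example, to add a 2 gigabyte tempfile (the size here is arbitrary):


SQL> alter tablespace temp add tempfile size 2g;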

Using Oracle Managed Files with Data Guard configurations changes how controlfiles are created, but it isn’t a difficult change to implement. It also eliminates issues across network links, as the files are created locally. Create a location to store the generated scripts and set up a cron task to run the trace script weekly (to catch any new datafiles created on the primary); should the need arise to activate the standby, the task should be a fairly simple one.
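
A minimal sketch of such a scheduled job, assuming a hypothetical wrapper script that runs the trace SQL shown earlier through sqlplus (the script names, paths and Oracle home are assumptions):


# crontab entry on the standby host: regenerate the controlfile script every Sunday at 02:00
0 2 * * 0 /home/oracle/recov_inits/gen_ctlfile.sh

# gen_ctlfile.sh (hypothetical; adjust SID, home and script path for your site)
#!/bin/sh
export ORACLE_SID=stby_db
export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
$ORACLE_HOME/bin/sqlplus -S "/ as sysdba" @/home/oracle/recov_inits/backup_ctlfile.sql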

David Fitzjarrell
David Fitzjarrell has more than 20 years of administration experience with various releases of the Oracle DBMS. He has installed the Oracle software on many platforms, including UNIX, Windows and Linux, and monitored and tuned performance in those environments. He is knowledgeable in the traditional tools for performance tuning – the Oracle Wait Interface, Statspack, event 10046 and 10053 traces, tkprof, explain plan and autotrace – and has used these to great advantage at the U.S. Postal Service, American Airlines/SABRE, ConocoPhillips and SiriusXM Radio, among others, to increase throughput and improve the quality of the production system. He has also set up scripts to regularly monitor available space and set thresholds to notify DBAs of impending space shortages before they affect the production environment. These scripts generate data which can also be used to trend database growth over time, aiding in capacity planning. He has used RMAN, Streams, RAC and Data Guard in Oracle installations to ensure full recoverability and failover capabilities as well as high availability, and has configured a 'cascading' set of DR databases using the primary DR databases as the source, managing the archivelog transfers manually and monitoring, through scripts, the health of these secondary DR databases. He has also used ASM, ASMM and ASSM to improve performance and manage storage and shared memory.
