Top 10 New Features for Oracle 11g Data Guard – Part One

Disaster recovery is an ongoing concern for Oracle database administrators. Configuring systems for recoverability is a major area of responsibility, and in the Oracle world there are many solutions available to us, among them Real Application Clusters (RAC, and potentially extended RAC), Oracle Streams and Oracle Data Guard. When a determination is made that a “hot” standby database is the best preventative measure, Data Guard is the Oracle product that should come to mind first.

Data Guard is Oracle’s standby database solution, and it is a technology that has been around in various iterations all the way back to Oracle 7.

However, in Oracle 11g (across both 11gR1 and 11gR2), Oracle introduced a long list of enhancements to the Data Guard product, which continues to gain in both popularity and functionality.

Out of the many new features, I have selected my top 10 favorite ones from both versions. In this article, I will be reviewing numbers ten through six. Next month I will finish off with favorite features number five through one.

10. DB_LOST_WRITE_PROTECT Parameter

If the I/O subsystem ever acknowledges the completion of a write that was not actually made correctly on the persistent storage device, the I/O system can return a stale version of that data block to a subsequent query. If the stale data is then used to update other blocks in the database, the corruption spreads.

The DB_LOST_WRITE_PROTECT parameter has been introduced to protect the primary and standby databases from such problems. Setting this parameter allows the database to record buffer cache block reads into the online redo logs to use in detecting lost writes.

The possible values for this new parameter are NONE, where no lost write detection is done; TYPICAL, where detection is done only for read/write tablespaces; and FULL, where detection is done for both read-only and read/write tablespaces. Setting the value to TYPICAL could result in as much as a 5 to 10% performance hit, and FULL could go as high as 20%, so the performance cost should be taken into consideration.
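A minimal sketch of enabling lost write detection (the parameter is dynamic, and for the redo comparison to work it should be set on both the primary and the standby databases):

-- On the primary and on the standby
ALTER SYSTEM SET DB_LOST_WRITE_PROTECT = TYPICAL SCOPE=BOTH;

-- Verify the current setting (SQL*Plus)
SHOW PARAMETER db_lost_write_protect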

When the redo is applied on the standby database, the block SCNs recorded in the redo stream are compared with the SCNs of the corresponding blocks on the standby. If the lower number belongs to the primary database, the lost write occurred on the primary and an ORA-752 error is recorded in the standby's alert log. If the lower number belongs to the standby database, the lost write occurred on the standby and an ORA-600 [3020] error is recorded.

If a lost write is detected on the primary system, the best solution is to fail over to your standby and recreate the primary, as sketched below. If it is detected on the standby system, you should recreate the standby database, or at least the affected files.
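For a physical standby, a manual failover boils down to the following SQL sequence, sketched here for illustration only (the Data Guard broker offers an equivalent single-command failover):

-- On the standby: apply any remaining redo and end managed recovery
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;

-- Convert the standby to the primary role
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;

-- Open the new primary
ALTER DATABASE OPEN;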

9. New Settings to Improve Performance

Performance of the standby databases in a Data Guard configuration was one of the areas that Oracle focused on in 11g. To that end, 11gR1 rounded out a set of attributes for the LOG_ARCHIVE_DEST_n parameter that help improve the performance of transferring redo data to the standby database and of the apply-side activities. These include the following (a combined example appears after the list):

NET_TIMEOUT (actually a 10g attribute, but worth mentioning): This attribute of the LOG_ARCHIVE_DEST_n parameter allows us to override the default network timeout interval on the system hosting the primary database. If this value has not been specified, the primary database could potentially stall for the length of the default network timeout. Setting a smaller value lets the primary database mark a destination as failed within a tighter time frame.

MAX_CONNECTIONS (another 10g attribute worth mentioning): This attribute allows multiple archiver processes to transfer the contents of a large redo log in parallel, which helps speed up the transfer of redo data when a gap has been detected on the standby system. The default is 1 and the maximum is 5. For this setting to take effect, the LOG_ARCHIVE_MAX_PROCESSES parameter must be equal to or higher than the MAX_CONNECTIONS setting.

COMPRESSION (11gR1): This attribute compresses the redo data before transmitting it to the standby database, which can significantly reduce the amount of data crossing the network and thus the bandwidth used. By default the setting is disabled, and enabling it requires the Advanced Compression Option.
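Here is a sketch of a single archive destination combining all three attributes (the service and unique name stbydb are hypothetical):

ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
  'SERVICE=stbydb ASYNC
   NET_TIMEOUT=30
   MAX_CONNECTIONS=3
   COMPRESSION=ENABLE
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
   DB_UNIQUE_NAME=stbydb' SCOPE=BOTH;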

8. Using DBMS_SCHEDULER to Run Jobs on a Logical Standby Database (11g R1)

One of the primary reasons that DBAs set up logical standby databases has been the ability to offload work from the primary database onto the standby database.

When a job is created on the primary database using DBMS_SCHEDULER, that job defaults to running on the local (primary) database. If the job is created on the standby database, it is set to run on the standby system.

Scheduler jobs created on the primary database are transferred to the standby database, but they are not set up to automatically run on the standby database.

Using the DBMS_SCHEDULER.SET_ATTRIBUTE procedure, we can now create a job on the primary database that is configured to run on the logical standby database: set the new database_role attribute to either 'PRIMARY' or 'LOGICAL STANDBY'. If you want a job that will run in both roles, you have to create two copies of the job.

The view DBA_SCHEDULER_JOB_ROLES will show you which jobs have been set up to run in which database role.

The full syntax is as follows (a minimal sketch; the job name is hypothetical):
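BEGIN
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'SCOTT.OFFLOAD_REPORTS',   -- hypothetical job created earlier
    attribute => 'database_role',
    value     => 'LOGICAL STANDBY');
END;
/

-- Confirm which jobs will run in which database role
SELECT job_name, database_role
FROM   dba_scheduler_job_roles;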

7. LOG_AUTO_DEL_RETENTION_TARGET (11gR1)

Once the SQL Apply process on the logical standby database has finished applying the contents of a transferred redo log, the archived copy of the processed log is deleted from the archive directory. This behavior is governed by the LOG_AUTO_DELETE parameter, which defaults to TRUE.

Generally this is not a huge problem. However, if an incomplete recovery or flashback operation is performed on the logical standby database, the SQL Apply process stops and has to backtrack, re-fetching all of the required archive files from the primary in order to carry out the flashback or incomplete recovery. Needless to say, this can take a fair amount of time depending on how far back we are trying to take the logical standby database.

You can now specify a retention time for the logical standby database to keep the archived copies of the redo logs. The default retention time is 1440 minutes (24 hours).

To change the setting to something like three days (4320 minutes), you would use the following syntax (note that SQL Apply must be stopped before apply parameters can be changed):

EXECUTE DBMS_LOGSTDBY.APPLY_SET('LOG_AUTO_DEL_RETENTION_TARGET', '4320');

You can use the APPLY_UNSET procedure to return this parameter to its default value:
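EXECUTE DBMS_LOGSTDBY.APPLY_UNSET('LOG_AUTO_DEL_RETENTION_TARGET');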

These settings apply only if the files are being archived to a location other than the fast (flash) recovery area. If the fast recovery area is being used, the archived files are retained according to the fast recovery area's retention target instead.

6. Duplicating the Standby Database From an Active Database (11gR1)

Technically this could be listed as an RMAN new feature, because RMAN can now duplicate a database from an actively running target database even outside a standby context, but because the syntax includes an option specifically for a standby database, I think it is worth mentioning in this article.

Once your primary database is fully prepared to be duplicated, running the duplicate from the active database makes it faster to create the standby database. The time savings come from not having to restore an old backup and then apply all of the archived and online redo logs to bring the new standby database in sync with the primary database.

Additionally, RMAN can now accept, directly in the DUPLICATE command, the parameter values to be incorporated into the spfile for the new database.

Here is a sample of the RMAN commands (assuming you are connected to the target and auxiliary instances):


DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE
SPFILE
PARAMETER_VALUE_CONVERT 'primdb', 'stbydb'
SET DB_UNIQUE_NAME='stbydb'
SET DB_FILE_NAME_CONVERT='/u01/primdb/','/u02/stbydb/'
SET LOG_FILE_NAME_CONVERT='/u01/primdb/','/u01/stbydb/'
SET CONTROL_FILES='/u01/primdb/controlfile1.ctl'
SET LOG_ARCHIVE_MAX_PROCESSES='7'
SET FAL_CLIENT='stbydb'
SET FAL_SERVER='primdb'
SET STANDBY_FILE_MANAGEMENT='auto'
SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(primdb,stbydb)'
SET LOG_ARCHIVE_DEST_2='SERVICE=primdb SYNC VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) DB_UNIQUE_NAME=primdb';

You can still duplicate from backups if needed by simply leaving off the FROM ACTIVE DATABASE clause. When the RMAN DUPLICATE command runs against the active database, it has a slight impact on the performance of your target database, similar to the impact of running a hot backup. Also, to be able to use this feature, your database must be running in ARCHIVELOG mode.
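For comparison, a backup-based standby duplication can be as simple as the following sketch, assuming the auxiliary instance can reach the primary's backups:

DUPLICATE TARGET DATABASE FOR STANDBY;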

As you can probably tell, there are a huge number of new features for Data Guard that have been introduced by Oracle in 11g. Watch for my top five favorites in my next article.

Karen Reliford is an IT professional who has been in the industry for over 25 years. Karen's experience ranges from programming, to database administration, to Information Systems Auditing, to consulting and now primarily to sharing her knowledge as an Oracle Certified Instructor in the Oracle University Partner Network. Karen currently works for TransAmerica Training Management, one of the foremost Oracle Authorized Education Centers (OAEC) in the Oracle University North America region. TransAmerica Training Management offers official Oracle and PeopleSoft training in Coral Gables FL, Fayetteville AR, Albuquerque NM, Providence RI and San Juan PR. Karen has now been teaching Oracle for Oracle University for more than 15 years. Karen has attained her Certified Technical Trainer designation along with several Oracle certifications including OCP-DBA, OCP-Internet Developer, Oracle Expert - Oracle 10g RAC and Oracle Expert - Oracle Application Express (3.2). Additionally, Karen achieved her Oracle 10g Oracle Certified Master (OCM) in 2008. Karen was raised in Canada, and in November 2009 became a US Citizen. Karen resides in Columbus OH with her husband, Ron along with their 20 pets, affectionately referred to as the "Reliford Zoo".
