Leveraging Logical Standby Databases in Oracle 11g Data Guard

Synopsis. Oracle Data Guard still offers support for the venerable logical standby database in Oracle Database 11g. This article – the eighth in this ongoing series on Oracle Data Guard – investigates how data warehouse and data mart environments can effectively leverage logical standby database features while simultaneously providing a final destination when failover from a primary database is mandated during disaster recovery.

The prior article in this series explored how to implement Fast-Start Failover between an Oracle 11gR1 primary and physical standby database, including how to:

  • Configure the primary database and one physical standby database for Fast-Start Failover (FSF)
  • Activate, monitor, and relocate the Fast-Start Failover Observer (FSFO)
  • Insure against the loss of a single FSFO via Enterprise Manager Grid Control

This article will illustrate how to:

  • Create a logical standby database from an existing physical standby database
  • Manage application of redo entries to specific data within a logical standby database
  • Determine the extent of the “recoverability gap” in a logical standby database
  • Understand the implications of failing over to a logical standby database

Before I tackle all these tasks, however, I’ll review some of the benefits and disadvantages of logical standby databases as well as delve into the architecture of how redo data is applied to a logical standby database via SQL Apply.

Logical Standby Databases: Benefits

Even with all the new features that Oracle 11g adds for physical standby databases (e.g. Real Time Apply), a logical standby database still offers some interesting alternatives for offloading a reporting workload, protection against physical block corruption, patching, and even disaster recovery purposes:

Effective System Resource Use. A logical standby database can function as an independent production database that still accepts updated data and transactions from the primary database. However, unlike a physical standby database, a logical standby’s physical structure may be different (e.g. different tablespaces or datafiles). In addition, change data may be selectively applied to particular tables and indexes. It’s therefore possible for a logical standby database to offer data from a combination of sources: tables that are perfectly synchronized with the primary database, plus tables whose changed data has been excluded from application to the logical standby database, and even tables which exist only in the logical standby database.

Primary Database Workload Reduction. Since a logical standby database will capture change data for all supported datatypes – more on this later! – this also means that it’s possible to build materialized views, materialized view logs, and even additional indexes on existing production tables that would be prohibitively expensive to maintain against the same tables on the primary database. Logical standby databases are therefore excellent targets for data warehouse, data mart, or decision support applications that only need to read production data, but that could also utilize additional constructs (e.g. bitmap indexes) for speedier query processing. Logical standby databases are also excellent alternatives to databases created using basic replication because they’re much easier to maintain.

Increased Data Protection. Because a logical standby database’s datafiles aren’t an exact physical copy of the primary database’s datafiles, it offers protection against physical block corruption on the primary database because redo is applied using SQL Apply methods. And just as with a physical standby database, it’s also possible to delay DML that’s been generated on the primary database from being applied against the logical standby database by implementing delay features.
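
As a sketch of how such an apply delay might be configured on the primary database (the destination number and service name here are assumptions for illustration), the DELAY attribute of a LOG_ARCHIVE_DEST_n parameter specifies how many minutes to wait before shipped redo becomes eligible for application:

```sql
-- Hypothetical example: delay application of redo at standby LSBY by 240 minutes.
-- (Destination number and service name are illustrative assumptions.)
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=lsby DELAY=240 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=lsby';
```

Note that the delay is honored only when the standby is not applying redo in real time.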

Patching. Because it can assume the role of the primary database during a switchover operation, a logical standby database can facilitate extremely quick Oracle Database patching and upgrade operations. The patches are first applied to the original logical standby database, which then assumes the primary database role after switchover is complete. Once the original primary database is converted to a logical standby database, it can be upgraded as well.

Caveat DBA: Limitations of Logical Standby Databases

While logical standby databases have some excellent benefits, they do have some definite drawbacks as well, including restrictions on unsupported objects and datatypes, limited support for DDL and DCL commands, and what I believe are some glaring implications for failover situations:

Unsupported Objects. A logical standby database cannot accept updates for the following objects:

  • Any tables or sequences owned by SYS
  • Any tables that use table compression
  • Any tables that underlie a materialized view
  • Global temporary tables (GTTs)

Unsupported Data Types. In addition, any tables whose columns use the following unsupported data types cannot be updated in a logical standby:

  • Datatypes BFILE, ROWID, and UROWID
  • Any user-defined TYPEs
  • Multimedia data types like Oracle Spatial, ORDDICOM, and Oracle Text
  • Collections (e.g. nested tables, VARRAYs)
  • SecureFile LOBs
  • XMLType stored as object-relational
  • XMLType stored as binary XML

Data dictionary view DBA_LOGSTDBY_UNSUPPORTED_TABLE shows all tables that aren’t supported for update, and data dictionary view DBA_LOGSTDBY_UNSUPPORTED shows all tables that are not supported for update because of their datatypes (tables that use compression or are owned by SYS are already suppressed from this view).
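
A quick sketch of interrogating both views (the OE schema filter is just an example):

```sql
-- Which tables are unsupported for SQL Apply?
SELECT owner, table_name
  FROM dba_logstdby_unsupported_table
 ORDER BY owner, table_name;

-- Which columns carry the offending datatypes?
SELECT DISTINCT owner, table_name, column_name, data_type
  FROM dba_logstdby_unsupported
 WHERE owner = 'OE'
 ORDER BY table_name, column_name;
```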

Unsupported DDL and DCL Statements. Oracle 11gR1 doesn’t pass on several DDL and DCL statements from the primary database to a counterpart logical standby database, and most of these restrictions make perfect sense. For example, the ALTER DATABASE, ALTER SESSION, ALTER SYSTEM, and CREATE MATERIALIZED VIEW commands will not be passed from the primary to any of its logical standby database(s). Appendix C of the Oracle Database 11gR1 Data Guard Concepts and Administration Guide contains a complete list of these unsupported DDL commands.

Unique Row Identification Required. It’s possible that rows added to a target table in a logical standby database will not have the identical value for ROWID as their counterpart table in the primary database. Therefore, all logical standby target tables must enforce uniqueness via either a PRIMARY KEY constraint or a UNIQUE constraint. If row uniqueness is maintained within an application instead of using database referential integrity constraints, then the PRIMARY KEY constraint can be switched to a state of RELY DISABLE. This essentially places the constraint into NOVALIDATE mode and eliminates the overhead of primary key maintenance on the primary database. Data dictionary view DBA_LOGSTDBY_NOT_UNIQUE shows all tables that don’t contain sufficiently-unique rows.
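
A minimal sketch of declaring such a constraint (the table and column names here are illustrative):

```sql
-- Give SQL Apply a reliable row identifier while leaving uniqueness
-- enforcement to the application, avoiding primary key maintenance
-- overhead on the primary database:
ALTER TABLE oe.subcustomers
  ADD CONSTRAINT subcustomers_pk
  PRIMARY KEY (customer_id)
  RELY DISABLE;
```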

To demonstrate some of these restrictions, I’ll construct some new database objects in the primary database:

  • Table OE.CUST_PHONE_NBRS contains a list of customer phone numbers stored as instances of a new PL/SQL object type, OE.I18L_PHONE_NBR. Because this table contains a user-defined TYPE it will automatically become an unsupported table in the logical standby database.
  • Table OE.SUBCUSTOMERS, on the other hand, is fully supported for DML in the logical standby database; however, I’ve also specified its primary key constraint with the RELY DISABLE directive so I can demonstrate how the logical standby database handles a duplicate row that sneaks past application logic.

Listing 8.1 shows the DDL and DML I used to create and populate the tables in the primary database, and Listing 8.2 shows the results of queries against views DBA_LOGSTDBY_UNSUPPORTED_TABLE and DBA_LOGSTDBY_UNSUPPORTED after these objects have been created.

Disaster Recovery. A logical standby database can still be a target of a switchover or failover from the primary database. However, since there are no guarantees that all data will be present in the logical standby database, it cannot be used for the complete recovery of the primary database. Further, since the logical standby database’s physical structure won’t be identical to the primary database’s physical structure, a logical standby database cannot be used as an alternate target for RMAN backup processing. Finally, perhaps the most glaring issue with failing over to a logical standby database is that once failover is complete, all other standby databases must either be reinstated or recreated.

Applying Redo Via SQL Apply

The main difference between physical and logical standby databases is the manner in which changes to data are applied. For a logical standby database, the change vectors stored in incoming redo logs are transformed back into DML statements using LogMiner technology, and those transformed DML statements are then applied to the appropriate objects. This concept is collectively termed SQL Apply. It’s quite elegant, and it uses several parallel execution servers and background processes that handle the application of changed data. Please see Figure 8.1 for a corresponding flowchart of the steps listed below:

Step 1: Acceptance. The READER process reads change vectors from the incoming archived redo logs.

Step 2: Conversion. PREPARER processes then convert each change vector into either table changes or Logical Change Records (LCRs). One redo log file may be handled by one or more PREPARERs, which then stage the LCRs in the LCR Cache in the shared pool of the logical standby database’s SGA. Note that these LCRs are not really representative of any specific transactions … yet.

Step 3: Assembly. The BUILDER process then assembles transactions from groups of individual LCRs.

Step 4: Analysis. Next, the ANALYZER process reviews all LCRs, eliminates any transactions that are unnecessary, and determines which transactions (if any) have dependencies (e.g. parent-child relationships). It also sorts them so that they’ll be applied in an appropriate order to avoid violation of referential integrity constraints.

Step 5: Coordination. The COORDINATOR process (LSP):

  • Assigns transactions to different APPLIER processes;
  • Watches for transaction dependencies;
  • Coordinates scheduling between the APPLIERs; and
  • Authorizes (but doesn’t perform!) COMMITs of the changes to the logical standby.

Step 6: Application. Finally, multiple APPLIER processes:

  • Apply LCRs to the logical standby;
  • Notify the COORDINATOR when a transaction has unresolved dependencies, so that the COORDINATOR can schedule dependent transactions in an order that doesn’t violate referential integrity; and
  • COMMIT the transactions.
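
The division of labor described above is easy to observe on the logical standby database itself; a sketch:

```sql
-- Run on the logical standby: one row per SQL Apply process
-- (READER, PREPARER, BUILDER, ANALYZER, COORDINATOR, APPLIER)
SELECT type, status
  FROM v$logstdby_process
 ORDER BY type;
```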

Creating a Logical Standby Database

Creating a logical standby database is a relatively straightforward process. Note that while I could do this using Enterprise Manager Grid Control’s Add Standby Database wizard, I’ve used manual methods here to better illustrate all that’s involved in its creation:

Step 1: Create a Candidate Physical Standby Database. First, I’ll create a new physical standby database named LSBY on my standby database server that I’ll use as the eventual target for my logical standby database. I’ll utilize the same methodology to create the new database as I did in the first article in this series.

Step 2: Halt Redo Apply on Candidate Physical Standby Database. Once the candidate physical standby database has been created, I’ll need to halt all redo apply processes. This ensures that no change data will be applied to the candidate physical standby until I complete the creation of the LogMiner dictionary in an upcoming step:

-- Execute from SQL*Plus connected to LSBY database:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

Step 3: Prepare the Primary Database to Support Logical Standby Database Requirements. Next, I’ll modify the initialization parameters of the primary database to prepare it for the eventual transformation of the candidate physical standby database to a logical standby database:

-- Execute from SQL*Plus connected to ORCL_PRIMARY database:

-- Set up for later role transition to logical standby
SQL> ALTER SYSTEM SET log_archive_dest_3 =
  2  'LOCATION=/u01/app/oracle/oradata/archivelog/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=lsby';

-- Reduce the chance of ORA-01555 (snapshot too old) during LogMiner dictionary creation
SQL> ALTER SYSTEM SET undo_retention=3600;

Step 4: Build a LogMiner Dictionary. A LogMiner dictionary is required so that the logical standby database’s Log Apply Services can make sense of incoming change vectors and translate them into the appropriate DML, DDL and DCL statements. Executing procedure DBMS_LOGSTDBY.BUILD creates the LogMiner dictionary, enables supplemental logging, and then waits for any existing transactions on the primary database to complete:

-- Execute from SQL*Plus connected to ORCL_PRIMARY database:
SQL> EXEC DBMS_LOGSTDBY.BUILD;

Step 5: Convert the Physical Standby Database to a Logical Standby Database. At this point, the candidate physical standby database is ready to be converted to a logical standby database. I’ll complete its transformation with a few simple SQL commands issued against the candidate physical standby database:

-- Execute from SQL*Plus connected to LSBY database:
SQL> ALTER DATABASE RECOVER TO LOGICAL STANDBY lsby;

-- Shut down and remount the LSBY database:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

Step 6: Open the Logical Standby Database. All that’s left to do is to open the new logical standby database and initiate SQL Apply processing against it:

-- Reset the LSBY logical standby database's incarnation ...
SQL> ALTER DATABASE OPEN RESETLOGS;

-- ... and activate it as a logical standby by starting SQL Apply processes
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

Step 7: Verify Logical Standby Database Configuration and Performance. At this point, it’s crucial to verify that archived redo logs are being transmitted to and applied at the new logical standby database. I’ve captured the appropriate queries to do this in Listing 8.3; alternatively, I could simply monitor the logical standby database’s alert log to ensure that redo is indeed being transmitted to and applied against that database. I’ve highlighted several sections of database LSBY’s alert log that show this to be the case.

Step 8: Enroll the Logical Standby Database With Data Guard Broker. Finally, I’ll register the LSBY database with Data Guard Broker with a few simple commands from a DGMGRL session:

DGMGRL> ADD DATABASE lsby AS CONNECT IDENTIFIER IS 'lsby';
DGMGRL> ENABLE DATABASE lsby;

In Listing 8.4, I’ve laid out the alert logs from databases ORCL_PRIMARY and LSBY, concentrating on the eventual transformation of the LSBY database from a physical standby to a logical standby database. Listing 8.5 shows the resulting Data Guard Broker configuration and corresponding settings for the primary (ORCL_PRIMARY), physical standby (ORCL_STDBY1), and logical standby (LSBY) databases just after the transformation of LSBY into a logical standby database was complete.

SQL Apply Filtering

As I previously described in the list of logical standby database benefits, a logical standby database doesn’t have to accept and apply all incoming DML; in fact, DML can be filtered (and thus ignored) based on either:

  • the specific target of a SQL statement, e.g. a table; or
  • the result of an error; or even
  • the transaction ID of a problem transaction.

To illustrate this feature, I’ll use the DBMS_LOGSTDBY.SKIP procedure to construct and apply a simple SQL Apply filter that prohibits the application of any DML against the OE.SUBCUSTOMERS table I built earlier. (Note that SQL Apply must be stopped before a new skip rule can be registered, and then restarted afterward.)

SET SERVEROUTPUT ON
BEGIN
    DBMS_LOGSTDBY.SKIP(
         stmt => 'DML'
        ,schema_name => 'OE'
        ,object_name => 'SUBCUSTOMERS'
        ,proc_name => NULL
    );
EXCEPTION
    WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE('Failure during setup of DML restrictions on OE.SUBCUSTOMERS: ' || SQLERRM);
END;
/

If I now issued DML against the OE.SUBCUSTOMERS table on the primary database – for example:

UPDATE oe.subcustomers
   SET last_name = 'Smith-Patel'
 WHERE MOD(customer_id, 5) = 0;

COMMIT;

then even though it’s been successfully committed on the primary database, Oracle 11gR1 Data Guard would prohibit the transaction from being propagated to the logical standby database.
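
To confirm that the skip rule is actually registered on the logical standby database, I can interrogate the data dictionary; a sketch:

```sql
-- Run on the logical standby: list all registered SQL Apply skip rules
SELECT *
  FROM dba_logstdby_skip;
```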

Managing SQL Apply Filters Through DGMGRL. Alternatively, I could have also implemented this SQL Apply “anti-DML” filter against table OE.SUBCUSTOMERS through the Data Guard Broker command-line interface (DGMGRL) by setting appropriate database properties for the LSBY logical standby database. Table 8.1 describes the available SQL Apply filtering that DGMGRL can activate or deactivate:

Table 8.1. SQL Apply Properties

  • LsbyASkipCfgPr – Tells SQL Apply to skip any SQL statements that should be ignored by the logical standby database. Uses the same arguments as would be supplied to procedure DBMS_LOGSTDBY.SKIP.
  • LsbyASkipErrorCfgPr – Tells SQL Apply to halt SQL Apply processing on the logical standby database when a particular error is encountered. Uses the same arguments as would be supplied to procedure DBMS_LOGSTDBY.SKIP_ERROR.
  • LsbyASkipTxnCfgPr – Tells SQL Apply to ignore a problematic transaction as identified by its transaction ID (XIDSQN). Uses the same arguments as would be supplied to procedure DBMS_LOGSTDBY.SKIP_TRANSACTION.
  • LsbyDSkipCfgPr – Removes an existing SQL Apply skip specification created via LsbyASkipCfgPr.
  • LsbyDSkipErrorCfgPr – Removes an existing SQL Apply skip error criterion built via LsbyASkipErrorCfgPr.
  • LsbyDSkipTxnCfgPr – Reverses the actions of an existing setting for LsbyASkipTxnCfgPr.
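
As a sketch of how such a property might be set from DGMGRL (the property value format below mirrors the arguments to DBMS_LOGSTDBY.SKIP, but the exact quoting shown here is an assumption; verify it against the Data Guard Broker documentation):

```
DGMGRL> EDIT DATABASE lsby SET PROPERTY LsbyASkipCfgPr = '"DML","OE","SUBCUSTOMERS"';
DGMGRL> SHOW DATABASE lsby LsbyASkipCfgPr;
```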

Managing SQL Apply Filters Through Enterprise Manager Grid Control. SQL Apply settings can also be viewed and manipulated from the Edit Standby Database Properties panel in EM Grid Control.

ALTER DATABASE GUARD: Guarding Against Undesired Changes

By default, a logical standby database’s data is locked against any unwarranted or accidental modification (except by the SYS user). Oracle provides the ALTER DATABASE GUARD <directive> command to modify the permitted limits of modification to logical standby data:

Table 8.2. Values for the ALTER DATABASE GUARD Directive

  • ALL – (Default) No users (except SYS) are allowed to make changes to logical standby data.
  • STANDBY – Users may modify logical standby data except in tables being maintained by the local SQL Apply (LSPn) processes.
  • NONE – Normal security rules apply; any user with appropriate privileges can modify logical standby data.
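
A sketch of loosening and then re-tightening the guard, and of checking its current setting:

```sql
-- Run on the logical standby as SYS: allow changes to tables
-- that SQL Apply is not maintaining
ALTER DATABASE GUARD STANDBY;

-- Verify the current guard setting (returns ALL, STANDBY, or NONE)
SELECT guard_status FROM v$database;

-- Restore the default, fully guarded state
ALTER DATABASE GUARD ALL;
```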

Managing Remote Archived Log File Retention

If the logical standby has been set up using Oracle 11g recommended best practices, then a Flash Recovery Area (FRA) was defined during its creation, and any archived redo logs that have been applied to the logical standby will be automatically deleted whenever sufficient space pressure is detected on the FRA. As an alternative to the FRA, however, Oracle 11g’s DBMS_LOGSTDBY package offers two parameters to control archived redo log cleanup for a logical standby. Procedures APPLY_SET and APPLY_UNSET use these parameters to determine when cleanup should occur:

  • LOG_AUTO_DELETE: If set to TRUE (the default), Oracle deletes each archived redo log once more than LOG_AUTO_DEL_RETENTION_TARGET minutes have elapsed since all of that log’s transactions were applied.
  • LOG_AUTO_DEL_RETENTION_TARGET: Specifies how long (in minutes) an archived redo log is retained before it becomes eligible for deletion. It defaults to 1440 minutes (24 hours).
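
A sketch of adjusting these parameters via DBMS_LOGSTDBY.APPLY_SET follows; note that SQL Apply generally must be stopped before apply parameters can be changed, and the 720-minute value is an arbitrary illustration:

```sql
-- Run on the logical standby:
ALTER DATABASE STOP LOGICAL STANDBY APPLY;

-- Retain applied archived redo logs for 12 hours instead of the 24-hour default
EXECUTE DBMS_LOGSTDBY.APPLY_SET('LOG_AUTO_DEL_RETENTION_TARGET', 720);

ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
```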

Scheduling Jobs Via DBMS_SCHEDULER

Interestingly, it’s also possible to create a scheduled job with DBMS_SCHEDULER on a logical standby database with a few (obvious) caveats. First, a scheduled job can exist on either a primary database or a logical standby database, but a scheduled job cannot exist on a physical standby database because it would then be possible for a scheduled task to run on both the primary and its physical standby counterpart – a most undesirable outcome!

However, whenever a primary database switches roles with its counterpart physical standby database, the new primary database will automatically resume execution of tasks previously scheduled on the original primary database. DBMS_SCHEDULER uses the DATABASE_ROLE attribute of a job (modified with the SET_ATTRIBUTE procedure) to determine whether it should run on either (a) the primary database, or (b) a specified logical standby database. Since there can only be one primary database in an Oracle 11g Data Guard configuration, there’s no possibility of conflict between the scheduled tasks.
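
A sketch of pinning a job to the logical standby role via DBMS_SCHEDULER.SET_ATTRIBUTE (the job name here is hypothetical):

```sql
-- Run this job only while the database holds the logical standby role:
BEGIN
    DBMS_SCHEDULER.SET_ATTRIBUTE(
         name      => 'OE.REFRESH_REPORTING_MVS'   -- hypothetical job name
        ,attribute => 'database_role'
        ,value     => 'LOGICAL STANDBY'
    );
END;
/
```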

Logical Standby Database Metadata

Finally, Oracle 11g includes several dynamic views (see Table 8.3) and data dictionary views (see Table 8.4) that provide status information and performance statistics for logical standby databases. I’ve provided several useful queries against these views formatted for SQL*Plus in script MonitorLSBY.sql.

Table 8.3. Logical Standby Databases: Views for Monitoring Performance

  • V$LOGSTDBY_PROCESS – Shows the status of the Log Apply Services background processes.
  • V$LOGSTDBY_STATS – Summarizes the logical standby database’s “running state.”
  • V$LOGSTDBY_PROGRESS – Shows which SCNs have already been applied via Log Apply Services.
  • V$LOGSTDBY_TRANSACTION – Lists all logical standby transactions that are currently in progress.
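
For example, the “recoverability gap” mentioned earlier can be estimated from V$LOGSTDBY_PROGRESS; a sketch:

```sql
-- Run on the logical standby: how far behind is SQL Apply?
SELECT applied_scn,
       latest_scn,
       (latest_scn - applied_scn) AS scn_gap,
       applied_time,
       latest_time
  FROM v$logstdby_progress;
```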

Table 8.4. Logical Standby Databases: Data Dictionary Views

  • DBA_LOGSTDBY_UNSUPPORTED – Lists tables unsupported for logical standby because at least one column has an unsupported datatype.
  • DBA_LOGSTDBY_UNSUPPORTED_TABLE – Lists any tables that are unsupported for logical standby.
  • DBA_LOGSTDBY_NOT_UNIQUE – Lists tables unsupported for logical standby because they don’t have unique row identifiers (e.g. a primary or unique key).
  • DBA_LOGSTDBY_LOG – Shows redo logs registered for logical standby redo application.
  • DBA_LOGSTDBY_SKIP – Displays the skip rules that Log Apply Services will enforce.
  • DBA_LOGSTDBY_SKIP_TRANSACTION – Displays which transactions will be skipped because logical standby filters are in place.
  • DBA_SCHEDULER_JOB_ROLES – Shows which Data Guard roles are specific to a scheduled job.

Next Steps

In the final article in this series, I’ll put everything I’ve discussed about Oracle 11g Data Guard into practice by demonstrating how to configure a primary database and a physical standby database within an Oracle 11g Real Application Clusters environment.

References and Additional Reading

While I’m hopeful that I’ve given you a thorough grounding in the technical aspects of the features I’ve discussed in this article, updated documentation may well have been released since this article was published. I therefore strongly suggest that you take a close look at the corresponding Oracle documentation on these features to obtain a crystal-clear understanding before attempting to implement them in a production environment. Please note that I’ve drawn upon the following Oracle Database 11g documentation for the deeper technical details of this article:

B28279-02 Oracle Database 11g New Features Guide

B28294-03 Oracle Database 11g Data Guard Concepts and Administration

B28295-03 Oracle Database 11g Data Guard Broker

B28320-01 Oracle Database 11g Reference Guide

B28419-02 Oracle Database 11g PL/SQL Packages and Types Reference

Also, the following MetaLink documentation helps clarify this feature set:

728053.1 Configure RMAN to Purge Archivelogs After Applied On Standby


Jim Czuprynski
Jim Czuprynski has accumulated over 30 years of experience during his information technology career. He has filled diverse roles at several Fortune 1000 companies in those three decades - mainframe programmer, applications developer, business analyst, and project manager - before becoming an Oracle database administrator in 2001. He currently holds OCP certification for Oracle 9i, 10g and 11g. Jim teaches the core Oracle University database administration courses on behalf of Oracle and its Education Partners throughout the United States and Canada, instructing several hundred Oracle DBAs since 2005. He was selected as Oracle Education Partner Instructor of the Year in 2009. Jim resides in Bartlett, Illinois, USA with his wife Ruth, whose career as a project manager and software quality assurance manager for a multinational insurance company makes for interesting marital discussions. He enjoys cross-country skiing, biking, bird watching, and writing about his life experiences in the field of information technology.
