Synopsis. Oracle 10g offers significant
enhancements that help ensure the high availability of any Oracle database, as
well as improvements in the database disaster recovery arena. This article -
the final in a series - focuses on new functionality provided by the Data
Guard and LogMiner tool suites.
The previous
article in this series delved into the new Oracle 10g Logical Flashback
features that make it simpler for a DBA to recover database objects within
ranges of prior versions, transactions, and logical operations - even when a
table has been dropped in error. Oracle 10g also added some welcome enhancements
to two sets of utilities that can be invaluable on a DBA's toolbelt: LogMiner,
a set of utilities for mining redo and undo information from online and
archived redo logs; and Data Guard, a toolset that allows a DBA to
create a highly available clone of a production database for disaster recovery
purposes.
LogMiner Enhancements
The LogMiner tool suite lets a DBA scan through online redo
logs or archived redo logs to reconstruct the actual DML SQL statements that
generated the redo change entries on the database server. LogMiner can
also return the SQL statements needed to undo the DML that was issued.
Though I have had limited opportunity to utilize LogMiner in the past, almost
every time I have used it, it has come through with the information I needed to
help our development team or application users recover from a potentially
disastrous situation.
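For readers who have not used LogMiner before, here is a minimal sketch of a
typical session - register a log file, start the miner against the database's
online catalog, and query V$LOGMNR_CONTENTS - with a hypothetical log file
name and schema:

    -- Register one archived log (file name is hypothetical), then start
    -- the miner using the database's own data dictionary
    BEGIN
      DBMS_LOGMNR.ADD_LOGFILE(
        LOGFILENAME => '/u01/oradata/arch/arch_1_1042.arc',
        OPTIONS     => DBMS_LOGMNR.NEW);
      DBMS_LOGMNR.START_LOGMNR(
        OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
    END;
    /

    -- Pull the reconstructed redo SQL and its compensating undo SQL
    -- (HR is a hypothetical schema)
    SELECT sql_redo, sql_undo
      FROM v$logmnr_contents
     WHERE seg_owner = 'HR'
       AND operation IN ('INSERT', 'UPDATE', 'DELETE');

    -- Release the session's resources when finished
    BEGIN
      DBMS_LOGMNR.END_LOGMNR;
    END;
    /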
However, LogMiner did have a few drawbacks. Even with the
Oracle Enterprise Manager user interface, it could take some wrangling to get
LogMiner to return the information needed for recovery, and it did not
support retrieval of data from columns with Large Object (LOB) datatypes. The
good news is that Oracle 10g has enhanced the LogMiner tool suite to overcome
many of these issues:
Automated Determination of Needed Log Files. Prior to
Oracle 10g, one of the more tedious tasks before initiating a LogMiner
operation was determining which archived redo logs were appropriate targets
for mining. Since one of our production databases had rather large redo log
file members (128MB), limiting the number of log files scanned was important
to conserve server resources - especially processor resources.
I usually handled this by querying the V$ARCHIVED_LOG view to
determine which archived redo log files might fulfill my LogMiner query based
on their start and end times, and then registering just those log files with
the DBMS_LOGMNR.ADD_LOGFILE procedure, as shown in the sketch below. Oracle
10g greatly simplifies this by scanning the control file of the target
database to determine which redo logs will fulfill the requested timeframe or
SCN range.
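A rough sketch of that manual approach follows; the file names and timestamps
are, of course, hypothetical:

    -- Find the archived logs whose time spans overlap the period of interest
    SELECT name
      FROM v$archived_log
     WHERE first_time <= TO_DATE('2004-06-15 14:00', 'YYYY-MM-DD HH24:MI')
       AND next_time  >= TO_DATE('2004-06-15 13:00', 'YYYY-MM-DD HH24:MI');

    -- Register only those files with the mining session
    BEGIN
      DBMS_LOGMNR.ADD_LOGFILE('/u01/oradata/arch/arch_1_1042.arc', DBMS_LOGMNR.NEW);
      DBMS_LOGMNR.ADD_LOGFILE('/u01/oradata/arch/arch_1_1043.arc', DBMS_LOGMNR.ADDFILE);
    END;
    /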
Listing
4.1 shows an example of the new CONTINUOUS_MINE directive of the
DBMS_LOGMNR.START_LOGMNR procedure, which directs Oracle to determine which
log files are needed based on the ranges specified. It also illustrates that
DBMS_LOGMNR.START_LOGMNR can be executed multiple times within a LogMiner
session to effectively limit the range of log files required for the mining
request.
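As a minimal sketch of the directive in action (again with hypothetical
timestamps):

    BEGIN
      DBMS_LOGMNR.START_LOGMNR(
        STARTTIME => TO_DATE('2004-06-15 13:00', 'YYYY-MM-DD HH24:MI'),
        ENDTIME   => TO_DATE('2004-06-15 14:00', 'YYYY-MM-DD HH24:MI'),
        OPTIONS   => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG
                   + DBMS_LOGMNR.CONTINUOUS_MINE);
      -- START_LOGMNR may be called again in the same session with a
      -- narrower range to further limit the log files scanned
    END;
    /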
How Pretty Is My SQL? As nice as it is to be able to
see the actual SQL redo and undo statements, I have found myself frustrated by
how difficult they can be to parse visually or to re-execute. In addition
to the existing NO_SQL_DELIMITER directive, which removes semicolons from the
final display, Oracle 10g adds a new directive, PRINT_PRETTY_SQL, that
formats the SQL into a more legible layout. Another new directive,
NO_ROWID_IN_STMT, omits the ROWID clause from the reconstructed SQL when the
DBA intends to reissue the generated statements - especially when they will
be executed against a different database with different ROWIDs. See Listing
4.2 for examples of these directives.
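As a quick sketch, the option flags are simply added together in the OPTIONS
parameter when the mining session is started:

    BEGIN
      DBMS_LOGMNR.START_LOGMNR(
        OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG
                 + DBMS_LOGMNR.PRINT_PRETTY_SQL    -- legible SQL_REDO / SQL_UNDO
                 + DBMS_LOGMNR.NO_SQL_DELIMITER    -- strip trailing semicolons
                 + DBMS_LOGMNR.NO_ROWID_IN_STMT);  -- omit ROWID predicates
    END;
    /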
Expanded Support for Additional Datatypes. LogMiner
now supports retrieval of SQL redo and undo information for Large Objects
(LOBs), including multibyte CLOBs and NCLOBs. Data stored in Index-Organized
Tables (IOTs) is now also retrievable, so long as the IOT does not contain a
LOB.
Storing the LogMiner Data Dictionary in Redo Logs.
LogMiner needs access to the database's data dictionary so that it can
make sense of the redo entries stored in the log files. Prior to Oracle 10g,
only two options were available: the database's own data dictionary could be
used, as long as the database instance was accessible; alternatively, the
LogMiner data dictionary could be stored in a flat file created by the
DBMS_LOGMNR_D.BUILD procedure. The flat-file option offers the advantage of
being able to transport the data dictionary file and copies of the database's
log files to another, possibly more powerful or more available, server for
LogMiner analysis. However, it does take some extra time and consumes
significant resources while the flat file is created.
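For reference, a flat-file dictionary build looks roughly like this:

    -- The directory is hypothetical; in Oracle 10g the instance's
    -- UTL_FILE_DIR parameter must include it for the build to succeed
    BEGIN
      DBMS_LOGMNR_D.BUILD(
        DICTIONARY_FILENAME => 'logmnr_dict.ora',
        DICTIONARY_LOCATION => '/u01/app/oracle/logmnr',
        OPTIONS             => DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
    END;
    /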
Oracle 10g now offers a melding of these two options: the
capability to store the LogMiner data dictionary within the active
database's redo log files. The advantage of this approach is that the data
dictionary is guaranteed to be consistent, and it is faster than creating the
flat-file version of the data dictionary. The resulting log files can then be
specified as the source of the LogMiner data dictionary during mining
operations. Listing 4.3 shows an example of how to implement this option.
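In rough outline, the dictionary is captured with DBMS_LOGMNR_D.BUILD and
later consumed via the DICT_FROM_REDO_LOGS directive:

    -- Capture the dictionary into the redo stream; the database must be
    -- in ARCHIVELOG mode with supplemental logging enabled
    BEGIN
      DBMS_LOGMNR_D.BUILD(
        OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
    END;
    /

    -- Later, after adding the log files that contain the dictionary
    -- (or using CONTINUOUS_MINE), point the mining session at them
    BEGIN
      DBMS_LOGMNR.START_LOGMNR(
        OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS);
    END;
    /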