Striking Gold with LogMiner – Part 1: Getting Started

Oracle’s LogMiner utility has been around for several years
now, and it originally appeared as a brand-new feature rather than an
improvement of something else. Each major release of Oracle typically ushers
in a new set of features, some of which are genuinely useful and can make your
job as a DBA easier. The “buzz” on LogMiner has worn off. Part of its
initial allure was its ability to help DBAs recover data by generating SQL
“undo” statements from the online (and archived) redo logs. Part of what makes LogMiner
less appealing to use is its interface and its reliance on executing a package
that takes a laundry list of options.

The purpose of this series is to help demystify some of the
procedures, illustrate some examples of using LogMiner, and recommend cases where
LogMiner should be a DBA’s primary tool for recovering lost data.

Let’s look at an outline of some setup and initialization
steps you can use to make LogMiner more palatable. Using LogMiner involves (or
can involve) a one-time setup, a command to start the utility or session, SQL
queries you run to capture the “redo” or “undo” statements, and a command
to end the session. Data is exposed through dynamic performance views (“v$” views), and the query
to get the SQL statements is actually the easiest part of using LogMiner. An
end-to-end sketch of a session follows the summary below.

Shown below is a summary of the steps involved.

Create a dictionary
- Use a flat file; need utl_file_dir set for this
- Use the redo logs; need to be in archivelog mode
- Use an online catalog

Add logfiles (online or archived)
- The 1st one is “new”
- The 2nd and later are “add”
- Files can also be removed
- Only need one member from each group

Start the session
- Use the print_pretty_sql option for better readability
- Specify a time range or SCNs

End the session
- Ending the session is one statement with no options
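
To put those steps in context before walking through them, here is a
minimal sketch of what a complete session can look like once the
dictionary and logfiles are in place (the dictionary path and the EMP
table used in the query are placeholders for this example):

SQL> execute dbms_logmnr.start_logmnr( -
> dictfilename => 'c:\ora9i\admin\db00\file_dir\dictionary.ora', -
> options => dbms_logmnr.print_pretty_sql);

SQL> select sql_redo, sql_undo
  2  from v$logmnr_contents
  3  where seg_name = 'EMP';

SQL> execute dbms_logmnr.end_logmnr;

The rest of this article covers the first two steps (creating the
dictionary and adding logfiles) in detail.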

Creating a dictionary

LogMiner uses a dictionary to “translate internal object
identifiers and datatypes to object names and external data formats.” (Oracle9i
Database Administrator’s Guide) You have a choice of three types of
dictionaries for LogMiner. To use the flat file version (which I use later in
the article), you will need the UTL_FILE_DIR parameter set.

Having picked the flat file version for my dictionary, what
does it take to set the parameter? Trivia question: if you are using an SPFILE
(or not), can you dynamically set the UTL_FILE_DIR parameter? One way to find
out is to look at what is displayed in the Enterprise Manager Console
“Configuration” window.

In the Console display, there is no checkmark in the
“Dynamic” column for this parameter, so if “utl_file_dir” is not already set, you will have to set
it and bounce the instance.
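
If you prefer SQL*Plus over the Console, the ISSYS_MODIFIABLE column of
v$parameter answers the same question. Because utl_file_dir is static,
setting it goes through the SPFILE and takes effect after a restart; the
directory shown below is simply the one used later in this article:

SQL> select name, issys_modifiable
  2  from v$parameter
  3  where name = 'utl_file_dir';

SQL> alter system set utl_file_dir='c:\ora9i\admin\db00\file_dir' scope=spfile;

SQL> shutdown immediate
SQL> startup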

Before running the DBMS_LOGMNR utility, you may need to
recompile invalid packages. Instead of compiling each invalid object one at a time,
you can make use of the utlrp.sql script. A quick check of the relevant packages:


SQL> select object_name, status
from dba_objects
where object_name like '%LOG%'
and object_type = 'PACKAGE';

OBJECT_NAME                    STATUS
------------------------------ -------
DBMS_INTERNAL_LOGSTDBY         INVALID
DBMS_LOGMNR                    INVALID
DBMS_LOGMNR_D                  INVALID
DBMS_LOGMNR_INTERNAL           INVALID
DBMS_LOGMNR_LOGREP_DICT        INVALID
DBMS_LOGMNR_SESSION            INVALID
DBMS_LOGREP_DEF_PROC           INVALID
DBMS_LOGREP_DEF_PROC_UTL       INVALID
DBMS_LOGREP_EXP                INVALID
DBMS_LOGREP_IMP                INVALID
DBMS_LOGREP_IMP_INTERNAL       INVALID
DBMS_LOGREP_UTIL               INVALID
DBMS_LOGSTDBY                  VALID
LOGMNR_DICT_CACHE              INVALID
(a couple of others are not shown)
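
Running utlrp.sql from its standard location under ORACLE_HOME looks
like the following; once it finishes, re-running the query above should
show the LogMiner packages as VALID:

SQL> @?/rdbms/admin/utlrp.sql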

After your DBMS_LOGMNR package is ready to go, you can
create the LogMiner dictionary. Two recommendations from Oracle are to use a
separate tablespace for LogMiner objects and to enable supplemental logging.
The examples for these recommendations are shown below along with the “create
dictionary” statement. Note that the package used to create the dictionary is
DBMS_LOGMNR_D.


SQL> execute dbms_logmnr_d.build ('dictionary.ora', -
> 'c:\ora9i\admin\db00\file_dir', -
> options => dbms_logmnr_d.store_in_flat_file);

PL/SQL procedure successfully completed.

SQL> execute dbms_logmnr_d.set_tablespace('logmnr_ts');

PL/SQL procedure successfully completed.

SQL> alter database add supplemental log data
2 (primary key, unique index) columns;

Database altered.

Adding logfiles

With these steps completed, we are now ready to add logfiles
to the collection of files we want to examine via SQL statements.
Adding logfiles is very easy, and you can add and remove them at
will. You can also use a shortcut to add files (no need to specify the
“OPTIONS” parameter, as shown in the last example below). Track down the names of your log files (we are
using online redo logs here, not archived logs) with


SQL> select * from v$logfile;

GROUP# STATUS  TYPE    MEMBER
------ ------- ------- ------------------------------------
     3         ONLINE  C:\ORA9I\ORADATA\DB00\REDO03.LOG
     2         ONLINE  C:\ORA9I\ORADATA\DB00\REDO02.LOG
     1         ONLINE  C:\ORA9I\ORADATA\DB00\REDO01.LOG

With these file names in hand, use the ADD_LOGFILE procedure
with the NEW option for the first one, and an optional ADDFILE for subsequent
log files.


SQL> exec dbms_logmnr.add_logfile( -
> logfilename => 'C:\ORA9I\ORADATA\DB00\REDO03.LOG', -
> options => dbms_logmnr.new);

PL/SQL procedure successfully completed.

SQL> exec dbms_logmnr.add_logfile( -
> logfilename => 'C:\ORA9I\ORADATA\DB00\REDO02.LOG', -
> options => dbms_logmnr.addfile);

PL/SQL procedure successfully completed.

SQL> exec dbms_logmnr.add_logfile( -
> logfilename => 'C:\ORA9I\ORADATA\DB00\REDO01.LOG', -
> options => dbms_logmnr.addfile);

PL/SQL procedure successfully completed.

Log files are easily removed:

SQL> exec dbms_logmnr.add_logfile( -
> logfilename => 'C:\ORA9I\ORADATA\DB00\REDO01.LOG', -
> options => dbms_logmnr.removefile);

PL/SQL procedure successfully completed.

and easily replaced:


SQL> exec dbms_logmnr.add_logfile( -
> logfilename => 'C:\ORA9I\ORADATA\DB00\REDO01.LOG');

PL/SQL procedure successfully completed.

Steve Callan
Steve is an Oracle DBA (OCP 8i and 9i)/developer working in Denver. His Oracle experience also includes Forms and Reports, Oracle9iAS and Oracle9iDS.
