Exposure to a database tuning utility
The trend from Oracle is that most, if not all, of a DBA's
interaction with managed databases and instances is heading toward GUIs. This is
especially true when you consider Oracle Applications, Application Server,
and many aspects of Oracle10g. You have heard about it; now you
can see it via a course of instruction, and that is the OEM Performance
Manager. STATSPACK, AUTOTRACE, the "stat" utilities - you've been
there, done that, and aren't you tired of trying to remember all of the views
involved and what the output means?
Think of the evolution of these tools as the relationship
between a slide rule and today's inexpensive yet hugely powerful graphing
calculators. We are way past the days of entering "7734" and reading
it upside down. Now you can graph a function along with its first
derivative and see exactly how fast its rate of change is increasing.
Do you need a little nudge to start using the GUI tools?
Once you see them, you are probably not going to want to go back to the "purist"
way of doing business.
Go through the exercises based on the "old"
utilities to gain a better understanding of what you see in Performance
Manager. If you are totally new to performance tuning, you may not appreciate
how much work you would otherwise have to do by hand to gather the same data in
a text-based format and then interpret it. On the other hand,
many future DBAs will grow up using only the GUI interface, so
don't feel guilty if you cannot rattle off the intersection column between
the V$this and V$that dynamic views.
With one click, you get all of what is shown below. Take
advantage of the "manual" lessons so you can relate that output
to what Performance Manager shows you.
As another example, you can run "select * from dba_blockers;"
and "select * from dba_waiters;" in SQL*Plus to see which sessions are blocking
and which are waiting:
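A minimal sketch of that manual route follows. The two dictionary views are standard, but they require DBA privileges and are created by the catblock.sql script if they are not already present; the follow-up join against V$SESSION is my own illustration, not from the book.

```sql
-- Requires DBA privileges; dba_blockers/dba_waiters are created by
-- $ORACLE_HOME/rdbms/admin/catblock.sql if not already present.

-- Sessions holding locks that other sessions are waiting on:
SELECT * FROM dba_blockers;

-- Waiting sessions, who holds the lock, and the lock modes involved:
SELECT * FROM dba_waiters;

-- A common follow-up (illustrative, not from the book) to identify
-- the blocking sessions by user and program:
SELECT s.sid, s.serial#, s.username, s.program
  FROM v$session s
 WHERE s.sid IN (SELECT holding_session FROM dba_blockers);
```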
Or take advantage of the one-click-away chart shown below:
Coverage of other topics
The Thomson book devotes two sections to LogMiner and the
DBMS_REPAIR utility. In comparison, the Sybex series treats these topics in
its Backup and Recovery chapters. Why the difference? With LogMiner, it depends
on the purpose for using it in the first place. As mentioned in the Database
Administrator's Guide, one of the potential uses for data stored in redo logs
bears directly on tuning.
Potential Uses for Data Stored in Redo Logs
All changes made to user data or to the data dictionary are
recorded in the Oracle redo logs. Therefore, redo logs contain all the
necessary information to perform recovery operations. Because redo log data is
often kept in archived files, the data is already available. To ensure that
redo logs contain useful information, you should enable at least minimal
supplemental logging. The following are some of the potential
uses for data contained in redo logs:
Pinpointing when a logical corruption to
a database, such as errors made at the application level, may have begun. An
example of an error made at the application level would be a user mistakenly
updating a database to give all employees 100 percent salary increases rather
than 10 percent increases. It is important to know exactly when corruption
began so that you know when to initiate time-based or change-based recovery.
This enables you to restore the database to the state it was in just before
corruption began.
Detecting and, whenever possible,
correcting user error, which is a more likely scenario than logical corruption.
User errors include deleting the wrong rows because of incorrect values in a
WHERE clause, updating rows with incorrect values, dropping the wrong index,
and so forth.
Determining what actions you would have
to take to perform fine-grained recovery at the transaction level. If you fully
understand and take into account existing dependencies, it may be possible to
perform a table-based undo operation to roll back a set of changes. Normally
you would have to restore the table to its previous state, and then apply an
archived redo log to roll it forward.
Performance tuning and capacity planning through trend
analysis. You can determine which tables get the most updates and inserts. That
information provides a historical perspective on disk access statistics, which
can be used for tuning purposes.
Performing post-auditing. The redo logs
contain all the information necessary to track any DML and DDL statements
executed on the database, the order in which they were executed, and who
executed them.
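The post-auditing use above can be sketched with the DBMS_LOGMNR package. The archived log file name here is hypothetical, and the dictionary option assumes the source database is open; adjust for your environment.

```sql
-- A minimal LogMiner session (the archived log file name is hypothetical).
-- Supplemental logging should be enabled beforehand:
--   ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    logfilename => '/u01/arch/arch_1_100.arc',
    options     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(
    options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- Who did what, and in what order:
SELECT username, operation, sql_redo, timestamp
  FROM v$logmnr_contents
 ORDER BY timestamp;

EXECUTE DBMS_LOGMNR.END_LOGMNR;
```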
So, that justifies coverage of LogMiner in this particular
case. DBMS_REPAIR, on the other hand, is more commonly viewed as a backup and
recovery topic than as a performance tuning one.
What could be improved in future editions
I would like to see future editions of this book/course
include sections on indexing and the CBO, taking advantage of the new
sample schemas Oracle provides with release 9i. Knowledge of different indexing
techniques, combined with an understanding of how the CBO uses (or does not
use) an index, is essential to performance tuning. The sample schemas contain
enough data to show a noticeable difference in query execution times
when an appropriate type of index is used.
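As a sketch of the kind of exercise I have in mind (using the SH sample schema shipped with 9i; the index name is my own, and exact plans and timings will vary with your data and statistics):

```sql
-- Compare execution plans with and without an index, using the
-- SH sample schema. AUTOTRACE requires the PLAN_TABLE and PLUSTRACE role.
SET AUTOTRACE TRACEONLY EXPLAIN

-- Without a supporting index, expect a full table scan:
SELECT * FROM sh.customers WHERE cust_last_name = 'Smith';

-- Add an index (hypothetical name) and compare the plan:
CREATE INDEX cust_last_name_idx ON sh.customers (cust_last_name);

-- The CBO needs statistics to choose the index:
ANALYZE TABLE sh.customers COMPUTE STATISTICS;

SELECT * FROM sh.customers WHERE cust_last_name = 'Smith';
```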
For the cost and amount of material covered, Thomson's
Course Technology Oracle9i Performance Tuning book is a good deal. If you are
new to performance tuning, this book provides an excellent combination of cost
versus coverage when compared to "brand name" training providers and
their typical rate of $500 per day. Either way, you are not going to walk away
as an expert tuner, but if you want inexpensive, broad exposure to
many performance tuning concepts, you cannot go wrong with the
Thomson book. There are plenty of other books and training programs you can
read or attend after this one. Having a firm grasp of the concepts presented in
this book will benefit you when the same material is covered in more depth
at a much higher dollar-per-day rate.
See All Articles by Columnist Steve Callan