Database Journal
Posted May 1, 2008

Create your own reports with Oracle's Snapshot Information

By James Koopmann


If you are tired of running the same old Oracle AWR reports, not getting to the data quickly, or just need to write your own, then this article will get you started.

Oracle’s Automatic Workload Repository (AWR) collects and maintains, through snapshots, a set of statistics that can be used for performance tuning via canned reports. Wow! That was a lot of information in one sentence, and it assumes we are all familiar with AWR, snapshots, and the underlying structures. We are only going to touch on AWR lightly in this article, but probably in a way you haven’t before. First, AWR is nothing more than, as the name states, a repository that holds statistical information. This information is collected through a snapshot (a point-in-time collection) process that inserts rows into structures that can be queried at a later date for reporting.
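
Before querying the repository, it can help to confirm how often snapshots are taken and how long they are kept. A quick sanity check (assuming you have SELECT privilege on the DBA_HIST views) is:

-- Show the current AWR snapshot interval and retention period
select snap_interval, retention
  from dba_hist_wr_control;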

By default, you typically look at AWR data and reports through Oracle’s Enterprise Manager. If you are a command-line junkie, you might instead use one of the provided AWR report scripts, such as the following—each of which will ask you for information like the beginning and ending snapshot IDs and the instance against which you want to report.

$ORACLE_HOME/rdbms/admin/awrrpt.sql   - Standard AWR report
$ORACLE_HOME/rdbms/admin/awrrpti.sql  - AWR report for a specific instance
$ORACLE_HOME/rdbms/admin/awrsqrpt.sql - AWR report for a SQL statement

You should have noticed the similarity among these script files (there are more): they all live in the $ORACLE_HOME/rdbms/admin directory and they all start with awr. You can run them at any time, and I encourage you to do so. This is mostly where we now venture off in our attempt to imitate some of their behavior. Please do note that some of these scripts are meant to be run only under the supervision of Oracle Support. Be cautious and read a script before running it.
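
If you want a fresh data point before running any of these reports, you can also take a snapshot on demand with the DBMS_WORKLOAD_REPOSITORY package (this assumes you have the privileges to execute it; it is not part of the script we build below):

-- Take an AWR snapshot right now instead of waiting for the next interval
exec dbms_workload_repository.create_snapshot;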

Since I plan to take you through a complete script, have patience. The first thing we need to do is set up some general SQL*Plus environment settings, so the first part of our script is:

set linesize 132
set echo     off
set feedback off
set heading  off
set pagesize 55
set verify   off

Now that we’ve got that out of the way, let’s begin by asking for the number of days to look back for snapshots. We can do this with the following SQL, which joins the DBA_HIST_DATABASE_INSTANCE and DBA_HIST_SNAPSHOT views—these show historical information on the snapshots in the Workload Repository. We obviously need to join these views on dbid, instance_number, and, the important part, startup_time. We also need to make sure we only bring back snapshots newer than the number of days back specified by the user, by comparing against the end time of the actual snapshot (end_interval_time). Please note that this script will output a status of ‘**db restart**’ for those times the database was down and unavailable. This is very important, as it shows us when Oracle was not collecting statistics (the database was down) and, more importantly, when the statistic counters were zeroed. We can report on a bounce condition when startup_time and begin_interval_time are the same.

prompt Enter the number of days to look for snapshot IDs
prompt ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
select dhs.snap_id,
       dhdi.instance_name,
       to_char(dhs.begin_interval_time,'MM/DD/YYYY:HH24:MI') begin_snap_time,
       to_char(dhs.end_interval_time,'MM/DD/YYYY:HH24:MI')   end_snap_time,
       decode(dhs.startup_time,dhs.begin_interval_time,'**db restart**',null) db_bounce
  from dba_hist_snapshot dhs,
       dba_hist_database_instance dhdi
 where dhdi.dbid             = dhs.dbid
   and dhdi.instance_number  = dhs.instance_number
   and dhdi.startup_time     = dhs.startup_time
   and dhs.end_interval_time >= sysdate - &&num_days_back
 order by db_name, instance_name, snap_id;


The columns of interest in these two views are:

DBA_HIST_SNAPSHOT
SNAP_ID             - snapshot ID
DBID                - Database ID for the snapshot
INSTANCE_NUMBER     - Instance number
STARTUP_TIME        - Time the instance was started
BEGIN_INTERVAL_TIME - Beginning time of the snapshot
END_INTERVAL_TIME   - The time (ending) the snapshot was taken for the given interval

DBA_HIST_DATABASE_INSTANCE
DBID                - Database ID
INSTANCE_NUMBER     - Instance number
STARTUP_TIME        - Time the instance was started
DB_NAME             - Database Name
INSTANCE_NAME       - Instance Name
The next part of the script does nothing more than ask you to look back at the output from the previous SQL and select a beginning snapshot ID and an ending snapshot ID. These values are kept in the variables beg_snap_id and end_snap_id, respectively.

prompt Enter Beginning snapshot ID
prompt ^^^^^^^^^^^^^^^^^^^^^^^^^^^
prompt &&beg_snap_id
prompt Enter Ending   snapshot ID
prompt ^^^^^^^^^^^^^^^^^^^^^^^^^^
prompt &&end_snap_id

I always like to give my reports a unique name, so the next part of the script provides that granularity. Notice that I prefix the name with the instance_name and include beg_snap_id and end_snap_id in it. Also, I put my results in a subdirectory named LST. Please name your report something meaningful.

set termout off
column rpt new_value rpt
select instance_name||'_wrh_sample_'||&&beg_snap_id||'_'||&&end_snap_id||'.LST' rpt from v$instance;
set termout on
prompt ^^^^^^^^^^^^^
prompt Report Name : ../LST/&&rpt
prompt ^^^^^^^^^^^^^
spool ../LST/&&rpt

Now pull some statistics out of the Workload Repository History tables—for the beginning snap ID. We are all familiar with the V$SYSSTAT view, and the Workload Repository History has a very similar one named WRH$_SYSSTAT. Join this table with WRH$_STAT_NAME for the statistic names, and we can easily extract any collected statistic for our snapshot ID. In this example, I am pulling out the values for physical reads and writes for the beginning snapshot ID; these are stored in the variables tbr1 and tbw1, respectively. Probably the most confusing part of this script is the ‘SUM(DECODE’ construct. All it does, as rows are processed, is add the value to the result column when the statistic name matches the name in the DECODE. There is only one row for each statistic, and I could have written a simple SELECT..FROM..WHERE for each statistic name, but I like having just one SQL statement that scans the table once, as opposed to once for every statistic.

column db_bounce new_value db_bounce
column tbr1      new_value tbr1
column tbw1      new_value tbw1
set termout off
select sum(decode(stat_name,'physical read total bytes',value,0))  tbr1,
       sum(decode(stat_name,'physical write total bytes',value,0)) tbw1
  FROM wrh$_sysstat, wrh$_stat_name
 WHERE wrh$_sysstat.stat_id = wrh$_stat_name.stat_id
   AND wrh$_sysstat.snap_id = &&beg_snap_id;
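
If the DECODE syntax reads awkwardly, the same one-pass extraction can be written with ANSI CASE expressions. This is an equivalent sketch, not part of the original script:

-- Same pivot as the DECODE version: scan WRH$_SYSSTAT once,
-- keeping each statistic's value in its own column
select sum(case when stat_name = 'physical read total bytes'  then value else 0 end) tbr1,
       sum(case when stat_name = 'physical write total bytes' then value else 0 end) tbw1
  from wrh$_sysstat, wrh$_stat_name
 where wrh$_sysstat.stat_id = wrh$_stat_name.stat_id
   and wrh$_sysstat.snap_id = &&beg_snap_id;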

Next, check the bounce flag to see if we should zero out the beginning variables. Since Oracle accumulates statistics over time and zeroes them when a bounce occurs, we need to zero the beginning variables so that, later in the script, subtracting them from the ending variables does not produce a negative or irrelevant number.

select decode(startup_time,begin_interval_time,1,0) db_bounce
  from dba_hist_snapshot
 where snap_id = &&end_snap_id;
select decode(&&db_bounce,1,0,&&tbr1) tbr1,
       decode(&&db_bounce,1,0,&&tbw1) tbw1
  FROM dual;
set termout on

This is just to print out the beginning variables to the screen—nice and pretty like.

prompt ^^^^^^^^^^^^
prompt First Sample
prompt ^^^^^^^^^^^^
prompt Total Bytes Read      : &&tbr1
prompt Total Bytes Written   : &&tbw1

Now it is time to get the values for the same statistics at the time of the second snapshot. This SQL is exactly the same as the one above for beg_snap_id, except that it uses the end_snap_id already captured from the list of snapshot IDs displayed earlier in the script.

column tbr2 new_value tbr2
column tbw2 new_value tbw2
set termout off
select sum(decode(stat_name,'physical read total bytes',value,0))  tbr2,
       sum(decode(stat_name,'physical write total bytes',value,0)) tbw2
  FROM wrh$_sysstat, wrh$_stat_name
 WHERE wrh$_sysstat.stat_id = wrh$_stat_name.stat_id
   AND wrh$_sysstat.snap_id = &&end_snap_id;
set termout on

The ending variables need no check against a database bounce, so we can just print them to the screen.

prompt ^^^^^^^^^^^^^
prompt Second Sample
prompt ^^^^^^^^^^^^^
prompt Total Bytes Read      : &&tbr2
prompt Total Bytes Written   : &&tbw2

Now we just subtract the beginning statistical values from the ending values to get the amount between the two snapshots. Then print the results to the screen.

prompt ^^^^^^^^^
prompt Results :
prompt ^^^^^^^^^
column tr  new_value tr
column tw  new_value tw
set termout off
select ROUND(&&tbr2-&&tbr1) tr,
       ROUND(&&tbw2-&&tbw1) tw
  FROM dual;
set termout on
prompt Total Bytes Read      = &&tr
prompt Total Bytes Written   = &&tw

A little house cleaning to clear out the variables we used.

spool off
undefine num_days_back
undefine beg_snap_id
undefine end_snap_id
undefine db_bounce

This may seem like a lot of work to subtract a few statistical values, but the beauty is that you can extend this simple (okay, semi-simple) script to do a lot of work around the selected snapshot IDs—most of which you could not do with any clever single SQL statement. Moreover, the most important part is that the existing AWR reports do not report on every type of statistical information you need to tune your database. I have often found myself punching a calculator to find some metric. No longer do I, or you, have to do this. Create a script such as this, put in your own statistic names, and do the calculations at the end. Happy reporting.
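
As an example of such an extension, adding redo generation to the report only means adding another statistic name to each of the pivot queries and another pair of substitution variables. The variable name rds1 below is my own invention, not part of the original script:

-- Hypothetical extension: capture redo generated as of the beginning snapshot.
-- Repeat with rds2 and &&end_snap_id, then subtract as done for the byte counts.
column rds1 new_value rds1
select sum(decode(stat_name,'redo size',value,0)) rds1
  from wrh$_sysstat, wrh$_stat_name
 where wrh$_sysstat.stat_id = wrh$_stat_name.stat_id
   and wrh$_sysstat.snap_id = &&beg_snap_id;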

» See All Articles by Columnist James Koopmann
