Database Performance Tuning: Statistical Gathering


There are many database products out there to help you tune your database. Some are good to great and others are not so good. However, regardless of the tool you use, or are looking to purchase, you should first understand some form of tuning methodology. After all, if you don’t have a method in mind for tuning your database, a whiz-bang tuning product that promises to automatically tune your database at the touch of a button might just win you over, AND waste your money.

As anyone who knows what they are doing will tell you, tuning for optimal database performance is an iterative process, often fought one battle at a time as you remove the newest and greatest bottleneck on your system. We must also accept that there is always some form of bottleneck on the system; if that weren’t the case, we could run all our databases on a single-CPU box with 256K of memory. In this article, we will continue our walk through the Oracle 2 Day + Performance Tuning Guide, specifically Chapter 2, titled Oracle Database Performance Method.

Before beginning this chapter, there are probably a few terms that you should understand, especially if you’re new to performance tuning.

Throughput – the amount of work that your database can complete in a designated timeframe. This should not be confused with workload, which is the amount of work requested of a database.

Response time – simply the amount of time it takes to complete a specific amount of work. That work can be as simple as a single SQL statement or a larger unit of work, such as a report that contains many SQL statements.

A lack of throughput and/or poor response time is what many would consider poor performance, and it is what initiates a tuning effort. For that reason Oracle has come up with its own (and a good one) database performance method, which includes gathering statistics with the Automatic Workload Repository, applying the Oracle Performance Method, and identifying common performance problems, all to help us tune our databases.

Gathering statistics is probably the easiest of these three areas, and it provides us, Oracle tools, and third-party tuning tools with the ammunition required to suggest how we might begin to tune our databases, giving valuable insight into the loads, resources, and timing associated with database usage. For Oracle, it is the Automatic Workload Repository (AWR) that automates database statistic gathering. By default, AWR gathers statistics every hour and creates an AWR snapshot. An AWR snapshot is nothing more than a point-in-time image of the statistic counts; snapshots can be aggregated, compared, and otherwise analyzed over greater periods of time or across periods of time, allowing us to make performance assessments and develop tuning strategies.
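If you want a snapshot outside the hourly default, or want to adjust how often snapshots are taken and how long they are kept, the DBMS_WORKLOAD_REPOSITORY package can be used. A minimal sketch (the 30-minute interval and 14-day retention are illustrative choices, not recommendations):

```sql
-- Take an AWR snapshot on demand (requires appropriate privileges).
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- Change the snapshot interval to 30 minutes and keep 14 days of
-- history; both parameters are expressed in minutes.
BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    interval  => 30,
    retention => 14 * 24 * 60);
END;
/
```

Shortening the interval gives finer-grained comparisons at the cost of more repository space; most shops leave the default unless they are chasing a specific problem.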

Two initialization parameters are important when talking statistical gathering for AWR:

STATISTICS_LEVEL – this parameter tells AWR whether to enable statistics gathering and at what level. TYPICAL, the default setting, collects all the major statistics and is typically enough for most environments. When set to ALL, some additional statistics are collected, such as timed operating system statistics and plan execution statistics. The third setting, BASIC, disables the collection of most relevant statistics.

CONTROL_MANAGEMENT_PACK_ACCESS – the default setting of DIAGNOSTIC+TUNING, or the DIAGNOSTIC setting, enables automatic database diagnostic monitoring. Setting this parameter to NONE disables it.
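Checking and setting these two parameters is straightforward; a sketch (SHOW PARAMETER is SQL*Plus syntax, and the values shown are simply the defaults):

```sql
-- Check the current settings.
SHOW PARAMETER statistics_level
SHOW PARAMETER control_management_pack_access

-- Set them instance-wide to the default, recommended values.
ALTER SYSTEM SET statistics_level = TYPICAL;
ALTER SYSTEM SET control_management_pack_access = 'DIAGNOSTIC+TUNING';
```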

For AWR, there are different types of statistics that are collected. These statistics types include:

Time Model Statistics – the time spent in the database, categorized by operation type. For instance, there is a statistic called database time (DB time) that represents the total time spent in database calls across sessions. DB time is often referred to as the total instance workload because it accumulates the time all requests have been connected to the database either consuming resources or waiting for resources, excluding idle time.
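The current time model counters can be inspected directly through the V$SYS_TIME_MODEL view; a sketch pulling the two most commonly watched entries:

```sql
-- Instance-level time model statistics; values are cumulative
-- microseconds since instance startup.
SELECT stat_name, value
  FROM v$sys_time_model
 WHERE stat_name IN ('DB time', 'DB CPU');
```

A large gap between DB time and DB CPU suggests sessions are spending much of their time waiting rather than working.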

Wait Event Statistics – these statistics are usually what people would consider the bottleneck in a database, because wait events tell us which resource a particular session (or the system as a whole, when accumulated) is waiting on most while completing a unit of work. It is these wait events that help reveal what problems might be impacting performance.
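Accumulated system-wide wait events can be viewed through V$SYSTEM_EVENT; a sketch listing the top non-idle waits (the FETCH FIRST row-limiting clause assumes Oracle 12c or later; use a ROWNUM predicate on older releases):

```sql
-- Top non-idle wait events since instance startup, by time waited.
SELECT event, total_waits, time_waited_micro
  FROM v$system_event
 WHERE wait_class <> 'Idle'
 ORDER BY time_waited_micro DESC
 FETCH FIRST 10 ROWS ONLY;
```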

Session and System Statistics – I hinted at this under wait event statistics, but there are two levels of statistics kept within AWR. Some statistics are kept at the session level, so we can interrogate what might have happened for individual database connections; others are kept at the system level, like the database time (DB time) above, and give us an indication of the health of the database as a whole.
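The same counter can be read at both levels through the V$ views; a sketch using 'user commits' as the example statistic:

```sql
-- System level: one cumulative value for the whole instance.
SELECT name, value
  FROM v$sysstat
 WHERE name = 'user commits';

-- Session level: the same statistic broken out per connected session.
SELECT s.sid, n.name, t.value
  FROM v$sesstat  t
  JOIN v$statname n ON n.statistic# = t.statistic#
  JOIN v$session  s ON s.sid = t.sid
 WHERE n.name = 'user commits';
```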

Active Session History Statistics – otherwise known as ASH, active session history is just what it sounds like: a sampling of session activity, taken every second for active sessions and stored in a circular buffer in the SGA. Two key concepts must be understood here. First, this is historical information that may or may not relate to a currently active session. Second, because the buffer is circular, the span of time covered by the statistical data within ASH depends on database activity; a busy database will cycle statistics out of the buffer faster than a quiet one.
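The in-memory ASH samples are exposed through V$ACTIVE_SESSION_HISTORY; a sketch summarizing what active sessions have been doing recently (the 10-minute window is an arbitrary illustrative choice):

```sql
-- What active sessions were on CPU or waiting on over the last
-- 10 minutes, from the circular ASH buffer in the SGA.
SELECT session_state, event, COUNT(*) AS samples
  FROM v$active_session_history
 WHERE sample_time > SYSTIMESTAMP - INTERVAL '10' MINUTE
 GROUP BY session_state, event
 ORDER BY samples DESC;
```

Because each row is a one-second sample, the sample counts roughly approximate seconds of time spent in each state.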

High-Load SQL Statistics – everything related to performance tuning is rooted in SQL statements; without SQL statements, we would have no workload. The high-load SQL statistics represent those SQL statements that consume the most resources and cause the highest load on the system, allowing us to go directly to the SQL statements with the highest elapsed times and CPU usage.
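A quick way to find the current high-load statements is to rank what is still in the shared pool via V$SQL; a sketch (again, the FETCH FIRST clause assumes 12c or later):

```sql
-- SQL with the highest cumulative elapsed time still in the
-- shared pool; elapsed_time and cpu_time are in microseconds.
SELECT sql_id, executions, elapsed_time, cpu_time,
       SUBSTR(sql_text, 1, 60) AS sql_text
  FROM v$sql
 ORDER BY elapsed_time DESC
 FETCH FIRST 10 ROWS ONLY;
```

For history beyond what the shared pool retains, the AWR view DBA_HIST_SQLSTAT offers the same kind of ranking across snapshots.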

Whatever method of performance tuning you take must be firmly rooted in statistical gathering. Without statistics, and the ability to relate them to workloads, we would not be able to tune our databases. Only after accumulating statistics can we begin to apply a methodology for tuning databases. Stick around for the next installment of the 2 Day + Performance Tuning Guide series to see how we can use Oracle’s performance methodologies.

» See All Articles by Columnist James Koopmann

James Koopmann
James Koopmann has fourteen years of database design, development and performance tuning experience. In addition, he has extensive database administration experience in Oracle and other relational databases in production environments, specializing in performance tuning of database engines and SQL based applications. Koopmann is an accomplished author with several technical papers in various Oracle related publications such as Oracle Magazine, Oracle Professional and SQL>UPDATE_RMOUG. He is a featured author and database expert for DatabaseJournal, a member of the editorial review committee for Select Journal (The Magazine for the International Oracle Users Group), an Oracle Certified Professional DBA and noted speaker at local Oracle User Groups around the country.
