Statistics are at the core of determining whether your performance tuning efforts are working or not. James Koopmann dives into performance methodologies, specifically Oracle’s performance method, to give us a better understanding of how to tune a database with statistics.
In my last article, I started to look at methods of performance tuning, but the discussion quickly turned to throughput, response time, and some of the statistics available for us to use. That is because statistics are at the core of determining whether your tuning efforts are working or whether your system is still suffering from performance problems. This article takes a deeper dive into performance methodologies, specifically Oracle’s performance method, so that we might begin to understand how to start tuning a database, with statistics of course.
The Oracle performance method is rooted in finding bottlenecks, the performance problems that statistics make visible. This is probably a good place to note that statistics can be used either proactively or reactively. Proactive is typically where we would like to be: having all the time in the world to monitor our database and detect issues as they begin to rear their ugly heads. While it is true that we can predict and plan for the future, many more of the instances I have come up against are reactive in nature, causing enormous amounts of stress, only for us to find that something was deployed into a production environment without proper QA.
Whether a performance issue has been detected proactively or reactively, the steps to prepare for a tuning opportunity take a similar form. I have always found, and Oracle agrees, that talking with a user and getting feedback on the issue they are experiencing is critical for determining the scope and goals for improvement. I often give the example of a clerk who has to produce a report every morning. This clerk will typically press the submit button, get up to pour a cup of coffee, chat with a colleague or two, and then head back to his desk to read the report. Under normal circumstances, if he is very chatty, he gets back to his desk in 10 to 15 minutes with the report ready to be read. However, one day, during flu season let’s say, the clerk isn’t as chatty because he has a sore throat. He gets back to his desk early, the report isn’t finished, and he now complains about a perceived performance problem.

This is why talking to users is so important, and being thorough about the types of questions you ask is even more important. Questions about the computer system matter, but questions about habits and the specific tasks of the day can lead to some very interesting results, namely no new performance goals and one less thing to worry about. That said, we could easily insert an angry boss who doesn’t like all the chatter that goes on first thing in the morning. In that case, the clerk might complain that he can’t do his work because the report takes 10 minutes to produce. Now we have a real performance goal and must do something to accomplish it.
Part of preparing to tune the database always involves checking every system involved in the performance of the application, query, or tool in question: making sure there are no red lights on the front of the storage arrays (no hardware issues) and ensuring that operating system resources are not overused or saturated. This can be difficult for a DBA who is not familiar with the relevant environment commands, which is why we should keep a list of questions we can quickly put to our system administrators, network administrators, and fellow DBAs.
While the first preparatory step (questioning users) is largely non-technical, you can see that we turn our focus to statistics and technical details very quickly. After getting a glimpse of the operating system level statistics, or at the same time, we can begin to turn our focus to the database. As a preparatory step to database tuning we must ensure, hopefully before a performance bottleneck is noticed, that STATISTICS_LEVEL has been properly set. It should be set to at least TYPICAL (the default), which is what the automatic performance tuning features, AWR, and ADDM require; setting it to ALL additionally gathers timed operating system statistics and plan execution statistics, at some overhead. In the same regard, Oracle asks that the CONTROL_MANAGEMENT_PACK_ACCESS parameter be set to DIAGNOSTIC or DIAGNOSTIC+TUNING to enable ADDM.
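As a quick sanity check, both parameters can be inspected and, if necessary, adjusted from SQL*Plus. This is a sketch; the right values depend on your version and, for the management packs, on what you have licensed:

```sql
-- Check the current settings (SQL*Plus)
SHOW PARAMETER statistics_level
SHOW PARAMETER control_management_pack_access

-- Or query them directly
SELECT name, value
  FROM v$parameter
 WHERE name IN ('statistics_level', 'control_management_pack_access');

-- TYPICAL (the default) is sufficient for AWR/ADDM;
-- ALL adds timed OS and plan-execution statistics at some overhead
ALTER SYSTEM SET statistics_level = TYPICAL;

-- DIAGNOSTIC+TUNING enables ADDM and the SQL Tuning Advisor
-- (requires the corresponding management pack licenses)
ALTER SYSTEM SET control_management_pack_access = 'DIAGNOSTIC+TUNING';
```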
If we have done the preparatory steps outlined above, it is now time to actually tune the database. The Oracle method of tuning now gives us three opportunities to tune the database, two of which we have already mentioned. So we can tune the database:
1. proactively and on a regular basis – tuning proactively and regularly means nothing more than having a set of tools you can run every day, either by submitting requests manually or having them run automatically for you. More than likely these scans/reports will produce a list of items you can work on, helping you fine-tune the system and reach performance goals. From the Oracle methodology perspective, ADDM is a nice tool for this: it automatically detects and reports on some of the most common performance problems found in Oracle databases.
Also remember that proactive could mean sitting at a console and issuing SQL statements to query internal database statistics, or using Enterprise Manager to drill down into suspected problem areas. Proactive really just means doing something now to determine whether the database is performing well.
2. reactively, after we are told that users or applications are having issues – the opposite of proactive is reactive, and reactively fighting performance problems is something we all wish to avoid; it’s too stressful. Regardless, the Oracle methodology dictates that we still make use of the ADDM reports but lean heavily on historical analysis, possibly using Active Session History (ASH) reports, since they report on past activity.
Often, because bad performance tends to linger (or else it wouldn’t be recognizable), a reactive performance tuning attempt turns into a proactive tuning drill, trying to catch something that is currently having issues or will soon have them again.
3. identifying and tuning resource-hungry SQL statements – ADDM is often at the center of Oracle’s method of tuning, and tuning bad SQL is no different. ADDM can quickly identify high-load SQL statements, those causing the most problems and contention within the database. Tuning these statements can be somewhat of a black art, but there is help in the form of the SQL Tuning Advisor. Tuning SQL should be approached from the standpoint of optimizing data access paths and limiting I/O.
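To make items 1 and 2 a bit more concrete, here is a sketch of the kind of console queries the methodology has in mind. The view names are standard Oracle dictionary views, but the time window and ordering are my own illustrative choices; formatted reports are also available via the addmrpt.sql and ashrpt.sql scripts in $ORACLE_HOME/rdbms/admin:

```sql
-- Item 1 (proactive): review recent ADDM findings
SELECT task_name, type, ROUND(impact) AS impact, message
  FROM dba_advisor_findings
 WHERE type = 'PROBLEM'
 ORDER BY impact DESC;

-- Item 2 (reactive): what have sessions been waiting on in the
-- last 30 minutes? (ASH data; Diagnostics Pack required)
SELECT event, wait_class, COUNT(*) AS samples
  FROM v$active_session_history
 WHERE sample_time > SYSDATE - 30/1440
   AND event IS NOT NULL
 GROUP BY event, wait_class
 ORDER BY samples DESC;
```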
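For item 3, the high-load statements that ADDM flags can also be spotted directly. The following query against V$SQL is a sketch; ordering by total elapsed time is just one reasonable choice (buffer gets or disk reads are equally valid, depending on whether you are chasing CPU or I/O):

```sql
-- Top 10 SQL statements by total elapsed time since load
SELECT *
  FROM (SELECT sql_id,
               elapsed_time / 1e6      AS elapsed_secs,
               executions,
               buffer_gets,
               disk_reads,
               SUBSTR(sql_text, 1, 60) AS sql_text
          FROM v$sql
         WHERE executions > 0
         ORDER BY elapsed_time DESC)
 WHERE ROWNUM <= 10;
```

Candidates identified this way can then be handed to the SQL Tuning Advisor (for example through the DBMS_SQLTUNE package) for access-path recommendations.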
I said it in the last article, said it at the beginning of this one, and will say it again: tuning a database is firmly rooted in statistics gathering. Without statistics, and the ability to relate them to workloads, we would not be able to tune our databases. Only after accumulating statistics can we begin to create a methodology for tuning databases. The Oracle method of performance tuning is no different; only the steps and the tools vary.