How to Do SQL Server Performance Trend Analysis Part 1: Using Performance Monitor to Log Data

In this four-part tutorial, you will learn how to use NT Server 4.0’s Performance Monitor and Microsoft Excel to monitor and analyze SQL Server performance. You will also learn how to use a SQL Server database to store your Performance Monitor logs. This tutorial assumes that you already know the basics of using Performance Monitor, Excel, and of course, SQL Server.

This is part one of the four-part tutorial. This part covers how to use Performance Monitor to capture log data. The second part discusses how to use SQL Server to store Performance Monitor log data. Part three will show you how to use Microsoft Excel to analyze the Performance Monitor data. And part four will show you how to interpret your results.

 

Background

Before we begin this tutorial, let’s take a moment to first discuss why we want to monitor SQL Server performance. As you probably already know, SQL Server is very good at tuning itself. It has the ability to monitor itself, and through a feedback loop, it knows how to internally adjust and tune itself so that it keeps running efficiently, even when external events, such as the number of user connections or the amount of available RAM, change over time.

But as we all know, SQL Server’s ability to self-tune is not perfect, and it does not take into consideration every possible factor that affects its performance. As DBAs, we need to help SQL Server along, providing it with the resources it needs to do a good job of serving up data.

As good DBAs, we don’t want to find out from our users that SQL Server is having a performance problem. Instead, we want to be proactive and catch potential performance problems before our users ever notice them. That is where NT Server 4.0’s Performance Monitor can help. It is a tool that allows us to monitor what is going on with our SQL Servers, and it provides the information we need to make decisions on how best to tune them.

Performance Monitor is an important tool, because it not only provides us with information on how SQL Server is performing, but it also lets us know how NT 4.0 Server is doing, which of course directly affects SQL Server’s performance.

With this in mind, the goal of this tutorial is to show you how to use Performance Monitor to capture NT- and SQL Server-related data, and then analyze it for trends using both Performance Monitor itself and Microsoft Excel. And because we need a place to store our Performance Monitor logs, you will also learn how to store them in a SQL Server database.

 

Performance Monitoring Never Stops

Monitoring the performance of NT 4.0 Server and SQL Server is not a one-time event. As a DBA, you should be constantly monitoring the performance of all of your SQL Servers, whether you have one server or a hundred.

As described later in this tutorial, you should be using Performance Monitor to capture performance counters and store them in a log. Periodically, this log should be exported to a storage area for long-term keeping. This storage area could be a network folder holding the raw Performance Monitor log files, a SQL Server database, an Access database, or an Excel spreadsheet, whichever best meets your needs. In our case, we will be using SQL Server to store this data.

I have chosen SQL Server for several reasons. First, it is readily available. Second, as DBAs, we already know how to use it. Third, it can easily handle all of our storage needs. Fourth, it is easy to import Performance Monitor logs into SQL Server, and just as easy to export the log data to Excel.

Once our Performance Monitor log data is in a SQL Server database, we can then export it to Excel for detailed analysis. And since the data is in SQL Server, we can use the full power of SQL Server to SELECT only the data we actually want to analyze in Excel.
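To make this idea concrete, here is a minimal Transact-SQL sketch. The table name (PerfMonLog), its columns, the server name, and the counter names shown are hypothetical placeholders for illustration only; part two of this tutorial covers the actual import and storage steps in detail.

-- Hypothetical table layout: one row per counter value per collection interval.
CREATE TABLE PerfMonLog (
    SampleTime   datetime      NOT NULL,  -- when the sample was collected
    ServerName   varchar(30)   NOT NULL,  -- which server was being monitored
    ObjectName   varchar(50)   NOT NULL,  -- e.g. 'SQLServer:Buffer Manager'
    CounterName  varchar(50)   NOT NULL,  -- e.g. 'Buffer cache hit ratio'
    CounterValue decimal(18,4) NOT NULL   -- the logged counter value
)

-- Pull only the counter and date range we want to analyze in Excel.
SELECT SampleTime, CounterValue
FROM PerfMonLog
WHERE ServerName  = 'SQLPROD1'
  AND ObjectName  = 'SQLServer:Buffer Manager'
  AND CounterName = 'Buffer cache hit ratio'
  AND SampleTime >= '19990101' AND SampleTime < '19990201'
ORDER BY SampleTime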

The main reason we want to store our Performance Monitor logs is to be able to identify trends in performance. Looking at Performance Monitor logs for a day, or even a week, can’t really tell us if we will need to add CPUs, or more disk arrays, to our server next year. What we need are months of data, perhaps years of data, that we can use to make reliable predictions on what our SQL Server resource needs will be in the future.

So one of the biggest benefits of long-term NT Server 4.0 and SQL Server monitoring is being able to predict our future server needs. While this is important, we also need these same logs to help us identify current performance bottlenecks and other potential performance-related problems. The more information we have at our disposal, the better decisions we can make.
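As one hedged example of what such trend analysis might look like, the following query, written against the same hypothetical PerfMonLog table sketched above, rolls months of raw samples up into monthly averages and peaks that can then be charted in Excel. The object and counter names are just examples.

-- Monthly average and peak for one counter, suitable for charting a long-term trend.
SELECT DATEPART(yy, SampleTime) AS SampleYear,
       DATEPART(mm, SampleTime) AS SampleMonth,
       AVG(CounterValue)        AS AvgValue,
       MAX(CounterValue)        AS PeakValue
FROM PerfMonLog
WHERE ObjectName  = 'Processor'
  AND CounterName = '% Processor Time'
GROUP BY DATEPART(yy, SampleTime), DATEPART(mm, SampleTime)
ORDER BY SampleYear, SampleMonth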

 

Selecting a Performance Monitor View

Performance Monitor has four operational “views”. They are:

  • Chart View: For performance counter viewing and analysis

  • Alert View: For creating performance-based alerts

  • Report View: For viewing performance data in a tabular format

  • Log View: For logging performance data to a log file

Our focus here will be on the Log View, which allows us to select the Performance Monitor counters we want to collect and log, along with how often we want to collect the data.

 

Determining Where to Run Performance Monitor

The first step in using Performance Monitor is to decide where it will run. You can run it on the same server it is monitoring, or on any other NT 4.0 Server or Workstation on the network. If you run Performance Monitor on the same server you are monitoring, it will affect the results, although not significantly on most servers.

I prefer to run Performance Monitor from an NT 4.0 Workstation, connecting to the monitored server over the network. This moves most of the monitoring load off the server being monitored, at the cost of a small amount of additional network activity.

Another option is to use the service-based version of Performance Monitor available in the NT Server 4.0 Resource Kit. It allows you to run Performance Monitor as a service rather than as a foreground application, which can help reduce the overhead of running Performance Monitor. For this tutorial, we will be using the standard Performance Monitor software that comes with NT 4.0 Server and Workstation.

Once you have decided where to run Performance Monitor, the next step is to decide how many SQL Servers you want to monitor. Performance Monitor can monitor up to 10 different servers simultaneously. If you decide to monitor more than one server at a time, you will definitely want to dedicate a computer, such as an NT 4.0 Workstation, to this task. If you need to monitor more than 10 servers at a time, you can open multiple instances of Performance Monitor on the same computer.

 

Choose Which Objects to Monitor and Log

The next step is to decide which Performance Monitor counters you want to monitor and log. Depending on how your server is configured, there may be over 400 different counters you could measure. But that is many more counters than you will ever need. And even if you were to try to monitor that many counters, you would generate huge amounts of data, potentially more than a megabyte a minute, depending on the log interval.

Unlike the Chart View, the Log View feature of Performance Monitor does not let you select individual counters to monitor. Instead, you must select entire Performance Monitor Objects. This means that if a particular Performance Monitor object has 25 counters in it, and you only want to monitor one of them, you will still get all 25 counters. There is no way around this.

So you must decide which Performance Monitor Objects have the counters you want to monitor, and then add them to the Performance Monitor Log View. Keep in mind that the more Objects you choose, the more data is collected, and the greater the resources used to collect the data. While you will want to collect all the Objects you need, don’t collect data on any Objects you won’t use.

While each DBA has their own preferences on which Performance Monitor Objects to monitor and log, I usually stick to these:

  • Memory

  • Network Segment

  • PhysicalDisk

  • Processor

  • Server

  • System

  • SQL Server: Access Methods

  • SQL Server: Buffer Manager

  • SQL Server: General Statistics

  • SQL Server: Locks

  • SQL Server: SQL Statistics

For the most part, these Objects include all the counters I use to monitor both NT 4.0 Server and SQL Server, although, of course, I don’t use all of the counters in all of these Objects.

 

Determining the Collection Interval

Once you have selected the Performance Monitor Objects to collect, the next step is to determine the collection interval. The collection interval determines how often data is sampled and stored in the log. The default interval is 15 seconds. This means that all the counters for all the Objects are collected and stored in the log every 15 seconds. The counter values that are collected are the averages over the 15-second interval. If you use an interval of 60 seconds, then the data is averaged and collected over 60 seconds.

The shorter the interval, the more granular the data; the longer the interval, the less granular the data. If the data is not granular enough, you may miss important spikes, because they are averaged away over the collection interval.

The length of the collection interval also determines how much data is collected, and the relationship is an inverse one: the shorter the interval, the more data is collected, and the longer the interval, the less data is collected.
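To get a feel for this relationship, the short Transact-SQL calculation below estimates how many samples land in the log per day for a given interval, along with a very rough log size. The bytes-per-sample figure is purely an assumption for illustration; the real size of each logged sample depends on how many Objects, counters, and instances you are collecting, so measure your own logs before relying on any estimate.

-- Rough, illustrative estimate only; @BytesPerSample is an assumed value.
DECLARE @IntervalSeconds int, @BytesPerSample int
SELECT @IntervalSeconds = 60,     -- the collection interval, in seconds
       @BytesPerSample  = 50000   -- assumed size of one logged sample across all Objects

SELECT 86400 / @IntervalSeconds AS SamplesPerDay,
       CAST((86400.0 / @IntervalSeconds) * @BytesPerSample / 1048576 AS decimal(10,1)) AS ApproxMBPerDay

Doubling the interval halves both numbers, which is exactly the trade-off between granularity and log size described above.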

You may want to experiment with various collection intervals to find the one that is best for your environment. The collection interval I use depends on how long I am collecting the data for. Generally, if I collect data for 8 hours or less, I will often choose an interval of 30-60 seconds. But if I am collecting data around the clock, then I choose an interval of 300-600 seconds; otherwise, the amount of data collected quickly becomes overwhelming.

 

Start Collecting the Data

Now that you have made all of these decisions, don’t forget to save your work before you begin collecting data. You can use the File|Save Log Settings option to save a settings file with all the information you have entered, such as the Objects you selected, so you can load it again later if you need to.

Once your settings are saved, you are ready to begin collecting your data. When you start, watch the screen for a while to see how fast the log is growing. If it seems to be growing too quickly, you may be collecting too many Objects, or collecting them too often. In that case, stop the logging, make a couple of changes to reduce the amount of logged data, and try again.

If you are collecting data around the clock, you will need to periodically (say, once a day) stop the log and then restart it immediately using a new file name. I like to use the day’s date as my file name. If you don’t do this, your log file will grow huge.

As the logs are written out, such as once a day, your next task is either to view them directly in the Chart View of Performance Monitor for a quick-and-dirty analysis, or to import the data into a SQL Server database for later analysis using Microsoft Excel.
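If you do choose the SQL Server route, the import itself is the subject of part two. Purely as a preview, and assuming the log has first been exported from Performance Monitor as a comma-delimited file and reshaped to match the hypothetical PerfMonLog layout shown earlier (one row per counter sample), a bulk load might look something like the sketch below; the file path and name are placeholders.

-- Illustrative only: load a reshaped, comma-delimited export into the hypothetical table.
-- FIRSTROW = 2 skips a single header line in the exported file.
BULK INSERT PerfMonLog
FROM 'C:\PerfLogs\19990315.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2)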

Now read part two of this four-part tutorial, and learn how to import Performance Monitor log data into SQL Server for later trend analysis.

 
