Scripting Traces for Performance Monitoring on SQL Server
July 30, 2003
The SQL Profiler is a great tool for monitoring and analyzing SQL Server performance. I use it all the time to watch the detailed actions of a stored procedure, trigger, or user-defined function (UDF) that I am developing. It can also be used to monitor the aggregate performance of an entire SQL Server instance or a single database, or to isolate performance problems. When you are interested in overall performance, running a trace regularly (by that I mean every day) improves your knowledge of your system and its performance characteristics, and provides the information you need to spot trends and changes in behavior.
SQL Profiler is a graphical interface to SQL Server's trace capability, but it is not the only way to run a trace. Traces can also be created by calling a group of system stored procedures whose names all begin with the characters "sp_trace_". In fact, SQL Profiler uses these stored procedures behind the scenes to do the tracing that you request.
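To give a feel for how these procedures fit together, here is a minimal sketch of a file-based trace built directly with the sp_trace_* procedures. The file path, file size limit, and the particular event and columns captured are illustrative choices, not requirements; a real trace would typically capture several events and many more columns.

```sql
-- Minimal sketch: create, configure, and start a trace with sp_trace_*.
DECLARE @TraceID int, @maxfilesize bigint, @on bit
SET @maxfilesize = 50   -- rollover size for the trace file, in MB
SET @on = 1

-- Create the trace; SQL Server appends .trc to the file name.
-- (The path C:\Traces\DailyTrace is illustrative.)
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\DailyTrace',
     @maxfilesize, NULL

-- Capture event 10 (RPC:Completed) with a few columns:
-- 1 = TextData, 12 = SPID, 13 = Duration
EXEC sp_trace_setevent @TraceID, 10, 1, @on
EXEC sp_trace_setevent @TraceID, 10, 12, @on
EXEC sp_trace_setevent @TraceID, 10, 13, @on

-- Start the trace (status 1 = start; 0 stops it, 2 closes and deletes it)
EXEC sp_trace_setstatus @TraceID, 1
```

Note that every event/column combination you want recorded requires its own sp_trace_setevent call, which is why Profiler-generated scripts tend to be long.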
Traces can be sent to the SQL Profiler window, a database table, or a sequential disk file. However, tracing is not resource-free. In fact, it can consume considerable resources, particularly if you send the trace to SQL Profiler. Since the most important time to monitor is the time of maximum server load, you do not want the act of measurement to slow the system unnecessarily. For most applications that run in a business environment, peak activity occurs in the middle of the afternoon.
The disk file is by far the fastest destination for a trace, and it interferes the least with the performance of SQL Server. However, for analysis purposes, a table is the best destination. Once the data is in a table, the information can be sliced and diced to your heart's content. Most importantly, trend reporting, using data from multiple days, becomes possible.
This article presents a technique for writing a stored procedure that creates a trace. To achieve the best performance, the proc sends its output to a file on disk. A second step moves the data from the trace file into a SQL table for analysis. The second step doesn't have to be performed exactly when the data is recorded. In fact, it's usually loaded overnight when server utilization is low and then followed by any standard reports. By using the file as the trace destination and loading the data overnight, this solution minimizes the peak-hour resources required for monitoring.
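The loading step can be done with the built-in fn_trace_gettable function, which reads a trace file and returns its events as rows. A sketch of an overnight load might look like this; the file path and destination table name are illustrative:

```sql
-- Sketch: pull a day's trace file into a table for slicing and dicing.
-- fn_trace_gettable's second argument is the number of rollover files
-- to read; DEFAULT reads them all.
SELECT IDENTITY(int, 1, 1) AS RowNumber, *
INTO   TraceResults
FROM   ::fn_trace_gettable(N'C:\Traces\DailyTrace.trc', DEFAULT)
```

For multi-day trend reporting you would typically INSERT into a permanent table (with a date column) rather than SELECT INTO a fresh one each night.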
Although you could manually use SQL Profiler to create a trace at the same time every day and send its output to a file, it is difficult to be consistent, and this is a task that is easily automated. By using a stored procedure and a SQL Agent job that runs it, you can run the exact same trace at the same time of day, every day. This kind of data gathering provides consistent information for effective performance analysis.
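The job itself can be created with the SQL Agent stored procedures in msdb, or through Enterprise Manager's job wizard. As a rough sketch, assuming the trace proc is called dbo.StartDailyTrace (a hypothetical name), a daily 2:00 PM job might be set up like this:

```sql
-- Sketch: a SQL Agent job that starts the trace every afternoon.
USE msdb
GO
EXEC sp_add_job @job_name = N'Daily performance trace'

EXEC sp_add_jobstep @job_name = N'Daily performance trace',
     @step_name     = N'Start trace',
     @subsystem     = N'TSQL',
     @command       = N'EXEC dbo.StartDailyTrace',  -- hypothetical proc name
     @database_name = N'master'

EXEC sp_add_jobschedule @job_name = N'Daily performance trace',
     @name              = N'Every day at 2 PM',
     @freq_type         = 4,       -- daily
     @freq_interval     = 1,       -- every 1 day
     @active_start_time = 140000   -- HHMMSS

-- Target the job at the local server so SQL Agent will run it.
EXEC sp_add_jobserver @job_name = N'Daily performance trace'
```

A matching overnight job would run the load step that moves the trace file into the analysis table.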
The most tedious part of the process is creating the stored procedure that executes the trace. Fortunately, SQL Profiler will write almost all of it for you. With a few modifications, it is easy to turn its script into a stored procedure that can be run on a schedule.