How to Build a Profile Script to Monitor SQL Server off Hours

SQL Server Profiler is a valuable tool for monitoring
activity against your SQL Server environment. With it, you can monitor
numerous events, depending on what kind of performance problem you are trying to
track down. The tool is normally used in real time, with the events being
monitored displayed on your computer screen. However, displaying the
information in real time doesn’t work for every situation you might be
monitoring. What if you have a performance problem that routinely occurs in
the middle of the night, when you are normally sleeping? I’m sure most DBAs
would rather sleep than stay at work to monitor a performance problem
in the middle of the night. Well, don’t lose any sleep; read the rest of this
article and find out how to build a Profiler script that can be run off hours.

Scripting a Profiler Trace

There are two key pieces to running Profiler traces at
night. The first is being able to script a Profiler trace definition. The
second is sending the events being traced some place other
than a computer monitor. Let’s look at how you can script a trace and log the
output to a location other than the screen.

There are two methods to script a Profiler trace. The first
method is to dig into Books Online and determine all the stored procedures
(SPs) necessary to identify the events you want to trace and to start/stop
your trace; but this is the hard way to go about scripting a trace.
Microsoft has made it easy to script a Profiler trace by providing a scripting
option built into the GUI Profiler tool.

Microsoft has provided two different options for saving
Profiler trace output. Output can go to a log file or a SQL Server table. The
performance overhead of saving your trace to a file is much less than saving
trace output to a SQL Server table. The advantage of saving trace information
to a table is you can then use T-SQL commands to analyze your trace data. You
need to decide which output format works for you.

Before I can demo how to use Profiler to create a Profiler
script, I need to determine which trace events I want to capture. For the
purposes of my demo below, I’m only going to log the start and stop events for
T-SQL statements and the executions of SPs. As stated above, I am going to use
the Profiler GUI to create my Profiler trace script. To start Profiler, I click
the “Start” button, hover over the “All Programs” item, then over the
“SQL Server 2005” item, then over the “Performance Tools” icon in the drop-down,
and finally click the “SQL Server Profiler”
item. When I do this, the following window is displayed:

To build my trace definition and create my Profiler trace
script, I click on the “File” menu item, select the “New Trace” item and then
connect to the SQL Server I want to monitor. Doing this will bring up the
following screen:

On this screen, I can identify the name of my trace, the
template I want to use, where I want to save the trace output, and the time I want
my trace to stop. For my demo I will be creating a trace named
“AdventureWorks Performance problem”, which saves the Profiler trace events to a
file named “c:\temp\AW_problem.trc”. Once I fill out the above screen to meet
this requirement, my screen will look like this:

Note that I have not checked the “Server processes trace
data” option. When this option is checked, SQL Server will make sure every
event is written to your trace file, even if doing so degrades performance. I also
enabled the trace stop time but left the default value, which, as you can see,
is not in the middle of the night. I will discuss setting an appropriate
stop time later in this article.

If I wanted to save my trace to a SQL Server database
instead of a file, I would use the “Save to table” checkbox. Below is a
screenshot where I directed my output to a table instead of a log
file:

Here I have identified that my trace output will be saved
to the PerformanceProblem table in the AdventureWorks database. I could use
either one of these options to capture my trace information. The save-to-file
option incurs less overhead, so I normally use that method, and it is the
option I will use for my demo below.
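The payoff of the save-to-table option is that the captured events can be analyzed with ordinary T-SQL. As a sketch, assuming the trace output was saved to the PerformanceProblem table shown above, you could summarize resource consumption per database like this:

```sql
-- Summarize resource consumption per database from the saved trace table
-- (assumes trace output was saved to AdventureWorks.dbo.PerformanceProblem)
select DatabaseName,
       count(*)      as Events,
       sum(CPU)      as TotalCPU,
       sum(Reads)    as TotalReads,
       sum(Writes)   as TotalWrites,
       sum(Duration) as TotalDuration   -- microseconds in SQL Server 2005
from   AdventureWorks.dbo.PerformanceProblem
group by DatabaseName
order by TotalDuration desc
```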

Now that I’ve identified where the Profiler events will go, I
need to identify the events I want to monitor. I do that by using the “Events
Selection” tab on the “Trace Properties” window. Here is a screenshot of the
events I selected:

All I want to capture for my trace are the SP and
statement starting and completed events. For each of these events, I capture the
TextData column so I will be able to determine what batch or statement is being
executed. For each SP or statement, I want to know the CPU, Reads, Writes,
and Duration information, so I can determine the amount of resources each batch
or statement consumed. I use the StartTime and EndTime to establish the
actual time frame in which each SP and statement executed. Lastly, I want to
know the DatabaseName so I know which database to associate with each SP or
statement event. Note that if an application uses fully qualified names in
its queries, the DatabaseName might be misleading, because it contains
the name of the database from which the SP or statement was executed, not the
database the SP or statement actually runs against. I could have filtered
my Profiler trace down to a single database, but remember, I’m setting up a
trace definition to resolve a performance problem off hours. If the
performance problem is caused by another application consuming resources at
the same time as my poorly performing application, that would
be valuable information to know. This is why, initially, I am not going to
filter my trace to just log events from the AdventureWorks database.
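If you later decide you do want to narrow the trace to a single database, a filter can be added to the generated script with the sp_trace_setfilter stored procedure. A minimal sketch, assuming the @TraceID variable from the generated script and run before the trace is started:

```sql
-- Filter the trace to events whose DatabaseName (column 35) equals AdventureWorks
-- Parameters: trace id, column id, logical operator (0 = AND),
-- comparison operator (0 = equal), value
exec sp_trace_setfilter @TraceID, 35, 0, 0, N'AdventureWorks'
```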

My goal in going through the Profiler GUI is not to start a
trace, but to generate a Profiler script of the trace definition I just
specified, which I can then use to start my trace later. After my trace is started, I
will immediately stop the trace and create the script. So, on the above screen
I click the “Run” button. This brings up the following screen:

As I already stated, I will stop my trace as soon as it
starts. To stop my trace, I can either click the red square icon on the
toolbar, or go to the “File” menu and select the “Stop Trace” option.

Once my trace is stopped, I can create my Profiler trace
definition by using the “Export” function. In the screenshot below, you can
see how I used the File menu to navigate to the “Export” item, then selected
“Script Trace Definition”, so I can create my Profiler trace script:

After I click the “For SQL Server 2005…” item, I am
prompted for a place to store the T-SQL script for this trace. For my
particular trace above, the following script was generated. Look
closely at this script, because a few things will need to be changed before I
run it off hours:


/****************************************************/
/* Created by: SQL Server Profiler 2005 */
/* Date: 10/17/2007 05:46:05 PM */
/****************************************************/

-- Create a Queue
declare @rc int
declare @TraceID int
declare @maxfilesize bigint
declare @DateTime datetime

set @DateTime = '2007-10-17 18:32:58.000'
set @maxfilesize = 5

-- Please replace the text InsertFileNameHere, with an appropriate
-- filename prefixed by a path, e.g., c:\MyFolder\MyTrace. The .trc extension
-- will be appended to the filename automatically. If you are writing from
-- remote server to local drive, please use UNC path and make sure server has
-- write access to your network share

exec @rc = sp_trace_create @TraceID output, 0, N'InsertFileNameHere', @maxfilesize, @DateTime
if (@rc != 0) goto error

-- Client side File and Table cannot be scripted

-- Set the events
declare @on bit
set @on = 1
exec sp_trace_setevent @TraceID, 45, 16, @on
exec sp_trace_setevent @TraceID, 45, 1, @on
exec sp_trace_setevent @TraceID, 45, 9, @on
exec sp_trace_setevent @TraceID, 45, 17, @on
exec sp_trace_setevent @TraceID, 45, 18, @on
exec sp_trace_setevent @TraceID, 45, 35, @on
exec sp_trace_setevent @TraceID, 45, 12, @on
exec sp_trace_setevent @TraceID, 45, 13, @on
exec sp_trace_setevent @TraceID, 45, 14, @on
exec sp_trace_setevent @TraceID, 45, 15, @on
exec sp_trace_setevent @TraceID, 44, 1, @on
exec sp_trace_setevent @TraceID, 44, 9, @on
exec sp_trace_setevent @TraceID, 44, 35, @on
exec sp_trace_setevent @TraceID, 44, 12, @on
exec sp_trace_setevent @TraceID, 44, 14, @on
exec sp_trace_setevent @TraceID, 41, 15, @on
exec sp_trace_setevent @TraceID, 41, 16, @on
exec sp_trace_setevent @TraceID, 41, 1, @on
exec sp_trace_setevent @TraceID, 41, 9, @on
exec sp_trace_setevent @TraceID, 41, 17, @on
exec sp_trace_setevent @TraceID, 41, 18, @on
exec sp_trace_setevent @TraceID, 41, 35, @on
exec sp_trace_setevent @TraceID, 41, 12, @on
exec sp_trace_setevent @TraceID, 41, 13, @on
exec sp_trace_setevent @TraceID, 41, 14, @on
exec sp_trace_setevent @TraceID, 166, 1, @on
exec sp_trace_setevent @TraceID, 166, 9, @on
exec sp_trace_setevent @TraceID, 166, 35, @on
exec sp_trace_setevent @TraceID, 166, 12, @on
exec sp_trace_setevent @TraceID, 166, 14, @on

-- Set the Filters
declare @intfilter int
declare @bigintfilter bigint

-- Set the trace status to start
exec sp_trace_setstatus @TraceID, 1

-- display trace id for future references
select TraceID=@TraceID
goto finish

error:
select ErrorCode=@rc

finish:
go

If I try to run this script as generated, it will fail, because the scripting
process doesn’t exactly mirror the GUI configuration of my trace. So before I
get into how to run this script off hours, let me review the different pieces of
this script in a little more detail. If you look at this trace definition
script closely, you will notice that there are three different stored
procedures that are executed: sp_trace_create, sp_trace_setevent, and
sp_trace_setstatus.

The “sp_trace_create” SP is used to define the trace to SQL
Server. When this script was created, it didn’t carry over all the
options that I specified in the GUI above, like the output file name and the
file rollover option. In addition, the stop time isn’t in the middle of the night,
which isn’t what I want. Therefore, to make this script mirror my trace specification,
I will need to change the “sp_trace_create” call to look like this:


set @DateTime = '2007-10-18 02:00:00.000' -- set an appropriate stop time
set @maxfilesize = 5
exec @rc = sp_trace_create @TraceID output, 2, N'C:\temp\AW_problem', @maxfilesize, @DateTime

Here I have modified the original script code by:

  • Identifying the time at which Profiler will stop its trace, by
    modifying the @DateTime variable value to 2 AM.
  • Setting the TRACE_FILE_ROLLOVER option (the second parameter) to 2, which means that when my
    trace file reaches the maximum file size of 5 MB, Profiler will close the existing
    file and create a new one.
  • Specifying an output file of “C:\temp\AW_problem”.

A note about the trace output file: if the file you
specify already exists, the “sp_trace_create”
execution will fail, so you need to specify a file that doesn’t exist. Since I
ran the GUI to create my Profiler trace script, the above file already exists, and
I will need to delete it prior to running my script. Also note that “sp_trace_create”
appends a “.trc” extension to the output file you specify. This is
why I left off the “.trc” extension when I specified the output file name.
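One way to remove a leftover trace file from within T-SQL is xp_cmdshell. This is just a sketch; xp_cmdshell is disabled by default in SQL Server 2005 and must be enabled with sp_configure, so deleting the file by hand works just as well:

```sql
-- Delete any leftover trace files (including rollover files) before
-- calling sp_trace_create (requires the xp_cmdshell option to be enabled)
exec master.dbo.xp_cmdshell 'del c:\temp\AW_problem*.trc', no_output
```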

The “sp_trace_setevent” SP calls in the above script
are used to identify the events and columns I want to trace. In my case, I
want to trace 4 different events and a number of columns for each. You should
not have to modify this section of code in a generated Profiler script, unless
you want to include or exclude events and columns after the script is generated.

The “sp_trace_setstatus” SP is used to change the
status of a trace. There are 3 different statuses that can be set using this
SP. You can “start” a trace that is not running, you can “stop” a running
trace, or you can “close and delete” a trace that is not running. The script
that is generated from the Profiler GUI will automatically start the trace. Depending
on how you are going to launch this script, you may or may not need to change
this option. For my demo, I will not need to change the parameters for this
SP.
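For reference, the three status values look like this in practice. A sketch, assuming a trace ID of 2 was returned when the trace was created:

```sql
-- Status values for sp_trace_setstatus:
--   0 = stop a running trace
--   1 = start a trace that is not running
--   2 = close and delete a trace definition (the trace must be stopped first)
exec sp_trace_setstatus 2, 0   -- stop trace 2
exec sp_trace_setstatus 2, 2   -- close and delete trace 2

-- You can list the traces defined on the server to find a trace ID:
select * from sys.traces
```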

Running a Profiler Trace off Hours

Now that I have my Profiler trace script, I need to
set up a process that will allow my script to start collecting data at a particular
time in the middle of the night. For this example, let’s say I want
my script to run at 1 AM tomorrow morning, and I want it to run for one hour. To
do this, I create a SQL Server Agent job that starts at 1 AM and runs the above script. I just need to remember to set the file name and stop time (@DateTime)
appropriately in the script within the job step. In my case, I set the stop
time to 2 AM.
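Rather than hard-coding the stop date, you can compute it when the job step runs, so the same script works night after night. A sketch replacing the set @DateTime line in the trace script:

```sql
-- Stop the trace one hour after the job step starts it
declare @DateTime datetime
set @DateTime = dateadd(hour, 1, getdate())
```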

Once my script is ready, I place it in a SQL
Server Agent job, and then schedule the job to run at 1 AM. This allows me to go home knowing that SQL Server Agent will do my Profiler
monitoring automatically off hours while I sleep.
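The Agent job itself can also be created in T-SQL with the msdb stored procedures. A minimal sketch; the job, step, and schedule names are my own examples, and @command would hold the full modified trace script, abbreviated here:

```sql
use msdb
go
-- Create a one-time job that runs the trace script at 1 AM
exec sp_add_job @job_name = N'Off Hours Profiler Trace'
exec sp_add_jobstep
     @job_name      = N'Off Hours Profiler Trace',
     @step_name     = N'Start trace',
     @subsystem     = N'TSQL',
     @command       = N'/* paste the modified trace script here */',
     @database_name = N'master'
exec sp_add_jobschedule
     @job_name  = N'Off Hours Profiler Trace',
     @name      = N'1 AM tonight',
     @freq_type = 1,                -- run once
     @active_start_time = 10000     -- 01:00:00, encoded as HHMMSS
exec sp_add_jobserver @job_name = N'Off Hours Profiler Trace'
```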

Viewing the Saved Trace Data

Any time after my SQL Agent job runs and the trace
stops, I can view the Profiler trace information that was collected. To view
the trace, I start Profiler, click the “File” menu, move the mouse
over the “Open” item, and select “Trace File” from the drop-down
menu. This displays a file open dialog that allows me to browse to
the location of the off-hours trace file my SQL Server Agent job
created, as identified in my Profiler script.
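You don’t have to use the Profiler GUI to read the file; the saved trace can also be pulled into T-SQL with the built-in fn_trace_gettable function. A sketch, assuming the file name from the script above:

```sql
-- Read the saved trace file (and any rollover files) as a rowset
-- and list the longest-running statements captured overnight
select top 10
       TextData, Duration, CPU, Reads, Writes, StartTime, DatabaseName
from   sys.fn_trace_gettable('c:\temp\AW_problem.trc', default)
where  TextData is not null
order by Duration desc
```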

Conclusion

If you have performance problems off hours, do you
really want to come in to diagnose what is happening on your SQL Server
machine? Generating a Profiler script and running it off hours allows you to
capture what is happening inside of SQL Server without actively monitoring SQL
Server from your desktop. Don’t lose sleep the next time you want to monitor something
off hours; instead, script out a Profiler trace, and launch the trace using a
SQL Agent job.

Gregory A. Larsen is a DBA at Washington State Department of Health (DOH). Greg is responsible for maintaining SQL Server and other database management software. Greg works with customers and developers to design and implement database changes, and solve database/application related problems. Greg builds homegrown solutions to simplify and streamline common database management tasks, such as capacity management.
