How many times have you had more than one way to do something and wanted to determine which method was faster and/or which one used more resources? I am going to guess that most developers have run across this dilemma more than once. So how do you decide which method is faster, or which method requires more resources? In this two-part series, I will discuss some of the methods I use in Query Analyzer (QA) to help me decide which coding method might be faster and use fewer resources.
So when we talk about how fast a chunk of code runs, what do we really mean? Are we talking about how much CPU the code consumes to accomplish its task, or are we talking about the number of I/Os the code performs to get all the data needed to return the result set? Alternatively, are we talking about the elapsed time it takes the code to run? In my mind, elapsed time is really the yardstick to use for how fast a piece of code runs. Elapsed time is the time customers have to wait while a piece of code runs. However, you still need to consider the cost of the code, and that is where CPU and I/O come into play. I consider CPU and I/O to be the resources consumed when running code. This two-part series will look at techniques that can be used to identify the elapsed time and the resources used when you run a piece of code. This first article will discuss the methods I use to determine how fast code runs, or what I call the elapsed time of a query or set of T-SQL statements.
Elapsed Time
Elapsed time is the clock time it takes to run a piece of code. Basically, the elapsed time is the number of hours, minutes, seconds, and/or milliseconds it takes to run a chunk of code. Normally, when we talk about how fast some code runs, we are really talking about the overall elapsed time. Let's review a couple of different methods I use to measure the elapsed time of a query.
Prior to performing an
elapsed time benchmark, I first get my hands on the T-SQL code I want to
benchmark. I then paste the code into QA on a stand-alone SQL Server machine,
which is where I gather all my benchmark information. Using a stand-alone
machine isolates my testing from performance issues that might arise in a
multi-user environment. With QA, there are a couple of different ways to
measure the elapsed time of a chunk of code.
The first method I use to
get my initial benchmark for elapsed time is to use the Execution Time that is
displayed in QA. The screen print below shows where you can find QA’s
Execution Time display. I have circled the display in red.
Above, I executed a single SELECT statement 1,000,000 times. As you can see from the Execution Time display, which appears in the lower right-hand corner of the QA window, this particular set of T-SQL statements took 3 minutes and 54 seconds to process.
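The exact batch behind the screen print is not shown here, but a test along those lines might look something like the following minimal sketch, which simply executes a single SELECT statement 1,000,000 times inside a while loop (the Northwind Orders table and the order number 10248 are assumptions used only for illustration):

declare @OrderId int
declare @i int
set @i = 0
while @i < 1000000
begin
   -- assign the result to a variable so QA does not have to render 1,000,000 result sets
   select @OrderId=OrderID from Northwind.dbo.Orders where OrderID = 10248
   set @i = @i + 1
end

Running a sketch like this in QA and watching the Execution Time display gives you the total elapsed time for the whole batch.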
Now, sometimes you need more than just the total execution time for a T-SQL batch, or you would like to see the elapsed time in milliseconds. Suppose you have a T-SQL batch that contains a number of different T-SQL statements and you would like to know how long each T-SQL statement runs, or how long a subset of the statements runs. In this case, you would not be able to use the QA Execution Time display, since it reports the time for the entire batch. So in order to display the execution time for each command or subset of commands, you need to use a different method.
One of the methods I use to determine how long a single T-SQL statement, or a series of statements in a batch, takes to run is to capture the start and end time for each command or series of statements I am interested in measuring. This allows me to calculate the elapsed time for a single command or a group of T-SQL statements. To demonstrate what I am talking about, here is a chunk of code that displays the elapsed time for two different while loops.
declare @output varchar(100)
declare @i int
declare @st datetime

set @i = 0
set @st = getdate()   -- capture the start time of the first loop
while @i < 2000
begin
   select @output=OrderId from Northwind.dbo.Orders
   set @i = @i + 1
end
print 'First batch completed in: ' +
      rtrim(cast(datediff(ss,@st,getdate()) as char(10))) +
      ' seconds!'

set @i = 0
set @st = getdate()   -- reset the start time for the second loop
while @i < 4000
begin
   select @output=OrderId from Northwind.dbo.[Order Details]
   set @i = @i + 1
end
print 'Second batch completed in: ' +
      rtrim(cast(datediff(ss,@st,getdate()) as char(10))) +
      ' seconds!'
Below are the elapsed times calculated for each while loop when I ran this code against my server. These elapsed times were produced by the calculations in the PRINT statements in the code above.
First batch completed in: 3 seconds!
Second batch completed in: 17 seconds!
By looking at the code, you can see I am using a local variable, @st, to capture the start time of each while loop. At the completion of each while loop, I calculate the time it took to process the entire loop and display the calculated elapsed time with a PRINT statement. Using this method allows me to determine the elapsed time for each while loop. As you can see, the first while loop completed faster than the second one.
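If you need finer granularity than whole seconds, the same start-time technique can report milliseconds by changing the DATEDIFF datepart from ss to ms. Here is a minimal sketch of that variation; the COUNT(*) query is just a placeholder for whatever statement or group of statements you want to time:

declare @start datetime
declare @elapsed_ms int
set @start = getdate()

-- the statement (or statements) being timed goes here
select count(*) from Northwind.dbo.[Order Details]

-- datediff(ms, ...) returns the elapsed time in milliseconds
set @elapsed_ms = datediff(ms, @start, getdate())
print 'Statement completed in: ' + rtrim(cast(@elapsed_ms as char(10))) + ' milliseconds!'

Keep in mind that GETDATE() is only accurate to roughly 3 milliseconds, so very short statements may still report an elapsed time of 0.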