Benchmarking Performance of a Query – Part 2: CPU and I/O

This is the second article
in a series discussing benchmarking T-SQL query performance. The first article
in this series discussed different methods of capturing the elapsed time of a
T-SQL batch, pieces of a batch, or a single T-SQL statement.
This article will discuss how to determine the amount of CPU and I/O used when
a given T-SQL statement, or a series of statements, is executed.

When you are tuning a query,
you need a way to measure whether you are making improvements. You can use
CPU and I/O as measurement tools to determine whether one query is faster
than another, or whether one database design is better than another.
In this article, I will show you how to measure CPU and I/O so you can determine
whether you are making improvements when tuning your queries.

Measuring CPU

There are a number of
different methods to determine how much CPU is consumed by your query. I am
going to show you two different methods you can use from within Query
Analyzer.

The first method, “SET
STATISTICS TIME ON,” was briefly discussed in my first article. This method is
useful for getting the CPU time of a single statement,
but when you process millions of commands, you also get millions of lines of CPU
time statistics. If you turn on the statistics gathering process, remember that you
will need to issue the “SET STATISTICS TIME OFF” command to turn off the statistics
gathering process. Since this method produces lots of output when executing
T-SQL batches, I use it only when I have a single T-SQL statement whose CPU
usage I am interested in measuring.
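
To illustrate the pattern, here is a minimal sketch that wraps a single SELECT
statement with the time statistics commands. It assumes the Northwind sample
database is installed; the CPU and elapsed times appear on the Messages tab in
Query Analyzer.


USE Northwind
-- Turn on reporting of CPU and elapsed time for this connection
SET STATISTICS TIME ON
-- The statement being measured
SELECT * FROM Customers WHERE Country = 'USA'
-- Turn the statistics gathering process back off
SET STATISTICS TIME OFF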

The other method is to use the
@@CPU_BUSY system variable to calculate the CPU resources consumed. Since @@CPU_BUSY
is a counter that contains the number of milliseconds that SQL Server has used
since it was started, I only use this method on a stand-alone machine, such as a
laptop or desktop machine. If you use @@CPU_BUSY on a multi-user machine,
then the @@CPU_BUSY variable will reflect CPU used by all users, not just the
T-SQL query you are trying to benchmark. Here is an example of some code that
calculates the amount of CPU used by two different methods of padding a
number with leading zeroes. By using this example, you can determine which
method uses the least CPU.


DECLARE @I INT
DECLARE @C CHAR(8)
DECLARE @CPU_START INT
DECLARE @X INT
SET @C = ''
SET @I = 123
SET @X = 0
-- Capture the CPU counter before the first method starts
SET @CPU_START = @@CPU_BUSY
WHILE @X < 1000000
BEGIN
   -- First method: add a large constant and keep the rightmost characters
   SET @C = RIGHT(@I + 1000000000000000000, DATALENGTH(@C))
   SET @X = @X + 1
END
-- Report the growth in the @@CPU_BUSY counter for the first method
PRINT 'FIRST METHOD COMPLETED IN: ' +
   RTRIM(CAST(@@CPU_BUSY - @CPU_START AS CHAR(10))) +
   ' CPU MILLISECONDS!'
SET @C = ''
SET @I = 123
SET @X = 0
-- Capture the CPU counter again before the second method starts
SET @CPU_START = @@CPU_BUSY
WHILE @X < 1000000
BEGIN
   -- Second method: build the leading zeroes with REPLICATE
   SET @C = REPLICATE('0', DATALENGTH(@C) -
      DATALENGTH(@I) + 1) + CAST(@I AS CHAR)
   SET @X = @X + 1
END
-- Report the growth in the @@CPU_BUSY counter for the second method
PRINT 'SECOND METHOD COMPLETED IN: ' +
   RTRIM(CAST(@@CPU_BUSY - @CPU_START AS CHAR(10))) +
   ' CPU MILLISECONDS!'

When I ran this code on my machine,
I found that the first method of padding with zeroes consumed the most CPU
cycles. Therefore, based on the test I performed, I concluded that the second
method was the more efficient way to pad a number with leading zeroes.

Measuring I/O

There is another component that
you may want to review when looking at the performance of a query: I/O.
I/O is typically the single most costly resource consumed when
processing a query, so if you can significantly reduce the number of I/O’s, you
are bound to speed up the query.

To display the number of I/O’s
required to resolve a query, you can use the “SET STATISTICS IO ON” command.
This command will turn on the I/O statistics gathering process. Like the other
“SET STATISTICS” commands, this command turns on the statistics gathering
process, and if you want to turn off I/O statistics gathering you will need to
issue the “SET STATISTICS IO OFF” command.
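
As a quick illustration of the pattern, the sketch below wraps a single query
with the I/O statistics commands. It assumes the Northwind sample database is
installed; the read counts appear on the Messages tab.


use Northwind
-- turn on reporting of logical, physical, and read-ahead reads
set statistics io on
-- the query being measured
select * from Orders where CustomerID = 'ALFKI'
-- turn the I/O statistics gathering process back off
set statistics io off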

To show how I/O statistics can
help you identify the amount of resources a given query
uses, I will walk through a simple row search example. Since the Northwind and pubs
databases do not have any tables that are very large, I will first need to
build a sample table that spans a number of pages. Here is the code I used to
build my sample IOTest table:


set nocount on
-- build a sample table that spans a number of pages
create table IOTest (id int, code int, description varchar(70))
declare @i int
declare @j int
set @i = 0
while @i < 100
begin
   set @i = @i + 1
   set @j = 0
   while @j < 100
   begin
      set @j = @j + 1
      -- insert 100 rows for each id value, 10,000 rows in all
      insert into IOTest
         values(@i,
                @j,
                'This is a dummy description to take up some space')
   end
end

Now that we have a large
test table, let’s run the following code. This code will turn on the I/O
statistics gathering process and then search for a specific record. It
will show how having an appropriate index reduces the I/O’s required to
resolve a query.


set nocount on
set statistics io on
-- clear the buffer cache so the first SELECT starts with no cached pages
dbcc dropcleanbuffers
select * from IOTest where id = 50 and code = 75
-- clear the buffer cache again before building the clustered index
dbcc dropcleanbuffers
create clustered index cdi on IOTest (id)
-- clear the buffer cache one more time before repeating the SELECT
dbcc dropcleanbuffers
select * from IOTest where id = 50 and code = 75

Here is the output from this
code:


DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Table 'IOTest'. Scan count 1, logical reads 95, physical reads 0, read-ahead reads 96.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
Table 'IOTest'. Scan count 1, logical reads 95, physical reads 0, read-ahead reads 96.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Table 'IOTest'. Scan count 1, logical reads 3, physical reads 1, read-ahead reads 2.

Note that the I/O statistics
are reported three different times. The first set was for the first SELECT
statement, the second was for the CREATE CLUSTERED INDEX statement, and the
last was for the final SELECT statement. The first SELECT
statement took 95 logical read I/O’s to find the specific row that had an id of
50 and a code of 75. It took the same number of I/O’s to build the clustered
index. If you look at the I/O statistics for the second SELECT statement, you will
notice that by creating a clustered index, the same SELECT statement only took 3
logical reads to resolve the specific IOTest table record. The “DBCC
DROPCLEANBUFFERS” commands used in the above script clear the buffer cache.
By clearing the buffer cache, SQL Server will need to repopulate the buffers the
next time the same pages are needed. Because each SELECT and CREATE CLUSTERED
INDEX command starts with a clean buffer cache, the I/O information reported
for each one is accurate.
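
One detail worth remembering is that DBCC DROPCLEANBUFFERS only removes clean
(unmodified) pages from the buffer cache. If you have just changed data, you can
issue a CHECKPOINT first so the dirty pages are written to disk and become clean
buffers that can then be dropped. A minimal sketch of that pattern, using the
IOTest table from above, looks like this:


-- write any dirty pages to disk so they become clean buffers
checkpoint
-- remove all clean pages from the buffer cache
dbcc dropcleanbuffers
-- this query now starts from a cold buffer cache
select * from IOTest where id = 50 and code = 75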

Conclusion

I hope this article gave you
a few ideas on how you can measure the CPU and I/O resources used by your
queries. By using these techniques and trying different queries and/or database designs,
you can measure the difference in resources consumed, which will help guide you
toward more efficient queries and an optimized database design.
