Benchmarking can be a very revealing process. It can be used to isolate performance problems and drill down to specific bottlenecks. More importantly, it can be used to compare different servers in your environment, so you have an expectation of their performance before you put them to work servicing your application.
We deploy MySQL on a spectrum of different servers. Some may be servers we physically set up in a datacenter, while others are managed hosting servers, and still others are cloud hosted. Benchmarking can help give us a picture of what we're dealing with, be it an economy smart car, a speedy BMW, or a high-performance NASCAR racer.
Why Benchmark?
Simply put, we want to know what our server can handle. We want to get an idea of the IO performance, the CPU, and the overall database throughput. Simple queries run against the server can give us a sense of queries per second, or transactions per second if we want to get more sophisticated.
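Before any formal benchmarking, it can also be useful to see what a server is already doing. As a rough sketch, the cumulative Questions and Uptime status counters give an average queries-per-second since the last restart:
$ mysql -u root -e "SHOW GLOBAL STATUS LIKE 'Questions'; SHOW GLOBAL STATUS LIKE 'Uptime'"
Dividing Questions by Uptime gives that average. It is only a baseline, but it helps put the benchmark numbers below in context.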
Benchmarking Disk IO
On Linux systems, there is a very good tool for benchmarking disk IO called sysbench. Let's run through a simple example of installing sysbench and putting our server through its paces.
Install it from the software repository as follows:
$ yum install -y sysbench
First, you'll want to set up the test files that sysbench will read from and write to.
$ mkdir sysb
$ cd sysb
$ sysbench --test=fileio prepare
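One thing to watch: if the total size of the test files is smaller than the server's RAM, you may end up benchmarking the filesystem cache rather than the disks. As a sketch, you can make the file set larger at prepare time (--file-total-size is the legacy-style flag; pass the same value to the run command later):
$ sysbench --test=fileio --file-total-size=8G prepare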
Next, you'll want to run some tests. There are a lot of options to choose from; I would recommend rndrw (combined random reads and writes) and/or seqrewr (sequential rewrites) to get a view of how fast IO is overall.
For example, you can run the test as follows:
$ sysbench --test=fileio --file-test-mode=rndrw run
There are a lot of data points to look at here, but focus on the Mb/sec figure. This gives you a big-picture view of how fast you can do IO to the disk subsystem.
Operations performed:  6000 Read, 4000 Write, 12800 Other = 22800 Total
Read 93.75Mb  Written 62.5Mb  Total transferred 156.25Mb  (3.4871Mb/sec)
  223.17 Requests/sec executed

Test execution summary:
    total time:                          44.8083s
    total number of events:              10000
    total time taken by event execution: 38.8548
    per-request statistics:
         min:                                  0.02ms
         avg:                                  3.89ms
         max:                                269.86ms
         approx.  95 percentile:              12.54ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   38.8548/0.00
There are other, more granular tests you can do, controlling fsync behavior or doing async IO. Looking at the Mb/sec, though, will help you compare across different servers, or across RAID arrays at, say, level 5 versus level 10.
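As a rough sketch of those more granular options, using the same legacy-style flags as the run above (async mode generally requires libaio, and exact flag names can vary by sysbench version):
$ sysbench --test=fileio --file-test-mode=rndrw --file-fsync-freq=0 --file-io-mode=async run
Here --file-fsync-freq=0 turns off the periodic fsync calls, and --file-io-mode=async switches from synchronous to asynchronous IO.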
When you're done, be sure to clean up the test files:
$ sysbench --test=fileio cleanup
Benchmarking CPU
Sysbench can also be used to test CPU performance. This test is simpler, as it doesn't need to set up files and so forth. Run it as follows:
$ sysbench --test=cpu run
Maximum prime number checked in CPU test: 10000

Test execution summary:
    total time:                          35.4654s
    total number of events:              10000
    total time taken by event execution: 35.4475
    per-request statistics:
         min:                                  3.49ms
         avg:                                  3.54ms
         max:                                  9.04ms
         approx.  95 percentile:               3.50ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   35.4475/0.00
So if you want to compare across different servers, compare the average time, 3.54ms in the output above. Also, when running these benchmarks, be sure nothing else is generating load on the server; for the disk tests in particular, it is preferable to use a volume the operating system is not also using.
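If the default CPU run finishes too quickly to be meaningful on newer hardware, you can raise the prime limit and use more threads to exercise multiple cores. A sketch using the legacy-style flags (names may differ in newer sysbench releases):
$ sysbench --test=cpu --cpu-max-prime=20000 --num-threads=4 run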
Benchmarking Database Throughput
With MySQL 5.1 distributions, there is a tool included that can do very exhaustive database benchmarking. It's called mysqlslap.
First, check that you have it available on your server:
$ which mysqlslap
If you don't have it, you'll have to get it from the 5.1 source distribution. Don't worry, it's not as bad as it sounds. You can compile just the client tools (not the entire MySQL server), which is faster and less error-prone to boot.
Go to http://www.mysql.com/downloads/mysql/, get hold of a source tar.gz, and dig in. You may need to yum install the development tools and a few other packages if you don't have them already:
$ yum install glibc gcc libtool
Run configure and tell it not to build the MySQL server:
$ ./configure --without-server --disable-shared
$ make
If all goes well, you'll have mysqlslap at your disposal in the bin subdirectory.
The first thing you'll want to do is create a mysqlslap schema, as that is where the tool looks for objects. For my test below, I created a table "rtest" with a single column, a 64-character string. The query used in the benchmark below performs particularly badly, because ORDER BY RAND() forces MySQL to evaluate RAND() for every row and sort the whole table, which is a no-no.
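A minimal setup along those lines might look like the following. The column name c1 and the handful of sample rows are my own placeholders; the article only specifies a single 64-character string column, and you would want to load a realistic number of rows before benchmarking:
$ mysql -u root -e "CREATE DATABASE mysqlslap"
$ mysql -u root mysqlslap -e "CREATE TABLE rtest (c1 VARCHAR(64))"
$ mysql -u root mysqlslap -e "INSERT INTO rtest VALUES (REPEAT('a',64)), (REPEAT('b',64)), (REPEAT('c',64))"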
$ mysqlslap --user=root -i 5 -c 10 -q "select * from rtest order by rand() limit 1"
Benchmark
    Average number of seconds to run all queries: 7.694 seconds
    Minimum number of seconds to run all queries: 7.266 seconds
    Maximum number of seconds to run all queries: 8.337 seconds
    Number of clients running queries: 10
    Average number of queries per client: 1
Notice that we can specify -i for iterations, to repeat the test and get an average, and -c for the number of concurrent clients. Concurrency is important because it is contention for resources that typically slows down a running database.
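If you just want a quick throughput number without building your own schema and query, mysqlslap can also generate a test table and workload for you. A sketch using the long-form options from the 5.1-era tool (the values are purely illustrative):
$ mysqlslap --user=root --auto-generate-sql --concurrency=10 --iterations=5 --number-of-queries=1000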
Comparing the above on two different MySQL servers will give you a good overall picture of how fast they are. Note of course that you don't want to have anything else fighting for resources, so no other MySQL sessions, and preferably no other processes or applications running on this server either.
Conclusion
Benchmarking can be a very revealing process. It can be used to isolate performance problems and drill down to specific bottlenecks. More importantly, though, it can be used as a way to compare the different servers in your environment, so you have an expectation of their performance before you put them to work servicing your application.