A Perspective on Database Performance Tuning
April 16, 2010
Fundamentally, database performance tuning is done for two basic reasons: to reduce response time and to reduce resource usage, either or both of which can apply in any given situation. Julian Stuhler looks at database performance tuning, and why it remains one of the most important topics for any DBA, developer or systems administrator.
Last month, I covered IBM DB2 10 for z/OS: Justifying the Upgrade, and spoke briefly about the performance enhancements in that release that can help in justifying the upgrade to hard-nosed business execs looking for real return on investment for infrastructure projects.
In this article, I want to take a more general look at database performance tuning, and the reasons why it remains one of the most important topics for any DBA, developer or systems administrator. I'll also cover some golden rules for performance tuning, which tend to apply regardless of the specific platform or application you're responsible for.
Why Bother with Database Performance?
So, what's the big deal with database performance? Fundamentally, database performance tuning is done for two basic reasons, and either or both can apply for any given situation: to reduce response time, and to reduce resource usage.
Some Golden Rules of Database Performance
OK, so you're convinced of the benefits of tuning, but where do you start? Many people (including me) have built their careers around the dark art of database tuning, and there is an enormous amount of product- and platform-specific knowledge needed to eke the very best performance out of a given application. However, there are also some generic rules which apply to all tuning situations, and I'm always amazed at how often some or all of these are ignored.
1. Define your objectives. Tuning is a journey: how will you know you've arrived if you don't know your destination? Start by clearly defining a realistic objective for the exercise, such as "reduce average response time for 95% of my Order transactions to less than 1 second" or "increase average throughput from 100 to 150 transactions per second during peak hours".
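To make Rule #1 concrete, here is a minimal sketch (in Python, purely for illustration) of checking a response-time objective like the first example above against a set of measured transaction timings. The nearest-rank percentile method and the sample numbers are my own assumptions, not anything DB2-specific:

```python
def meets_objective(response_times_ms, target_ms=1000, percentile=0.95):
    """Check whether the given percentile of response times is under target."""
    ordered = sorted(response_times_ms)
    # Index of the percentile observation (nearest-rank method).
    idx = max(0, int(percentile * len(ordered)) - 1)
    return ordered[idx] <= target_ms

# 100 sample transaction timings: 95 fast ones plus 5 slow outliers.
samples = [300] * 95 + [2500] * 5
print(meets_objective(samples))  # 95th-percentile value is 300 ms -> True
```

The point is not the arithmetic but the shape: an objective you can express as a yes/no check against real measurements is one you can tell you have met.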
2. Know when to quit. Performance tuning can be both fun and addictive. You can often take a 10-second query and make it run in 2 seconds with a little basic tuning, and that feels good. A little more effort might bring that down to 1.5 seconds. Spending still more time might squeeze it down to 1 second, which would give you real bragging rights at the next DBA team meeting. But wait a moment: remember Rule #1? When was the last time you set out on a journey, arrived safely and in good time, but decided to continue driving until you were late for your appointment? Tune until your objective is met, then leave it alone: there will be plenty of other queries that need your attention!
3. Prevention is better than cure.
However valuable monitoring and tuning your production system is, you
should consider it to be a last resort. The later in the application life cycle
a performance problem is detected, the more expensive it is to fix. It is far
better to define your performance objectives up front (see Rule #1 again),
design the application and database with those objectives in mind, and test as
fully as possible during development to maximise the chances that those objectives
will be met. Of course, no testing can hope to fully simulate real production
conditions and some tuning will probably be necessary when the application
actually goes live, but catch the big issues early and you can be out
celebrating that first night rather than at your desk fixing major performance
problems.
4. Identify the culprit.
In today's complex client/server, SOA-enabled, multi-platform landscape, a
bewildering number of components could be involved in any given business
transaction. The user is experiencing a response time problem, but which of the
individual components is to blame? Don't automatically assume it's the database
and dive into a tuning exercise. Treat this like any good murder mystery, using
logic and perseverance to eliminate potential culprits until you have isolated
the component responsible. Sometimes that will indeed be the database, but
often it will be some other element (such as the network, the application
server, or the operating system).
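One way to play detective is simply to time each suspect in turn. The sketch below (Python, illustrative only) times a list of named stages standing in for the network, application server and database; the stage names and the simulated delays are invented for the example:

```python
import time

def profile_components(components):
    """Time each named stage of a request path to isolate the slow one.

    components: list of (name, callable) pairs -- hypothetical stand-ins
    for the real network hop, app server call, database query, and so on.
    """
    timings = {}
    for name, step in components:
        start = time.perf_counter()
        step()
        timings[name] = time.perf_counter() - start
    slowest = max(timings, key=timings.get)
    return slowest, timings

# Simulated stages: here the "database" stand-in happens to sleep longest.
stages = [
    ("network", lambda: time.sleep(0.01)),
    ("app server", lambda: time.sleep(0.02)),
    ("database", lambda: time.sleep(0.05)),
]
slowest, timings = profile_components(stages)
print(slowest)  # database
```

In real life the per-stage numbers come from application logs, network monitors and database traces rather than a stopwatch loop, but the elimination logic is the same.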
5. Change only one thing at a time.
You've analysed the situation and you have several potential solutions: perhaps
you could change the order of some columns in that index, or maybe move the
whole index to a different buffer pool. Decide on whats likely to give you the
biggest benefit, make that single change and re-measure. Implementing multiple
tuning actions at once makes it impossible to see the benefit of each one
individually: what if some are helping and others are making things worse?
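The discipline reads naturally as a loop: measure, apply one candidate change, re-measure, and keep or revert before touching anything else. A minimal sketch, with the tuning actions reduced to hypothetical apply/revert callables operating on a toy "system":

```python
def tune_one_at_a_time(measure, changes):
    """Apply candidate changes one at a time, keeping only those that help.

    measure: callable returning the current response time (lower is better).
    changes: list of (apply, revert) callable pairs -- hypothetical stand-ins
             for real tuning actions (index column reorder, buffer pool move).
    """
    best = measure()
    kept = []
    for apply_change, revert_change in changes:
        apply_change()
        after = measure()
        if after < best:      # this single change helped: keep it
            best = after
            kept.append(apply_change)
        else:                 # it didn't: undo it before trying the next one
            revert_change()
    return best, kept

# Toy system: one number standing in for measured response time.
state = {"latency": 10.0}
measure = lambda: state["latency"]

helpful = (lambda: state.update(latency=state["latency"] - 3),
           lambda: state.update(latency=state["latency"] + 3))
harmful = (lambda: state.update(latency=state["latency"] + 2),
           lambda: state.update(latency=state["latency"] - 2))

best, kept = tune_one_at_a_time(measure, [helpful, harmful])
print(best)  # 7.0 -- only the helpful change was kept
```

Because each change is measured in isolation, the harmful one is detected and reverted instead of silently cancelling out part of the gain.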
6. One accurate measurement is worth a thousand expert opinions. Also known as "measure twice, cut once", this is all about
making sure you understand the issue before trying to resolve it. Measurement,
via the many diagnostic and trace facilities provided by DB2 and other
databases, is key to understanding the problem and being able to formulate a
suitable tuning response.
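Even without a full trace, a repeatable timing harness beats a single eyeballed run. A minimal sketch in Python: `run_query` is a hypothetical callable wrapping whatever statement you are investigating, and the busy loop below merely stands in for a real database call:

```python
import statistics
import time

def time_query(run_query, repetitions=5):
    """Time a statement several times and report the median elapsed seconds.

    The median damps one-off noise such as a cold buffer pool on the
    first execution.
    """
    timings = []
    for _ in range(repetitions):
        start = time.perf_counter()
        run_query()
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)

# Stand-in workload: a short busy loop instead of a real database call.
elapsed = time_query(lambda: sum(range(100_000)))
print(f"median elapsed: {elapsed:.4f}s")
```

Run the same harness before and after a tuning change and you have a measurement, not an opinion.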
7. Remember Heisenberg's Uncertainty Principle. This can be stated as Δx · Δp ≥ h / 4π ... or more commonly "The observer
affects the observed". As per Rule #6, it's not unusual to have to switch on
non-standard performance traces in order to pin down the cause of a given
performance issue, but always be aware that the very fact that you're switching
on those traces is going to make performance even worse. Trace for the minimum
period necessary, against as small a subset of the workload as possible, using
the least expensive trace required to allow your analysis to proceed.
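You can even measure the observer itself, by timing the same workload with and without instrumentation. In this toy sketch the "trace" is just a naive per-call log append; real DB2 trace classes are far more elaborate, but the before/after comparison is the point:

```python
import statistics
import time

def median_elapsed(fn, reps=5):
    """Median wall-clock seconds over several runs of fn."""
    timings = []
    for _ in range(reps):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)

def traced(fn, log):
    """Wrap fn with naive per-call logging -- the 'observer' in question."""
    def wrapper():
        log.append(time.perf_counter())  # every record costs a little time
        fn()
    return wrapper

workload = lambda: sum(range(50_000))    # stand-in for the real workload
log = []
base = median_elapsed(workload)
with_trace = median_elapsed(traced(workload, log))
print(f"untraced: {base:.6f}s, traced: {with_trace:.6f}s")
```

If the traced figure is noticeably higher, that overhead is being paid by every user while the trace is active, which is exactly why you keep traces narrow and short.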
I hope these will be useful to you in your tuning endeavours. Next month I'll be returning to the subject of DB2 for z/OS Version 10 and covering some of the exciting new features and functions in that release in more detail. See you then!