While the performance of a mission-critical application sometimes takes center stage, IT staff are always aware that the performance of the overall system affects all applications. Database management systems, operating system software, physical data storage and retrieval, and backup and recovery operations all take part in providing a solid infrastructure for applications. For many customer-facing applications, such as online transaction processing, the database management system is the key component.
DB2 systems tuning requires higher-level DBA skills, teamwork, and coordination with other areas such as operating systems support staff. Some objectives to keep in mind are:
- Ease of cost justification. Task time and effort (and therefore cost) should be easy to estimate; results should be quantifiable.
- Little conceptual training required. No in-depth DB2 knowledge should be involved in implementing the tuning tactics.
- Short learning curve. DBAs shouldn’t have to spend a lot of time learning new functions and features or attending training.
- The presence of heuristics or best practices. There should be simple, straightforward methods of implementing tasks, including standards and best practices.
With these objectives in mind, we take an in-depth look at the following three tuning strategies:
- Resource constraint tuning is an analysis of possible trade-offs among CPU, elapsed time, I/O, memory, network traffic, and other resources.
- Object tuning concentrates on the general definitions and configurations of tables, indexes, stored procedures, and other database objects.
- System tuning deals with the DB2 DBMS infrastructure that supports all of the above.
Resource Constraint Tuning
There are always perceived resource constraints, so there are always opportunities for resource constraint tuning. Most of this tuning revolves around “trading” short-term constraints for other resource availability. The usual trade-offs are among CPU, elapsed time, I/O, and memory usage.
What you analyze: Memory usage in the DB2 virtual pools, CPU usage, the I/O subsystem, application throughput, and data availability.
Using what tools: The primary tuning tools are the performance reports available from SMF records, particularly the Accounting Report. You also may use the Resource Management Facility (RMF), but that tool requires more expertise.
Where to look for the largest productivity gains: This depends on your particular resource constraints. A few common trade-offs include:
- CPU for disk storage. Disk allocations for large tablespace objects can be reduced by implementing data compression, specified at tablespace creation with the COMPRESS YES parameter. Most character data and some numeric data can be stored in a compressed format; this shortens table rows. The trade-off is that some additional CPU may be consumed compressing and decompressing the data; the exact trade-off depends on your CPU configuration.
- Disk storage for elapsed time. It may be possible to shorten the execution elapsed times of SQL statements by adding indexes. One way is to specify an index containing sufficient table columns to allow “index-only” access paths for some queries. Of course, the additional indexes will require more disk space. They also may result in longer-running Reorg and Load utilities and could affect any data recovery situation.
- Memory for I/Os. Allocating additional memory to the DB2 subsystem allows for storing more application data simultaneously in memory. This reduces the frequency of writing updated pages to disk, thus reducing I/Os and also the CPU used to execute these I/Os.
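The first two trade-offs above might be implemented with DDL along these lines. This is only a sketch: the database, table, and index names are invented for illustration, and the exact CREATE TABLESPACE options your shop uses will differ.

```sql
-- Trade CPU for disk: store rows compressed; CPU is spent
-- compressing on insert and decompressing on read.
CREATE TABLESPACE ORDERTS
  IN ORDERDB
  COMPRESS YES;

-- Trade disk for elapsed time: an index carrying enough columns
-- that some queries never touch the data pages ("index-only").
CREATE INDEX ORDIX1
  ON ORDERS (CUST_ID, ORDER_DATE, STATUS);

-- A query referencing only indexed columns is a candidate for an
-- index-only access path (INDEXONLY = 'Y' in the plan table):
SELECT ORDER_DATE, STATUS
  FROM ORDERS
 WHERE CUST_ID = ?;
```

Note the cost of the second trade-off: every insert, update, and delete against ORDERS now maintains the extra index as well.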
These examples of resource constraint trade-offs are generic; they may not work in some situations and may actually hurt performance. Nothing substitutes for a complete, careful analysis of the costs and benefits of such changes.
First steps: Generate resource-related performance reports from SMF records, particularly Top-n reports of CPU consumed, synchronous I/O waits, and elapsed times. Choose what window (e.g., Top-20, Top-50, etc.), what environment (e.g., development, test, etc.) and what intervals (e.g., daily, weekly) to report. Concentrate on studying the worst offenders. Remember that some long-running or resource-intensive processes—such as those that process large amounts of data or do extensive data analysis—won’t benefit from tuning. After you choose some resource “offenders,” direct your analysis toward possible resource trading as previously described.
Object Tuning
Object tuning involves one or more of the following: the enterprise data model, DB2 object creation parameter settings, standards for executable code, and options for enforcing business and data integrity rules. Here, the focus is on object definitions and data availability. This requires an in-depth knowledge of the various parameters and settings that control how and where objects (e.g., tablespace partitions, indexes, stored procedures, etc.) are created.
What you analyze: A typical list would include index management, tablespace partitioning, object configuration parameters (e.g., DASD allocations, embedded free space, etc.), data set placement, data modeling, and referential integrity.
Using what tools: For object tuning, there are few no-cost tools. Some software vendors provide “advisors” that make recommendations on objects, and some data modeling tools can be configured to generate Data Definition Language (DDL) for objects that follow certain standards. After that, the DBA is probably reduced to using whatever tools (or raw SQL statements) are available that can query the DB2 catalog tables to get object definitions.
Where to look for the largest productivity gains: Few specific tactics apply to all IT environments. Much depends on your standards and how consistently they’ve been applied over time. Many object-related tuning issues can be traced to one or more of the following:
- Poor choice of clustering index. Often, the data architect must create a database design without knowledge of the SQL to be used against the tables. The result is indexes that support only the primary key and uniqueness constraints, without any consideration for query performance. Tables with a single index are considered to be clustered on the keys of that index, with some exceptions. Clustering by primary key isn’t always the best design for performance. To learn more, see the Performance Monitoring and Tuning chapters in the DB2 Administration Guide.
- Poor choice of tablespace partitioning scheme. Many data architects don’t consider physical partitioning schemes because they’re physical, not logical. However, physical partitioning can be used in many instances to increase performance. A few examples: 1) Partition by date can make purging of old or unneeded data a simple matter of emptying a partition rather than executing a long row-by-row purge process; 2) Utility processing, where reorgs and recovery can occur on a partition basis; and 3) Partition by one key, cluster by another; this is table-controlled partitioning, introduced in DB2 V8 (see the manuals for specifics).
- Poor choice of indexes to support queries. Often, the data architect or DBA will create indexes they feel will provide good performance. However, without specific measurements of what indexes are used and how often, it’s common to end up with either too many indexes unrelated to specific performance requirements or unused indexes. For example, to take advantage of the new fast traversal block (FTB) index processing in DB2 V12, indexes must be unique with a key size of 64 bytes or less.
- Lack of resource consumption and resource cost metrics. DBAs sometimes spend time and resources tuning something that ends up “better” but with no way to show that they tuned the right thing.
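To make the first three choices concrete, here is a hedged DDL sketch. All names are hypothetical, and the partition boundaries are purely illustrative:

```sql
-- Table-controlled range partitioning by date (DB2 V8 and later):
-- old data can be purged by emptying a partition.
CREATE TABLE SALES
  (SALE_ID   BIGINT        NOT NULL,
   SALE_DATE DATE          NOT NULL,
   AMOUNT    DECIMAL(11,2))
  PARTITION BY RANGE (SALE_DATE)
  (PARTITION 1 ENDING AT ('2023-12-31'),
   PARTITION 2 ENDING AT ('2024-12-31'));

-- Explicit clustering index: cluster rows by sale date rather than
-- by the primary key, so date-range queries read fewer pages.
CREATE INDEX SALESIX1
  ON SALES (SALE_DATE)
  CLUSTER;

-- Candidate for FTB processing (DB2 V12): unique, with a key
-- length of 64 bytes or less.
CREATE UNIQUE INDEX SALESIX2
  ON SALES (SALE_ID);
```

The point is not these particular choices but that each clause (PARTITION BY, CLUSTER, UNIQUE) is a deliberate performance decision, not a default.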
First steps: Along with establishing clear standards for object creation and consistently applying them, the most important steps are to research and generate a cross-reference of objects accessed by plan and package, including type of access. This is commonly called a CRUD (Create, Read, Update, and Delete) matrix or, more properly, a data-to-process CRUD matrix. The CRUD matrix allows a multi-dimensional analysis of object access. One can analyze the set of processes that access one or more objects, or the objects accessed by one or more processes. This allows the analyst to understand possible points of contention and pinpoint objects whose allocation and configuration should be considered in more detail.
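A starting point for such a cross-reference can be pulled from the DB2 catalog. The sketch below lists the tables each package depends on; it shows dependency only, not the specific create/read/update/delete operation, which must come from statement-level analysis or vendor tooling:

```sql
-- Seed for a data-to-process CRUD matrix: tables referenced by
-- each package. BTYPE = 'T' restricts the rows to tables.
SELECT DCOLLID,
       DNAME AS PACKAGE,
       BQUALIFIER CONCAT '.' CONCAT BNAME AS TABLE_NAME
  FROM SYSIBM.SYSPACKDEP
 WHERE BTYPE = 'T'
 ORDER BY DCOLLID, DNAME, TABLE_NAME;
```

Sorting the same result by TABLE_NAME first inverts the view: all packages touching a given table, which is the axis you need when analyzing contention on one object.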
System Tuning
System tuning is more enterprise-based than application-based. It involves coordinating the installation, configuration, and tuning of DB2 subsystems, hardware and devices, and the interface with z/OS. It also requires knowledge of some relatively complex reports, including RMF reports. A new DBA is unlikely to be productive in this area because the work requires considerable training and experience.
What you analyze: TCP/IP, virtual pools, logging, recovery, DB2 maintenance, DB2 configuration parameters (ZParms), the Internal Resource Lock Manager (IRLM), and the z/OS interface. Additional information is available from traces that can be turned on in the DB2 environment.
Using what tools: System-related performance reports from SMF records, RMF reports, and System Modification Program/Extended (SMP/E) reports.
First steps: Review disaster recovery planning, recovery standards and jobs, and your software maintenance strategy.
Where to Begin
In general, a memory upgrade is the cheapest and easiest performance boost. Adding memory will not affect software licensing charges, nor will it affect the use of any third-party software tools. However, you should first analyze current memory usage and resolve any current real memory shortages before allocating any of the additional memory to your DB2 subsystem.
Next, design and configure your DB2 subsystem to ensure that no paging occurs. Paging happens when DB2 virtual storage (usually the virtual pools used for reading application data) is written out to auxiliary storage because of a shortage of real memory. To better analyze your pool storage, take advantage of Display commands such as the “DISPLAY BUFFERPOOL SERVICE=n” command, which shows how many real memory page frames are in use. Also research the MVS command “DISPLAY VIRTSTOR,LFAREA”.
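For example, the two commands might be issued as follows. The buffer pool name and service level are illustrative; check the command reference for your DB2 release before relying on specific output:

```
-DISPLAY BUFFERPOOL(BP1) SERVICE=4     DB2 command: page-frame usage for BP1
DISPLAY VIRTSTOR,LFAREA                MVS command: large frame area defined and in use
```

Comparing the frames a pool actually occupies with its defined VPSIZE is a quick check for real-storage pressure before you grow any pool.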
Additional memory also serves a purpose by allowing you to define some application tables as in-memory objects. This reduces excess CPU related to I/O operations. This works for static, small-sized objects such as code tables that are usually read-only.
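One way to implement this (assuming DB2 10 or later; the pool and tablespace names here are hypothetical) is to dedicate a buffer pool with PGSTEAL(NONE) and assign the small, read-mostly objects to it:

```sql
-- Dedicated pool whose pages are not stolen: objects assigned to it
-- stay resident after first reference. Issued as a DB2 command:
--   -ALTER BUFFERPOOL(BP8) VPSIZE(20000) PGSTEAL(NONE)

-- Assign a small, read-mostly code tablespace to that pool:
ALTER TABLESPACE CODEDB.CODETS
  BUFFERPOOL BP8;
```

Size the pool to hold the assigned objects entirely; an undersized PGSTEAL(NONE) pool defeats the purpose.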
DB2 system tuning is one aspect of overall application performance tuning. A poorly performing DB2 subsystem will negatively affect all applications, while potentially wasting valuable hardware resources. Some IT shops use a one- to three-month planning and performance tuning cycle in which they regularly report DB2 system performance metrics and plan for required future upgrades.
This process must be carefully coordinated with hardware and operating system tuning and maintenance because DB2 is typically a major user of system memory and CPU. Wasted CPU can lead to an increase in software license charges, and other wasted or over-allocated resources increase expenses needlessly. For the database administrator and systems programmer, this can be a good thing, as DB2 system performance tuning becomes easier to cost-justify.