Database Journal
Posted October 17, 2016

Basic DB2 Application Tuning Strategies

By Lockwood Lyon

Assume you’ve been given a directive by IT management to "tune" things. What strategies are available and how do you judge which ones are best? Considering the current economic climate, can any be implemented on a shoestring budget? Can any of these tasks be assigned to resources with basic DBA skills and still be productively accomplished?

A few of the many strategies for approaching and overseeing application tuning efforts include:

  • Tune the sacred cows. Direct all tuning efforts at those applications and systems most critical to the business.
  • Tune the squeaky wheels. Spend time reacting to those customers that complain the loudest.

There are other strategies; perhaps you’re already using one or more of them. Most of these tuning strategies require higher-level DBA skills, teamwork, and coordination with other areas. However, in the current economic environment, IT management may not have the luxury of allocating sufficient DBA resources to tuning efforts that don’t produce immediate, quantifiable results. With limited time and resources you will need to focus on strategies that provide tangible benefits at a minimal cost. Goals to keep in mind are:

  • Ease of cost justification. Task time and effort (and therefore cost) should be easy to estimate; results should be quantifiable.
  • Little conceptual training required. No in-depth DB2 knowledge should be involved in implementing the tuning tactics.
  • Short learning curve. DBAs shouldn’t have to spend a lot of time learning new functions and features or attending training.
  • The presence of heuristics or best practices. There should be simple, straightforward methods of implementing tasks, including standards and best practices.

With these objectives in mind, we look at the following two application tuning strategies:

  • SQL tuning involves SQL review and knowledge of potential access paths, table and index data distribution, and statistics.
  • Application tuning focuses attention on suites of programs that access particular tables, batch windows, and online service levels.

How do you tell what kinds of tuning are most needed in your environment? First, let's discuss typical symptoms.

SQL issues always exist, although specific symptoms may be difficult to detect. Typical causes include poor coding techniques, unfamiliarity with SQL coding standards, a weak understanding of the data model, and so forth.

Addressing application issues requires broader knowledge of application behaviors, access paths, batch windows, DB2 configuration parameters, and how all of those tie together. There's a learning curve, but the necessary information should be documented somewhere. Typical symptoms include complaints about high-usage resources, application delays due to deadlocks and timeouts, and missed service levels.

SQL Tuning

SQL tuning is probably the easiest tuning approach. It doesn't require lots of DB2 systems expertise and uses minimal tools. You can get by with simple SQL statements executed from Query Management Facility (QMF) or SQL Processor Using File Input (SPUFI). In DB2 you can use the Explain facility on either a single SQL statement in the dynamic SQL cache or on the entire cache, with access path information going into a DB2 table for later analysis. See the Application Programming Guide and the SQL Reference for more information.
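A minimal sketch of basic Explain usage from SPUFI or QMF follows. The PLAN_TABLE column names are the standard DB2 for z/OS ones; the table and query being explained (MYSCHEMA.ORDERS, QUERYNO 101) are hypothetical stand-ins for your own statements.

```sql
-- Capture the access path for one statement into PLAN_TABLE.
-- QUERYNO tags the rows so they can be found afterward.
EXPLAIN PLAN SET QUERYNO = 101 FOR
  SELECT ORDER_ID, ORDER_DATE
  FROM   MYSCHEMA.ORDERS           -- hypothetical table
  WHERE  CUST_ID = ?;

-- Read back the access path just captured.
SELECT QUERYNO, QBLOCKNO, PLANNO, METHOD, TNAME,
       ACCESSTYPE,                 -- 'R' = table-space scan, 'I' = index access
       ACCESSNAME, MATCHCOLS, INDEXONLY
FROM   PLAN_TABLE
WHERE  QUERYNO = 101
ORDER  BY QBLOCKNO, PLANNO;
```

An ACCESSTYPE of 'R' (scan) on a large table, or index access with MATCHCOLS = 0, is the kind of result that warrants a closer look.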

What you analyze: access paths, object statistics.

Using what tools: Basic Explain, or perhaps a software tool that provides a user-friendly front-end to explain information. You also may run DB2 catalog management reports or queries, and even have access to a DB2 object statistics display or monitoring tool. One common example is the IBM Optimization Service Center.

Where to look for the largest productivity gains: For SQL coding, user training provides the best return on investment. Provide developers with clear, consistent SQL coding standards, coding examples, typical ways of accessing common tables, and descriptions of indexes and their uses for query performance. If possible, give them access to the Explain facility or the Optimization Service Center so they can analyze access path information themselves.

First steps: To get started, you’ll need several things. SQL coding standards are a good start. Also useful is a "Top-50 SQL" report, which lists the 50 (or whatever number you deem best for your environment) longest-running SQL statements. This data is available in System Management Facilities (SMF) records, and can be limited and filtered by DB2 environment (i.e., development, test, stage, production), by application, or by other criteria. SMF reports are described in the IBM publication, IBM DB2 Performance Expert for z/OS Report Reference (SC18-7978). Regularly run these reports and review the SQL statements. Poorly performing statements are candidates for tuning, while those already addressed can be optionally excluded from later reports.
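For dynamic SQL, a complementary source for a Top-n report is the dynamic statement cache itself. The sketch below assumes the DSN_STATEMENT_CACHE_TABLE exists under your schema and uses its standard statistics columns; verify the column names against your DB2 version before relying on them.

```sql
-- Externalize the dynamic statement cache into
-- DSN_STATEMENT_CACHE_TABLE (the table must already exist).
EXPLAIN STMTCACHE ALL;

-- Top-50 by accumulated elapsed time since the statement entered the cache.
SELECT STMT_ID,
       STAT_EXEC  AS EXECUTIONS,
       STAT_ELAP  AS TOTAL_ELAPSED,
       STAT_CPU   AS TOTAL_CPU,
       SUBSTR(STMT_TEXT, 1, 80) AS STMT_TEXT
FROM   DSN_STATEMENT_CACHE_TABLE
ORDER  BY STAT_ELAP DESC
FETCH  FIRST 50 ROWS ONLY;
```

Dividing the accumulated figures by STAT_EXEC distinguishes a statement that is slow per execution from one that is merely executed very often.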

Finally, you’ll need a process for regular Explains and reviews of production SQL statements. One common tactic for static SQL statements is to Bind all plans and packages with EXPLAIN(YES), ensuring you have current access path information available. After that, develop processes for detecting production access path changes and reviewing potential performance problems.
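One hedged sketch of that tactic: rebind with EXPLAIN(YES), then compare PLAN_TABLE rows across binds. The collection and package names below are hypothetical, and the REBIND is a DSN subcommand (run in a TSO/DSN batch step), not SQL.

```sql
-- Step 1 (DSN subcommand, not SPUFI): refresh the access path rows.
--   REBIND PACKAGE(MYCOLL.MYPKG) EXPLAIN(YES)

-- Step 2: compare the most recent binds of the package. A change in
-- ACCESSTYPE or ACCESSNAME between BIND_TIME values flags a possible
-- access path regression worth reviewing.
SELECT PROGNAME, QUERYNO, PLANNO, BIND_TIME,
       ACCESSTYPE, ACCESSNAME, MATCHCOLS
FROM   PLAN_TABLE
WHERE  PROGNAME = 'MYPKG'
ORDER  BY QUERYNO, PLANNO, BIND_TIME DESC;
```

Automating this comparison after each production bind is one straightforward way to implement the "detect access path changes" process described above.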

Application Tuning

Application tuning isn’t complex, though it encompasses many things. This type of tuning requires knowledge of multiple user applications and their characteristics, such as table access, batch windows, restart requirements, SLAs, and relevant DB2 configuration parameters.

What you analyze: The short list includes online transaction processing, data warehouse activity, vendor software package performance, deadlocks and timeouts, Bind parameters, information in the DB2 subsystem Job Entry Subsystem (JES) log, units of work, commit frequency, referential integrity, and batch cycles. Each category provides a window into certain performance characteristics of an application or suite of applications.

Using what tools: Your most important tool will be the performance reports generated from SMF records. The Accounting reports are useful for analyzing specific plans and packages while the Statistics reports show system-level information. In addition, you should have available documentation related to standards for application unit-of-work and commit frequency and an enterprise data model (or, at a minimum, data models for the objects accessed by the applications you’re reviewing). You should create these if you lack them.

Where to look for the largest productivity gains: The best places for productivity gains are in addressing issues with SLAs. Often, this is related to either elongated elapsed times (especially for batch processes) or resource-intensive queries in an online environment. For long elapsed times use the performance reports to determine where the time is being spent. If the time is spent waiting for resources, consider ways to reduce the wait time. For resource-intensive queries consider ways to "trade" other available resources for the constrained resource, as previously discussed.

First steps: Develop Top-n reports for high-resource usage among your batch jobs, online transactions, and stored procedures. These can then be dealt with individually or in groups. You may discover that several of your top long-running processes execute similar SQL queries against the same tables. You must then investigate more deeply to determine if this is due to poorly written SQL, highly volatile resources (where deadlocks and timeouts elongate elapsed times), unavailable resources, or other causes.
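If plans and packages are bound with EXPLAIN(YES), PLAN_TABLE can also hint at which tables many processes have in common. This is a rough heuristic sketch, not a definitive report; it assumes a populated PLAN_TABLE under your schema.

```sql
-- Which tables are referenced by the most distinct packages?
-- Tables shared by many long-running programs are natural starting
-- points when top processes seem to contend for the same data.
SELECT CREATOR, TNAME,
       COUNT(DISTINCT PROGNAME) AS PACKAGES,
       COUNT(*)                 AS PLAN_TABLE_ROWS
FROM   PLAN_TABLE
WHERE  TNAME <> ''
GROUP  BY CREATOR, TNAME
ORDER  BY PACKAGES DESC
FETCH  FIRST 20 ROWS ONLY;
```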

For example, consider bulk load processing. Mass insert of data can be a critical process, as it can lock data during loading. Review the requirements for the DRDA fast load option in DB2 12 for z/OS to give a performance boost to data loads from client processes.
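For loads run on the host, one option that reduces locking impact is the LOAD utility with SHRLEVEL CHANGE, which behaves like a mass INSERT and lets concurrent work continue. The control statement below is a hedged sketch: the dataset name, table, and field layout are hypothetical, and the exact options available depend on your DB2 version.

```sql
-- DB2 LOAD utility control statement (runs under the utility batch
-- program, not SPUFI); names and positions are illustrative only.
LOAD DATA INDDN(SYSREC)
     RESUME YES SHRLEVEL CHANGE    -- acts like mass INSERT; readers keep access
     INTO TABLE MYSCHEMA.ORDERS    -- hypothetical table
     (ORDER_ID   POSITION(1:10)  CHAR(10),
      ORDER_DATE POSITION(11:20) DATE EXTERNAL)
```

By contrast, SHRLEVEL NONE drains the table space for the duration of the load, which is the locking behavior the paragraph above warns about.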

Where to Begin

No matter what areas of tuning you decide to begin with, one thing is certain: you’ll have a lot of work to do! To reduce your workload, concentrate on automating any reporting or analysis processes. A good place to begin is to set up, review, and automate your "Top-n" reports.

Choose the report or reports relevant to your tuning area and implement a regularly scheduled process to produce them. Schedule regular meetings with the appropriate people to review the results.

For example, if you’re concentrating on SQL tuning, set up regular reports of your Top-n SQL statements. Choose your criteria for reporting: Total elapsed time, total CPU time, and total synchronous I/O wait time are common items. Run these reports daily or weekly and review them with your peers or with leaders in application areas.

Regularly run and review available health checks or develop your own. Develop plans to address issues when they arise.

Even the most frugal shops have helpful tools. Discover what IBM and third-party tools exist and determine their utility for your needs. Pick what looks like the most useful tool and learn it.
