Strategies for Improved Data Availability in DB2

Database Design for Availability

Eliminate Hot Spots

A hot spot is a physical place in the database where a great deal of data access activity is concentrated. The worst situation is when many inserts, updates, deletes, and selects all hit a set of rows in one location. A flood of inserted rows requires DB2 to find space to store the data, eating up embedded free space. Simultaneous row updates and selects can lead to deadlocks and timeouts. Hot spots are common in databases accessed by distributed applications.

Usually, hot spots can be predicted (and perhaps avoided) during database design. Take the case of an account table where new rows are assigned new, ascending account numbers. If the table is clustered by account number, the physical end of the table now becomes a hot spot where all new rows will be inserted.

Another example might be a table keyed by date, where all transactions for a particular date range are stored (i.e., clustered) close together.

Some hot spots can be avoided, or at least mitigated, during database design. The DBA can embed free space for rows to be inserted and can schedule frequent reorgs to redistribute rows. It may be possible to physically cluster data in ways that spread new row inserts evenly across the table. The DBA should also coordinate with the data owners to ensure that old or unused data is regularly purged or archived.
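
For example, in DB2 for z/OS, free space can be embedded at the table space level with the PCTFREE and FREEPAGE parameters and then rebuilt with the REORG utility. A minimal sketch follows; the object names and values are hypothetical, not recommendations:

   -- Reserve 15% of each page, plus one empty page for every
   -- 20 pages, to absorb future inserts (illustrative values).
   ALTER TABLESPACE ORDERDB.ORDERTS
     PCTFREE 15
     FREEPAGE 20;

   -- A subsequent REORG re-establishes the embedded free space
   -- and redistributes the rows (DB2 utility control statement).
   REORG TABLESPACE ORDERDB.ORDERTS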

Implement Indexes

Classically, there are only four reasons to create an index:

1. To support a primary key definition;
2. To enforce uniqueness;
3. To support horizontal partitioning, or partitioning by key range;
4. To provide high-speed access to data.

To maximize data availability, more indexes are generally better, since each index provides an additional access path to the data. One common tendency in DB2 database designs is to create an index for every column, or set of columns, likely to be searched.

This must be balanced against the needs of data insert and update, and of database recovery. As indexes are added to a table, the time it takes to update those indexes when a row is inserted increases. Similarly, should the table need to be recovered from an image copy, each index must be rebuilt, adding to the total recovery time.
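
As a sketch, the first index below enforces uniqueness and supports the primary key, while the second exists purely to speed searches; every additional index like it lengthens inserts and recovery. All names are hypothetical:

   -- Unique index supporting the ACCOUNT table's primary key.
   CREATE UNIQUE INDEX ACCTDB.XACCT01
     ON ACCOUNT (ACCT_NO);

   -- Extra access path for searches by customer name; it must be
   -- updated on every insert and rebuilt after a recovery.
   CREATE INDEX ACCTDB.XACCT02
     ON ACCOUNT (CUST_NAME);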

Enforce Referential Integrity (RI)

Many logical database designs (especially those that are properly normalized) contain referential integrity rules. These rules support business data relationships, such as “Customers may have zero to many Orders”. When implementing the physical database design to support these rules, the database designer and application designer must decide how these rules will be enforced. Typically the choice is whether the database management system or the application code should contain that logic.

Some database designers like to store information about business rules in the database design, including uniqueness constraints and column constraints such as valid values. Thus, one advantage of DBMS-enforced referential integrity is that the RI rules are stored in the database itself. The database designer can then query the system catalog to determine these rules.
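
In DB2 for z/OS, for instance, relationships and their delete rules can be read from the SYSIBM.SYSRELS catalog table. A sketch, with a hypothetical parent table name (verify the column names against your DB2 version):

   -- List the relationships in which CUSTOMER is the parent table,
   -- with each relationship's delete rule
   -- (C = CASCADE, R = RESTRICT, N = SET NULL, A = NO ACTION).
   SELECT RELNAME, CREATOR, TBNAME, DELETERULE
     FROM SYSIBM.SYSRELS
    WHERE REFTBNAME = 'CUSTOMER';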

However, one downside to this method is that RI includes so-called delete rules. For example, if a Customer row is deleted, what should be done with its matching Order rows? Delete rules determine the action taken in this case, either by automatically deleting the customer’s orders or by preventing the deletion of the customer row.

Why is this a disadvantage? Applications must be coded to take into account the actions the DBMS will take on their behalf!

For example, suppose that DB2 is enforcing referential integrity and the RI definition of the Customer table prevents deletion of a Customer who has matching Orders. The application must be prepared to handle the SQL error code returned when it attempts the delete. Alternatively, consider an RI definition that automatically deletes matching Orders for a Customer. When the application deletes the Customer, the DBMS must also delete the customer’s orders, lengthening the unit of work.
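
As a sketch, the two delete-rule choices just described would be declared as follows; the table, column, and constraint names are hypothetical:

   -- Option 1: prevent deletion of a Customer that has Orders.
   -- An attempted delete fails with an SQL error code that the
   -- application must be prepared to handle.
   ALTER TABLE ORDERS
     ADD CONSTRAINT CUSTORD
     FOREIGN KEY (CUST_NO) REFERENCES CUSTOMER
     ON DELETE RESTRICT;

   -- Option 2: automatically delete a Customer's Orders; the
   -- customer delete then runs longer, as described above.
   -- ...same FOREIGN KEY clause, ending instead with:
   --   ON DELETE CASCADE;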

In cases where data availability and performance are paramount, implementing RI in the application code may be best. The application designer can develop the application code to take into account the ramifications of enforcing referential integrity, and make appropriate decisions. In the case of a customer deletion, one possibility would be to warn the user that deletes of Orders may take additional time.

Implement Partitioning

Horizontal partitioning (or partitioning by key range) is a common method of assigning data rows to separate physical media. Such separation can have many advantages.

Partitioning by date range is common. It allows application processes to access partitions independently of one another. A good example is an archive or purge process that acts on the data in the oldest partition while new rows are inserted into another. Another example is creating one partition per day, which allows activity on the current day’s data to be physically separated from other processing.
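
A table-controlled range-partitioning sketch, assuming DB2 V9 or later; the table, columns, and boundary dates are hypothetical:

   -- One partition per month; older partitions can be purged,
   -- backed up, or reorganized independently of current activity.
   CREATE TABLE TRAN_HIST
     (TRAN_DATE  DATE          NOT NULL,
      TRAN_ID    INTEGER       NOT NULL,
      TRAN_DATA  VARCHAR(100))
     PARTITION BY RANGE (TRAN_DATE)
       (PARTITION 1 ENDING AT ('2012-01-31'),
        PARTITION 2 ENDING AT ('2012-02-29'),
        PARTITION 3 ENDING AT ('2012-03-31'));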

Another byproduct of date range partitioning is that backup and reorganize processes can be run on separate partitions. Consider banking and order entry applications. New account transactions and new orders result in inserts and updates to current data. At the end of the day, the DBA team can execute backups and reorgs of today’s data, thus providing good recoverability and good performance. Backups of partitions containing old, unchanged data are needed far less often and can be deferred.

Manage Data Volume

Tables of ever-increasing size usually result in longer and longer query execution times, hence the need for a purge or data archive process. There are several database design alternatives that may make purging easier. One common possibility is to partition data such that you can purge by partition. For example, a table keyed by date where the chronologically oldest data is to be purged can be partitioned by date range. The purge process might then be designed to simply empty the oldest partition, rather than execute costly SQL delete statements to remove the rows. Such designs also lend themselves to data archival processes.
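
One way to empty a partition in DB2 for z/OS is a partition-level LOAD REPLACE with an empty input data set. In this sketch the table name and partition number are hypothetical, and the SYSREC DD is assumed to point to an empty data set:

   -- Empty partition 1 (the oldest data) without executing
   -- row-by-row deletes. LOG NO leaves the table space in
   -- COPY-pending status, so take an image copy afterward.
   LOAD DATA INDDN SYSREC LOG NO
     INTO TABLE TRAN_HIST PART 1 REPLACE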

Application Design for Availability

Reduce Contention

Deadlock occurs when two or more applications contend for resources in such a way that none of them can proceed. The simplest example involves two applications: Application X has updated resource A and is waiting to access resource B, while Application Y has already updated resource B and is waiting to access resource A.

To reduce deadlocks, consider the following application design ideas.

If possible, consolidate all SQL access into a single span of the application’s execution. Avoid executing SQL, then doing additional in-program work, then more SQL. The longer a transaction or unit of work takes, the greater the chance of locking out another application.

Review query-only applications and consider using uncommitted read. This isolation level allows the application to read while acquiring few locks, but at the risk of reading uncommitted data.
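
In DB2, uncommitted read can be requested for a single statement with the WITH UR clause (or for a whole plan or package via ISOLATION(UR)). A sketch with hypothetical names:

   -- Read-only report query; acquires few locks, but may return
   -- uncommitted data that is later rolled back.
   SELECT ACCT_NO, BALANCE
     FROM ACCOUNT
    WHERE BRANCH_ID = ?
     WITH UR;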

Access Image Copies

Some application processes, such as reporting, do not always require fresh data. Rather than execute such processes concurrently with others, it is sometimes possible, with the help of utilities or third-party tools, to read data from backup files instead. For example, the IBM DB2 UNLOAD utility can unload table data from a recent image copy. Other tools allow the execution of SQL statements against such copies.
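
For instance, the UNLOAD utility can read rows from an image copy data set via its FROMCOPY option. A sketch; the table space and data set names are hypothetical:

   -- Unload from a recent image copy instead of the live table
   -- space, avoiding contention with online work.
   UNLOAD TABLESPACE ORDERDB.ORDERTS
     FROMCOPY HLQ.ORDERTS.ICOPY1
     PUNCHDDN SYSPUNCH UNLDDN SYSREC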

Data Warehouse and Publish/Subscribe

One issue that hampers data availability arises when data is accessed by a large number of users from a large number of applications. If these applications’ needs for data are diverse in volume and priority, contention may result. To reduce contention, the application designer can implement a data store separate from the operational data and have the required data copied at regular intervals. Many data warehouses are implemented in this fashion with nightly batch loads of the previous day’s data. Applications may then query the data warehouse without contending with critical transactions.
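
A minimal sketch of such a nightly refresh, assuming the warehouse table mirrors the operational one (all table and column names are hypothetical):

   -- Nightly batch: copy yesterday's completed transactions
   -- into the warehouse, off the critical transaction path.
   INSERT INTO DWH.TRAN_HIST
     SELECT TRAN_DATE, TRAN_ID, TRAN_DATA
       FROM OPER.TRAN_CURRENT
      WHERE TRAN_DATE = CURRENT DATE - 1 DAY;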

In addition, since data warehouse data is static, the database designer can create additional indexes to support data availability and performance. Considerations of data recoverability and update performance mentioned earlier are not as important, since the data warehouse is not a mission-critical function.

A variation of the data warehouse is the datamart: a data store separated from the operational data, similar to a data warehouse but usually smaller, containing data for only a single business subject area or a small number of applications. Many enterprises have a number of datamarts spread across several lines of business. Data is copied from the operational system at specific times determined by a “publish/subscribe” agreement: datamarts subscribe to pre-determined subsets of data that the main data store publishes on a regular basis.

Summary

Database designers and application designers can use several techniques to increase data availability. Of course, these needs must be balanced against data recoverability and security concerns. Best practices tend to include data partitioning as the primary method for separating data having different access path characteristics, as in our example of date range partitioning. In addition, designers should strongly consider publishing or copying appropriate subsets of operational data to a data warehouse or datamarts in order to reduce contention.


Lockwood Lyon
Lockwood Lyon is a systems and database performance specialist. He has more than 20 years of experience in IT as a database administrator, systems analyst, manager, and consultant. Most recently, he has spent time on DB2 subsystem installation and performance tuning. He is also the author of The MIS Manager's Guide to Performance Appraisal (McGraw-Hill, 1993).
