Database as a Service: Performance Tuning

Many businesses want to get their latest applications in front of customers as quickly as possible. To shorten total development time, they have turned to providers of database as a service (DBaaS) to manage database creation and maintenance. Delegating these resource-intensive tasks to outside experts relieves the business of hardware acquisition and installation, software selection and licensing costs, and staffing for all these functions. However, after the application is up and running, what happens as the database gets larger, the number of customers grows, and the transaction load begins to strain resources? Who is responsible for performance tuning, and which pieces should be managed first?


When rolling out a new application, a business doesn’t expect it to be used immediately by hundreds of thousands of users executing millions of transactions per hour. (At least not at the beginning!) Consequently, application performance tuning is not initially a high priority. Developers focus primarily on a fast rollout of the application, assisted by delegating database development and management to an outside firm that provides database as a service (DBaaS).

As the application grows and the customer base matures, performance issues may appear. These fall into several categories, and in the DBaaS environment they must be handled differently than in standard, locally hosted database environments. Two major issues stem from the increasing volumes of data and transactions; a third comes from the growth of the application itself.

Scaling Up — the Expanding Database

For some applications, an expanding database may not be an issue. Typical transactional systems, such as order entry, access tables based upon unique keys such as account number and customer number. Since uniqueness is commonly enforced by the database management system using indexes, these indexes provide fast access to specific accounts and customers.

There are, however, many applications that require table scans or joins of multiple tables. These applications execute searches for items with specified characteristics, and the searches may involve scanning the entirety of several tables. Hence, as the tables grow in size the scans take longer to execute and return more data to display. Table joins may take longer as well, especially if the joins are on non-unique columns. The situation gets more complex if the DBaaS provider uses a cloud platform for data storage, because the provider may have no control over the physical implementation.

To alleviate the problem of the expanding database, the service provider has several options.

Purge old/stale data more often. As an application matures some of the data becomes old or infrequently used. This might include customers that have not ordered anything within a certain period (such as one year), products that have never been ordered, or orders that have been shipped and delivered. Such data is usually kept for a time due to regulatory reasons or in case there are questions or problems. One common method is to transfer the old data to a different database (the archive). The current database then remains at a reasonable size, while the archive database is available for access if needed. This method requires an application upgrade in order to implement features to access the archive.
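The archiving idea above can be sketched in a few lines. The following uses Python's built-in `sqlite3` module; the `orders` table, its columns, and the cutoff date are illustrative assumptions, not a prescription for any particular DBaaS platform.

```python
import sqlite3

# Hypothetical schema: an "orders" table with a ship_date column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, ship_date TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "2019-03-01"), (2, "2023-06-15"), (3, "2018-11-20")])

# Attach a separate archive database and create an empty copy of the table.
conn.execute("ATTACH DATABASE ':memory:' AS archive")
conn.execute("CREATE TABLE archive.orders AS SELECT * FROM orders WHERE 0")

# Move stale rows (shipped before the cutoff) into the archive.
cutoff = "2020-01-01"
conn.execute("INSERT INTO archive.orders SELECT * FROM orders WHERE ship_date < ?",
             (cutoff,))
conn.execute("DELETE FROM orders WHERE ship_date < ?", (cutoff,))
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])          # 1
print(conn.execute("SELECT COUNT(*) FROM archive.orders").fetchone()[0])  # 2
```

In production the archive would live in separate, cheaper storage, and the application upgrade mentioned above would query `archive.orders` only when a user explicitly asks for historical data.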

Re-design the database for better performance. There are several options that the database administrator (DBA) has for logical and physical design that can affect performance. These include various data partitioning schemes, management of multiple table indexes, in-memory databases, and others. Unfortunately, most of these options require a lot of technical work on the part of the DBA, and this time and effort will be charged to the application owner. One way to avoid this situation is for the application owner and the DBaaS provider to participate in a database modeling session prior to implementation. They will review questions such as expected growth in the customer base, possible transaction volume increases, and the projected size of the database. A well-performing design can then be implemented at the start.
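Index management, one of the redesign options above, is easy to see in miniature. This sketch (again using `sqlite3`; the table and index names are made up) shows how adding an index changes the optimizer's access path from a full table scan to an index search:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, region TEXT)")

# Without an index, filtering on region forces a full table scan.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM customers WHERE region = ?", ("west",)
).fetchone()
# Depending on SQLite version, this prints "SCAN customers" or
# "SCAN TABLE customers".
print(plan_before[-1])

# After adding an index, the optimizer switches to an index search.
conn.execute("CREATE INDEX idx_region ON customers (region)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM customers WHERE region = ?", ("west",)
).fetchone()
print(plan_after[-1])  # mentions idx_region
```

A DBA performing this kind of tuning would run the equivalent explain facility of the production DBMS against the application's slowest queries, which is exactly the technical effort that gets billed back to the application owner.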

Upgrade hardware and software. The easiest but probably most expensive option involves hardware and software changes, including faster CPUs, more and faster memory, larger and faster disk drives, and even a new version of the operating system or the database management system (or both). While throwing resources at the problem will certainly provide a performance boost, this type of solution ignores the potential underlying causes of performance issues due to database growth. A better approach is to use monitoring and diagnostic software to determine which resources are constraining application performance. The service provider can then use this information to determine the best places to provide more or faster resources.

Scaling Out — More Customers and More Transactions

Another growth area is the increase in application use over time. As application use increases the amount of data transferred between the customer presentation logic and the database manager also increases. Returning customers also tend to execute larger and longer transactions as they search for new options, try different application features and order more products and services. As a result, the transactions themselves get larger and the corresponding database access requests become more complex and more frequent.

Transaction growth tends over time to increase the user's perceived interaction time. Even though the underlying database access may be happening in milliseconds, the growth in complexity translates to more complex queries and more tables accessed, resulting in longer elapsed times. In addition, unless the application owner and DBaaS provider have configured their network to handle the increased traffic, the network itself may become part of the bottleneck.

From the DBaaS provider’s point of view, the standard answer to growth in transaction volume is to increase or upgrade CPU speeds and memory sizes. This allows for better buffering of data, implementation of in-memory tables or indexes, and faster processing. Faster network hardware is generally not a good solution, since one must upgrade the hardware to accommodate the worst possible transaction load, which usually occurs for only a few hours a day. At other times, there may be almost no network activity, and the new and expensive hardware sits idle.

A better solution to transaction growth is physically distributing the database and its processing. One way would be to make multiple copies of the static portion of the database (say, the list of available products and services with their price data) and store these copies in different physical locations across the geography of the customer base. Customers using the application in one location would access the local copy of this data, and such access would be expected to be quick due to its proximity. Similarly, other customers in other locations would also access local copies.
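The routing logic behind this geographic distribution can be sketched simply. Everything here is a toy assumption: the replica host names, the region map, and the idea that a query is "read-only" if it starts with SELECT (a real router would inspect the statement properly and handle replication lag):

```python
# Hypothetical read-only replicas of the static product/price data,
# one per region of the customer base.
REPLICAS = {
    "us-east": "db-replica-us-east.example.com",
    "eu-west": "db-replica-eu-west.example.com",
    "ap-south": "db-replica-ap-south.example.com",
}
PRIMARY = "db-primary.example.com"  # all writes still go to the primary

def route(query: str, client_region: str) -> str:
    """Send reads to the customer's local replica; writes go to the primary."""
    is_read = query.lstrip().upper().startswith("SELECT")
    if is_read and client_region in REPLICAS:
        return REPLICAS[client_region]
    return PRIMARY

print(route("SELECT price FROM products", "eu-west"))   # local replica
print(route("INSERT INTO orders VALUES (1)", "eu-west"))  # primary
```

The design choice to replicate only the static portion of the database matters: product and price data changes rarely, so the replicas stay consistent with little synchronization traffic, while volatile data such as orders remains in one place.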

Application Growth — More Tables, More Features

This kind of growth happens as the application matures. A new application starts with a small user or customer base and expands with time. As new customers arrive the apparent success of the application inspires the owner to add new features. An order entry system may expand from simply offering a small set of products to include services as well. Customer complaints may lead to implementing a product return and replace system. As the database grows the application owner may wish to analyze their accumulated sales data to predict what types of customers will order what categories of products in the future.

Classic application design includes such tasks as enterprise data modelling and logical and physical database design. The staff of the enterprise executes these tasks, which require specific experience and skills. With the advent of DBaaS, these tasks may not be done at all.

In order to direct application growth, a data model is a necessity. Without such a model, the database designers run the risk of creating tables with duplicate columns, poor domains, or missing referential integrity. Consider an application dealing with customers. Typically, the database contains a customer table that identifies these customers. If the application then expands to include a second ‘category’ of customer, it is possible that the DBaaS provider would create a second customer table, with a potentially different type of customer number. Imagine trying to report on customers when one set has a numeric identifier and another uses alphabetic characters, or the customer numbers have different lengths. A data model helps prevent such issues.
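The modeled alternative is a single customer table with a category attribute, so every customer shares one identifier domain. The sketch below (table and category names are illustrative) shows the shape of such a design in SQLite:

```python
import sqlite3

# One customer table with a category column, instead of a second table
# with its own incompatible identifier format.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customers (
        customer_id INTEGER PRIMARY KEY,        -- a single identifier domain
        category    TEXT NOT NULL
                    CHECK (category IN ('retail', 'wholesale')),
        name        TEXT NOT NULL
    )
""")
conn.executemany("INSERT INTO customers (category, name) VALUES (?, ?)",
                 [("retail", "Acme"), ("wholesale", "Globex")])

# Reporting across both categories needs no UNION of mismatched tables.
rows = conn.execute(
    "SELECT category, COUNT(*) FROM customers "
    "GROUP BY category ORDER BY category"
).fetchall()
print(rows)  # [('retail', 1), ('wholesale', 1)]
```

With this design, any table that references customers carries one foreign key of one type, which is precisely the referential-integrity benefit the data model is meant to guarantee.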


While today’s computer hardware and database management systems provide fast computing and data access, an application owner that delegates database storage and management to a service provider runs the risk of losing control of application performance. The various performance issues of database growth, transaction volume growth, and application feature growth require different solutions, and some of these solutions may not be available to the provider. Organizations considering database as a service should review these potential performance issues with their provider before signing any service contract. In this way, they can avoid the disappointment of developing and implementing a profitable application that performs worse and worse as time goes on.


Lockwood Lyon
Lockwood Lyon is a systems and database performance specialist. He has more than 20 years of experience in IT as a database administrator, systems analyst, manager, and consultant. Most recently, he has spent time on DB2 subsystem installation and performance tuning. He is also the author of The MIS Manager's Guide to Performance Appraisal (McGraw-Hill, 1993).