Stoptimizing Oracle

Jonathan Lewis has coined a term, and a definition for it, that I believe we’ll be using quite a bit:

Stoptimisation – the art of optimization by not doing the things you don’t need to do.

It’s a term we’ve needed for some time now, given the pendulum swing back to the days of yore (meaning pre-9i) with respect to database, application and query tuning. I’m seeing more and more posts in forums and newsgroups asking about tasks that I thought were long since relegated to the ‘gee, that’s really useless’ pile. Let’s look at a few of them and see why you probably shouldn’t be doing them.

Regular Index Rebuilds

Index rebuilds, for the most part, are busy work for a DBA. Yes, they can ‘clean house’ with respect to empty index blocks buried in the index tree, but since the index will simply grow back to its pre-rebuild size, with the empty index blocks once again scattered across the index tree, I see no point in performing regularly scheduled index rebuilds. In fact, for non-partitioned indexes, I see no real reason to rebuild at all since it:

  • Locks the table and index until the rebuild is finished
  • Invalidates cursors using the associated table
  • Serves no useful purpose

Yes, there are times when local indexes on a partitioned table will need to be rebuilt, but those times are usually due to DDL against the partitioned table (dropping partitions, adding/splitting partitions, swapping partitions). The listed DDL actions can make local index partitions unusable, requiring them to be rebuilt. It’s not a ‘size’ thing, and it’s not a ‘performance’ thing; it’s because the unusable index interferes with table access and thus with production processes. What is troubling is the number of DBAs asking about regularly scheduled index rebuilds based on antiquated ‘criteria’ such as B-tree depth or index size. Even MOS has taken down its original document on when to rebuild an index and replaced it with a more responsible version that better addresses the topic. Still, there are DBAs firmly convinced that scheduling index rebuilds once a week or once a month through cron or DBMS_SCHEDULER must certainly improve the performance of queries and production processes. It doesn’t, and it really just inconveniences the end users while the index rebuilds are in progress. Since most application tables have more than one index in place, rebuilding the indexes for a given table can take hours and lock the table for the entire time, essentially shutting down production for the duration of the rebuilds. I can’t see how that improves performance.
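When a partition operation does leave local index partitions unusable, the fix can be limited to exactly those partitions rather than a blanket rebuild of every index on the table. The statements below are a minimal sketch of that approach; the SALES_CUST_IX index and partition name are hypothetical, and the ONLINE clause (an Enterprise Edition feature) is shown only as an option for keeping DML available while the rebuild runs.

-- Find only the local index partitions that actually need attention
SELECT index_name, partition_name, status
  FROM user_ind_partitions
 WHERE status = 'UNUSABLE';

-- Generate rebuild statements for just those partitions
SELECT 'ALTER INDEX ' || index_name || ' REBUILD PARTITION ' || partition_name || ';'
  FROM user_ind_partitions
 WHERE status = 'UNUSABLE';

-- Rebuild a single unusable partition; ONLINE (Enterprise Edition only)
-- allows DML against the table to continue during the rebuild
ALTER INDEX sales_cust_ix REBUILD PARTITION p_2024_q1 ONLINE;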

Continually ‘Tweaking’ Database Parameters

There are DBAs who can’t seem to be satisfied with the performance of the database, and constantly hunt for ‘problem’ areas whether or not they actually affect performance. This is commonly referred to as Compulsive Tuning Disorder (henceforth referred to as CTD). CTD creates an endless cycle of ‘tune this, now tune that, now tune something else’, all based on the most minute deviations in wait statistics. CTD is one of the biggest time wasters for a DBA; it’s like Ponce de León’s search for the Fountain of Youth. Not every wait statistic needs to be perfectly adjusted, and it’s impossible to do so in the real world. It all boils down to what the DBA considers performance versus what the end-users consider performance. In the throes of CTD, EVERYTHING is a potential ‘problem’, even if it doesn’t affect the end-user experience. On the end-user side a performance problem exists when processes take longer than they expect. Once the end-user problem has been addressed, tuning should stop, since further adjustments won’t provide any additional benefit. Wait statistics will never be ideal in a production system; multiple users accessing and modifying data will cause concurrency waits, no matter how small, and it’s useless to chase such waits in pursuit of a ‘perfect’ response time. CTD turns the most reliable of DBAs into a micro-manager of epic proportions, and the effort expended to ‘tweak’ parameters that have no bearing on the actual system performance is, honestly, wasted.
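One way to break the CTD cycle is to measure how much database time a wait actually consumes before deciding it’s a problem. The query below is one possible sketch of such a check, assuming the Diagnostics Pack license that v$active_session_history requires; the one-hour window and ten-row cutoff are arbitrary choices.

-- ASH samples active sessions roughly once per second, so the counts
-- below approximate seconds of database time spent in each event
-- over the last hour
SELECT NVL(event, 'ON CPU')                              AS event,
       COUNT(*)                                          AS ash_samples,
       ROUND(100 * RATIO_TO_REPORT(COUNT(*)) OVER (), 1) AS pct_of_db_time
  FROM v$active_session_history
 WHERE sample_time > SYSDATE - 1/24
 GROUP BY NVL(event, 'ON CPU')
 ORDER BY ash_samples DESC
 FETCH FIRST 10 ROWS ONLY;   -- 12c and later syntax

If an event that looks alarming in isolation turns out to account for a fraction of a percent of database time, no amount of ‘tuning’ it will ever be noticed by the end-users.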

Table Reorganization

As tables get larger and larger, queries accessing that data can gradually take longer to complete, notably if the query relies on a full table scan to fetch the data. There are still some DBAs who are convinced that reorganizing the table data will go a long way toward improving performance. One of those reorganizations involves sorting the table data to improve the clustering factor. Yes, the clustering factor for an index is calculated based on the ‘location’ of the table data with respect to the sorted index keys, but ‘improving’ the clustering factor for one index usually makes the clustering factor worse for every other index against that table. Another point to make is that these are heap tables, with no real data order established; inserted data can go anywhere there is room in any data block associated with that table. The sorted order lasts only until the first insert afterwards, after which index keys again become scattered through the table data. Years ago vendors argued that sorting the data was the best way to improve performance, and ‘suggested’ that their product be used on a regular basis to maintain this wonderful sorted order. The problems with that idea are that the data can be sorted for only one of the indexes built against that table, not all of them, and that the data will gradually return to its unsorted order as the end-users process data. This becomes, like CTD, a never-ending cycle of ‘sort, sort, sort, sort, sort’, all under the misguided notion that it’s making things better. It can’t, since every time the data is sorted the production system is rendered unavailable. If the end-users can’t work, it doesn’t matter how fast the queries run AFTER the sort process has completed. Unfortunately, once the DBA has put on such blinders he or she is unaware of the inconvenience such processes create. The end-users need to work, not wait for some DBA to sort a pile of data only to have it return to being an unsorted pile a day or two down the road. The DBA should be working to create meaningful and lasting changes that improve performance; query tuning, plan stability and index analysis are worthwhile tasks that can produce tangible benefits lasting far longer than the effort it took to produce them.
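A quick way to see why sorting a heap table is a zero-sum exercise is to compare the clustering factor of every index on the table to the table’s block and row counts. The query below is a sketch; the ORDERS table name is hypothetical, and it assumes current optimizer statistics.

-- A clustering factor near BLOCKS means the table data is ordered much
-- like that index; a value near NUM_ROWS means the order is effectively
-- random for that index.  Sorting the table can move only ONE index
-- toward BLOCKS; the others drift toward NUM_ROWS.
SELECT i.index_name,
       i.clustering_factor,
       t.blocks,
       t.num_rows
  FROM user_indexes i
  JOIN user_tables  t ON t.table_name = i.table_name
 WHERE i.table_name = 'ORDERS'
 ORDER BY i.clustering_factor;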

There are probably other time-wasters for the DBA that aren’t listed here; knowing what SHOULD be done versus what CAN be done is the hallmark of a great DBA. Keeping the term introduced in this article in mind should direct you toward useful tuning tasks that provide long-term benefit and that don’t require constant attention or repeated execution. Nothing is perfect, especially your database, so don’t try to make it so, because you’ll only end up on The Endless Tuning Treadmill. “Stoptimisation” is what the DBA needs to be concerned with, and that means not doing the things you don’t need to do. There is plenty of REAL work for the DBA to accomplish day-to-day. Don’t get wrapped up in the minutiae; it’s simply not worth the effort.

David Fitzjarrell
David Fitzjarrell has more than 20 years of administration experience with various releases of the Oracle DBMS. He has installed the Oracle software on many platforms, including UNIX, Windows and Linux, and monitored and tuned performance in those environments. He is knowledgeable in the traditional tools for performance tuning – the Oracle Wait Interface, Statspack, event 10046 and 10053 traces, tkprof, explain plan and autotrace – and has used these to great advantage at the U.S. Postal Service, American Airlines/SABRE, ConocoPhillips and SiriusXM Radio, among others, to increase throughput and improve the quality of the production system. He has also set up scripts to regularly monitor available space and set thresholds to notify DBAs of impending space shortages before they affect the production environment. These scripts generate data which can also be used to trend database growth over time, aiding in capacity planning. He has used RMAN, Streams, RAC and Data Guard in Oracle installations to ensure full recoverability and failover capabilities as well as high availability, and has configured a 'cascading' set of DR databases using the primary DR databases as the source, managing the archivelog transfers manually and monitoring, through scripts, the health of these secondary DR databases. He has also used ASM, ASMM and ASSM to improve performance and manage storage and shared memory.
