Oracle Capacity Planning

May 1, 2007

by JP Vijaykumar

Prelude

02:00 AM
Boss:

Reclaim every byte of free space from the other tablespaces in the database/server that are not 100% full and allocate it to the EMP_TBSP01 tablespace, which is 100% full.

OncallDBA (Oracle):

I need to resize at least a few dozen datafiles to get 1 GB of free space on the server to allocate to the EMP_TBSP01 tablespace. I will have to work at least a couple of hours just to search for and reclaim free space from the other tablespaces that are not 100% full and allocate that space to EMP_TBSP01.

There is no perfect world. There is no perfect application. If I am permitted to say, there is no perfect CAPACITY PLANNING. In this world, we strive hard to achieve near PERFECTION.

Good capacity planning should provide sufficient free space for the database to grow for at least three months down the road. It should give the support team sufficient breathing room to meet the database's disk space requirements. If you have to fight for free space every day, you are in firefighting mode.
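
As an illustration of the three-month rule above, here is a minimal Python sketch (with made-up numbers) that turns an observed monthly growth rate into the free space you should keep on hand, and tells you how long the space you already have will actually last.

# Minimal sketch: how much headroom does the three-month rule require,
# and how long will the space we already have actually last?
# All figures below are hypothetical examples, not measurements.

def required_headroom_gb(monthly_growth_gb, months=3):
    """Free space needed to cover the planned number of months of growth."""
    return monthly_growth_gb * months

def months_until_full(free_space_gb, monthly_growth_gb):
    """How many months the current free space lasts at the observed growth rate."""
    return free_space_gb / monthly_growth_gb

monthly_growth_gb = 40        # hypothetical observed growth
free_space_gb     = 55        # hypothetical free space currently available

print("Headroom needed for 3 months:", required_headroom_gb(monthly_growth_gb), "GB")
print("Current free space lasts    :", round(months_until_full(free_space_gb, monthly_growth_gb), 1), "months")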

DISK SPACE IS ESTIMATED FOR THE DATABASE (RDBMS) ONLY
ESTIMATES ARE BASED ON ASSUMPTIONS, SAMPLING AND STATISTICS
ESTIMATES CANNOT QUANTIFY DISK SPACE REQUIREMENTS IN REAL TIME
ACTUALS ALWAYS VARY FROM ESTIMATES


NOTATIONS: SDB - SOURCE DATABASE (RDBMS/ FLAT FILE DATA/ USER INPUT DATA)
DDB - DESTINATION DATABASE

These estimates do not include the disk space requirements for software installs and other configuration files. These estimates are the disk space requirements for an RDBMS only.

A detailed discussion of capacity planning for different levels of RAID configuration is beyond the scope of this document.

Though this capacity planning document deals with an RDBMS in general, some of the features discussed are specific to the Oracle RDBMS.

01.00.00

CAPACITY PLANNING CHECKLIST
(THOUGH NOT EXHAUSTIVE, NEAR EXHAUSTIVE)
(THESE ESTIMATES ARE FOR DDB, EXPRESSED AS % OF SDB)

01.01.00

Identifying Source/Input data and Estimating Disk Space Requirements

01.01.01

Legacy RDBMS (y/n)
What is the current size of the legacy database?
(Assuming all the data is loaded into similar tables and indexed)
(200% of SDB)

01.01.02

Flat files (y/n)
What is the size of the data in the flat files?
Are all or only some columns from the flat files loaded into tables and indexed?
What % of the columns from the flat files are loaded into the destination tables?
(200-250% of SDB)
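
A minimal sketch of the 01.01 estimates above, using the percentage ranges as quoted; the sizes and the fraction of columns loaded are hypothetical inputs you would replace with your own numbers.

# Sketch of the 01.01 source-data estimates. The multipliers come straight from
# the ranges above (200% of SDB for a legacy RDBMS, 200-250% of SDB for flat
# files); every other number is a hypothetical example.

def legacy_rdbms_estimate_gb(sdb_gb, factor=2.0):
    """All legacy data loaded into similar tables and indexed: ~200% of SDB."""
    return sdb_gb * factor

def flat_file_estimate_gb(flat_file_gb, columns_loaded_fraction=1.0, low=2.0, high=2.5):
    """Flat files loaded and indexed: 200-250% of the portion of data actually loaded."""
    loaded_gb = flat_file_gb * columns_loaded_fraction
    return loaded_gb * low, loaded_gb * high

print(legacy_rdbms_estimate_gb(100))                            # 100 GB legacy db -> ~200 GB
print(flat_file_estimate_gb(60, columns_loaded_fraction=0.7))   # 60 GB of flat files, 70% of columns loaded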

01.02.00

Data loads (y/n)
One-time data load – what is the size of the data?
(125-200% of SDB)
Daily load / incremental load – what is the size of the expected daily data load?
(10-25% of DDB)
User input – what is the size of the user input data per day?
(10-20% of DDB)
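
To see how the 01.02 figures compound over a planning horizon, here is a hedged sketch that projects the database size from a one-time load plus the daily incremental and user-input loads; the load sizes, the 150% one-time factor and the 90-day horizon are illustrative assumptions.

# Sketch: project DDB size over a planning horizon from the 01.02 inputs.
# The one-time load factor (within the 125-200% of SDB range above) and the
# daily volumes are hypothetical; replace them with your own measurements.

def projected_size_gb(onetime_load_gb, onetime_factor, daily_load_gb, user_input_gb_per_day, days):
    base = onetime_load_gb * onetime_factor              # one-time load, stored and indexed
    daily_growth = daily_load_gb + user_input_gb_per_day
    return base + daily_growth * days

# 50 GB one-time load at 150%, 2 GB/day incremental load, 0.5 GB/day user input, 90 days
print(round(projected_size_gb(50, 1.5, 2.0, 0.5, 90), 1), "GB after 90 days")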

01.03.00

How is data migrated from the staging database to the production database?
(Development, Integration, QA and Production environments.)
See Appendix F

01.03.01

Normalized (y/n)
Denormalized source data is loaded into a normalized destination database.
What is the estimated reduction in disk space requirements?
Are any additional intersection tables / cross-reference tables created?
(75-100% of SDB)
See Appendix F

01.03.02

Denormalized (y/n)
Normalized source data is loaded into a denormalized destination database.
What is the estimated increase in disk space requirements?
(150-200% of SDB)
See Appendix F
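
The 150-200% figure in 01.03.02 comes from the parent columns being repeated on every child row once the data is denormalized. The sketch below works that out for a hypothetical order/order-line pair of tables; the row counts and average row widths are invented purely for illustration.

# Sketch: why denormalizing inflates disk space. In the normalized layout the
# parent row is stored once; in the denormalized layout its columns are repeated
# on every child row. All row counts and row widths are hypothetical.

orders          = 1_000_000    # parent rows
lines_per_order = 5            # child rows per parent
order_row_bytes = 200          # avg parent row width
line_row_bytes  = 120          # avg child row width

normalized_bytes   = orders * order_row_bytes + orders * lines_per_order * line_row_bytes
denormalized_bytes = orders * lines_per_order * (order_row_bytes + line_row_bytes)

print("normalized  :", round(normalized_bytes / 1024**3, 2), "GB")
print("denormalized:", round(denormalized_bytes / 1024**3, 2), "GB")
print("increase    :", round(denormalized_bytes / normalized_bytes, 2), "x")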

01.04.00

What is the type of application?

01.04.01

OLTP (y/n)
(Oracle) Add additional space for the temporary tablespace, undo tablespace, redo log groups and archivelog destination, as per requirements.
(125-175% of DDB)
See Appendices A-E

01.04.02

DSS (y/n)
(Oracle) Add additional space for the temporary tablespace, undo tablespace, redo log groups and archivelog destination, as per specific requirements.
(A DSS system needs a very large temporary tablespace)
(150-175% of DDB)
See Appendices A-E
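
A minimal sketch applying the 01.04 ranges above to a base data-plus-index size; the multipliers are the ones quoted in 01.04.01 and 01.04.02, and the 500 GB base is a hypothetical input.

# Sketch: instance overhead (temp, undo, redo, archivelog destination) expressed
# as a multiple of the base data + index size, using the ranges quoted above.
# OLTP: 125-175% of DDB; DSS: 150-175% of DDB (DSS needs a very large temp tablespace).

OVERHEAD_RANGES = {"OLTP": (1.25, 1.75), "DSS": (1.50, 1.75)}

def total_with_overhead_gb(base_ddb_gb, app_type):
    low, high = OVERHEAD_RANGES[app_type]
    return base_ddb_gb * low, base_ddb_gb * high

print("OLTP:", total_with_overhead_gb(500, "OLTP"))   # hypothetical 500 GB base
print("DSS :", total_with_overhead_gb(500, "DSS"))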

01.05.00

Estimating the disk space requirements for the indexes to be created in the database is the toughest task in capacity planning. The estimate of disk space requirements for indexes is based on:

  • 01 Type of application (OLTP/DSS)
  • 02 Number of columns indexed per table
  • 03 Types of indexes created.
  • 04 Indexes compressed (y/n)
  • 05 Multi-column /Compound column indexes created (y/n)
    • 0501 Columns used in multi-column indexes multiple times(y/n)

Creating more indexes in OLTP systems will slow down the DML operations. Having more indexes improves query performance in DSS applications. (75-175% of DDB)
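
One way to put a number on it is the rough rule of thumb sketched below: for each index, multiply the expected row count by the average width of the indexed columns plus a per-entry overhead (rowid and entry header), then allow for PCTFREE. This is not Oracle's exact sizing formula; the assumed 10-byte overhead and all the inputs are placeholders to be replaced with your own figures.

# Rough rule-of-thumb index sizing, not Oracle's exact formula. Each index entry
# is approximated as the indexed column widths plus a fixed per-entry overhead
# (rowid + entry header, assumed ~10 bytes here), grossed up for PCTFREE.

ENTRY_OVERHEAD_BYTES = 10      # assumed rowid + header overhead per entry

def index_size_gb(num_rows, avg_indexed_col_bytes, pctfree=10):
    entry_bytes = avg_indexed_col_bytes + ENTRY_OVERHEAD_BYTES
    raw_bytes   = num_rows * entry_bytes
    usable_fraction = 1 - pctfree / 100
    return raw_bytes / usable_fraction / 1024**3

# Hypothetical: a 50M-row table with three indexes of average key widths 8, 20 and 35 bytes
total = sum(index_size_gb(50_000_000, width) for width in (8, 20, 35))
print(round(total, 1), "GB of index space (rough estimate)")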

01.06.00

Additional disk space requirements for the database. (Oracle) Each instance additionally requires disk space for:
System tablespace
Sysaux tablespace (from Oracle 10g)
Undo tablespace
Temporary tablespace
Usage of temporary tablespace groups (from Oracle 10g)
Redo log groups/members size
(125 - 250% of DDB)

01.07.00 (Oracle)

Archivelog destination size is estimated based on:
Average amount of redo generated by the application
Frequency of the archivelog backups scheduled to back up the archivelogs and clean up the archivelog destination.
Add additional disk space to the archivelog destination to guard against archivelog backup failures and the destination filling up.
(A detailed discussion of the matter is beyond the scope of this document)
See Appendix A, B
(20-300% of DDB)
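
As a worked example of the sizing logic in 01.07.00 (and Appendix A), the sketch below sizes the archivelog destination from an average redo generation rate, the interval between archivelog backups, and a safety factor for backup failures and load surges. All of the input numbers are hypothetical.

# Sketch: size the archivelog destination from the average redo rate, the
# interval between scheduled archivelog backups (which also clean up the
# destination), and a safety factor for backup failures and heavy data loads.
# The rate, interval and safety factor below are hypothetical examples.

def archivelog_destination_gb(redo_gb_per_hour, hours_between_backups, safety_factor=2.0):
    return redo_gb_per_hour * hours_between_backups * safety_factor

# 1.5 GB of redo per hour, archivelog backup every 6 hours, 2x safety margin
print(archivelog_destination_gb(1.5, 6, 2.0), "GB")   # -> 18.0 GB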

01.08.00

(Oracle) The disk space estimates should be calculated taking into consideration the PCTFREE setting for the Oracle blocks in the database. The default PCTFREE setting is 10%.
If PCTFREE is increased under specific requirements, the amount of data that can be inserted into an Oracle block decreases. This increases the disk space requirement.
(A detailed discussion of the matter is beyond the scope of this document)
See Appendix E

01.09.00

Types of servers / architectures?
Disk space requirements increase for certain server architectures.

01.09.01

Stand-alone server (y/n)
A single instance running on a stand-alone server.
Disk space for System/Sysaux/ Temp/Undo tablespaces.
Disk space for Redo log groups and archivelog destination.
Please refer to preceding steps 01.04.00 to 01.08.00

01.09.02

(Oracle) RAC – Real Application Clusters (y/n)
Number of nodes in the real application cluster environment
Space for private undo tablespaces for each node
Space for redo log threads for each node
Space for archivelog destination for each node
Please refer to preceding steps 01.04.00 to 01.08.00

01.09.03

Replicated environment(y/n)
Number of replicated sites in the environment.
Additional disk space requirements to take care of resynchronization of sites in replication. Please refer to preceding steps 01.04.00 to 01.08.00

01.09.04

Standby (Physical) (y/n)
The primary and standby databases require equal amounts of disk space.
Additional disk space is required to accommodate the archivelogs and their transport to the standby site.
Please refer to preceding steps 01.04.00 to 01.08.00

01.09.05

Backups
Allocate sufficient disk space for database backups, based on these factors:
How frequently are backups scheduled on your database?
Where are your database backups stored?
How many days' worth of backups are retained in the backup destination?
(If backups are retained without limit, you need alternate storage.)
In any case, accumulating backup files for a long time does not serve any purpose.
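
A minimal sketch that turns the questions above into a number: the backup storage needed for a given backup size, backup frequency and on-disk retention. The figures are hypothetical.

# Sketch: backup destination sizing from backup size, frequency and retention.
# A full backup is assumed to be roughly the size of the used database space
# (compression ignored); all figures are hypothetical examples.

def backup_storage_gb(full_backup_gb, backups_per_day, retention_days,
                      archivelog_backup_gb_per_day=0.0):
    database_backups   = full_backup_gb * backups_per_day * retention_days
    archivelog_backups = archivelog_backup_gb_per_day * retention_days
    return database_backups + archivelog_backups

# 400 GB full backup once a day, kept on disk for 2 days, plus 30 GB/day of archivelog backups
print(backup_storage_gb(400, 1, 2, archivelog_backup_gb_per_day=30), "GB")   # -> 860.0 GB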

01.10.01

Capacity planning, as stated earlier, is based on statistics and projections.
In reality, your database may consume the disk space intended for a year's growth in just a month's time.
To handle calamities like this, as added insurance, add 20-50% additional disk space to your final estimates.
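
Pulling the earlier pieces together, here is a hedged sketch of a final roll-up that sums the individual estimates and applies the 20-50% insurance margin recommended above. The component values are placeholders for the numbers produced in the preceding steps.

# Sketch: final roll-up with the 20-50% insurance margin from 01.10.01.
# The component values below are placeholders for the estimates from the earlier steps.

components_gb = {
    "data + indexes":           1200,   # from 01.01 - 01.05
    "system/sysaux/undo/temp":   300,   # from 01.06
    "redo + archivelog dest":    150,   # from 01.07
    "on-disk backups":           900,   # from 01.09.05
}

subtotal = sum(components_gb.values())
low_estimate, high_estimate = subtotal * 1.2, subtotal * 1.5

print("subtotal             :", subtotal, "GB")
print("with 20-50% insurance:", low_estimate, "-", high_estimate, "GB")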

APPENDIX

Appendix A

(Oracle) Disk space requirements for archivelog destination:

Allocate disk space for the archivelog destination of your database, based on

  • Average amount of redo generated by the application
  • Frequency of the archivelog backups scheduled to back up the archivelog files and clean up the destination.
  • Add an extra percentage of disk space to accommodate additional archivelogs in the destination in case of:
    • archivelog backup failures
    • heavy data loads into the database
  • Writing archivelogs to multiple locations on the same server

Appendix B

(Oracle) Write-intensive applications generate heavy redo. Estimate the disk space needed to accommodate the archivelogs generated until they are backed up and the archivelog destination is cleaned up. Applications that handle intermittent data loads need sufficient disk space to absorb the sudden surges in redo generation caused by heavy data loads. The archiver process cannot write archivelogs to the archivelog destination when the destination is full. If the archiver process stops writing archivelogs, the database will hang.
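
A small companion sketch to the point above: given the free space in the archivelog destination and the redo rate, estimate how long you have before the archiver can no longer write and the database hangs. Both the rates and the free space are hypothetical.

# Sketch: how long until the archivelog destination fills and the archiver
# (and then the database) hangs. Inputs are hypothetical examples; compare the
# result against the interval between your archivelog backups.

def hours_until_destination_full(free_gb, redo_gb_per_hour):
    return free_gb / redo_gb_per_hour

normal_rate   = 1.5     # GB of redo per hour in normal operation
load_rate     = 8.0     # GB of redo per hour during a heavy data load
free_space_gb = 24.0    # free space in the archivelog destination

print("normal    :", hours_until_destination_full(free_space_gb, normal_rate), "hours")
print("heavy load:", hours_until_destination_full(free_space_gb, load_rate), "hours")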

Appendix C

(Oracle) Allocate sufficient disk space if you plan to hold a day's worth of database backups and archivelog backups on the same server where the database is running. This reduces the time needed to restore a complete database backup and perform a complete database recovery in the event of a crash. Restoring backups from tape to disk is comparatively slow.

Some best practices (Oracle):

  • Retain the latest full backup on the server where your database is running
  • Back up older files onto tape
  • Retain at least 15 days' worth of backups on tape. If you require a longer retention period, you need more tape to hold the backups
  • At least once every 45 days, restore your backups onto a different server and recover your database. This makes sure that your backups are readable and that you are not in for surprises when there is a disaster.

Appendix D

Add any additional disk space requirements for:

  • holding flat files for loading data into the database.
  • holding the import / export dump files of the database.
  • maintaining multiple versions of software homes (ORACLE_HOME) on the server.

Appendix E

(Oracle) PCTFREE is the percentage of space reserved by Oracle in each block. This space in an Oracle block is never used for inserting new rows; it is reserved for the future expansion of existing rows. The default PCTFREE is 10%. For specific requirements, PCTFREE may be increased above the default 10%. When the PCTFREE parameter is increased, the number of rows that can be inserted into an Oracle block decreases. An increase in the PCTFREE parameter increases the disk space requirements of the database proportionately.
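
To make "proportionately" concrete, the sketch below compares how many rows fit in an 8 KB block at the default PCTFREE of 10 versus an increased PCTFREE of 20, and the resulting increase in the number of blocks (and hence disk space) needed for the same data. The block size, the block header overhead and the average row length are assumed values for illustration.

# Sketch: effect of PCTFREE on rows per block and on total space. The 8 KB block
# size, the assumed block header overhead and the average row length are
# illustrative; the point is the relative increase in blocks needed.

BLOCK_BYTES  = 8192
HEADER_BYTES = 100            # assumed block overhead, for illustration only

def rows_per_block(avg_row_bytes, pctfree):
    usable = (BLOCK_BYTES - HEADER_BYTES) * (1 - pctfree / 100)
    return int(usable // avg_row_bytes)

avg_row_bytes = 150
rows_at_10 = rows_per_block(avg_row_bytes, 10)
rows_at_20 = rows_per_block(avg_row_bytes, 20)

print("rows per block at PCTFREE 10:", rows_at_10)
print("rows per block at PCTFREE 20:", rows_at_20)
print("extra blocks needed         : about", round((rows_at_10 / rows_at_20 - 1) * 100), "%")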

Appendix F

When migrating data from the staging area to development/integration/production:

Does any normalization / denormalization take place in the data migration? (y/n)

  1. Find the size of the source database.
  2. Add expected growth projections to arrive at the disk space requirements.
  3. Add any expected overhead.
  4. Add disk space requirements for additional redo log groups.
  5. Add additional disk space for the overhead of the archivelog destination.
