Creating an Oracle Patching Strategy

Patching is a necessity, yet CPU updates can include patches some installations may not need. This ‘all or nothing’ approach may create issues that didn’t exist prior to the patch, which is why it’s recommended to back up both the database and the software home before making any changes. Of course, if a patch needs to be backed out, two restores are required (the database home and the database itself), and those restores can take considerable time. Let’s look at a possible solution to this problem.

If sufficient planning is done ahead of time, a tar or pax archive of the software home can be made before any changes occur; this provides a clean slate, if you will, for either restoring the original software to the home or for creating an exact replica under a different directory. The time and labor savings are realized if that archive is used to create a second home, one that patches can be applied to without disrupting the daily workflow. All that is required is to register this new home in the central inventory and set a single session to use it. The OUI can register new homes with a single-line command:

./runInstaller -silent -invPtrLoc $ORACLE_HOME/oraInst.loc -attachHome ORACLE_HOME="/oracle/12c2/stage_apr2018" ORACLE_HOME_NAME="Ora12c2StagePtchHome"
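The archive-and-clone step that precedes this registration can be sketched as below. The pattern is shown with throwaway directories so it can run anywhere; in practice the two variables would point at the live software home and the staging location (both example paths are invented):

```shell
# Demonstrates the tar archive-and-clone pattern with throwaway
# directories; in practice ORIG_HOME would be the live software home
# (e.g. /oracle/12c2/dbhome_1) and STAGE_HOME the new staging home
# (e.g. /oracle/12c2/stage_apr2018). Run as the software owner.
ORIG_HOME=$(mktemp -d)
STAGE_HOME=$(mktemp -d)
ARCHIVE=$(mktemp -u).tar

# Stand-in for the installed software
mkdir -p "$ORIG_HOME/bin"
echo "oracle-binary" > "$ORIG_HOME/bin/oracle"

# Archive the home with its directory layout intact
tar -C "$ORIG_HOME" -cf "$ARCHIVE" .

# Unpack an exact replica, preserving permissions, under the staging location
tar -C "$STAGE_HOME" -xpf "$ARCHIVE"

ls "$STAGE_HOME/bin/oracle"
```

The same archive later doubles as the "clean slate" for restoring the original home, which is where the real savings come from.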

Execute an opatch lsinventory after the home has been added to ensure that the installer recognizes the new home; otherwise, any patching attempt will fail:

$ OPatch/opatch lsinventory
Oracle Interim Patch Installer version
Copyright (c) 2018, Oracle Corporation.  All rights reserved.

Oracle Home       : ...
Central Inventory : /export/home/oracle/oraInventory
   from           : ...
OPatch version    :
OUI version       :
Log file location : ...

Lsinventory Output file location : ...

Local Machine Information::
Hostname: ...
ARU platform id: 23
ARU platform description:: Solaris Operating System (SPARC 64-bit)

Installed Top-level Products (1):

Oracle Database 12c                                        
There are 1 products installed in this Oracle Home.

There are no Interim patches installed in this Oracle Home.


OPatch succeeded.

The above output proves the home is configured in the inventory and all is in order to proceed with patching.

In one session, logged in as ‘oracle’, set the ORACLE_HOME variable to the location of this new Oracle home. A possible naming convention is to use the SID of the database in the path along with the month and year of the CPU being applied, such as /u01/app/oracle/product/bubba_apr_2018/dbhome_1; this readily identifies the home as having the current CPU applied to it. Other naming conventions can also be used; it is a good idea to choose a convention that clearly identifies this as a patched home, so it can be distinguished from the original, unpatched version.
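Pointing a single session at the new home is just a matter of environment variables; a minimal sketch, using the hypothetical path above:

```shell
# Only this session uses the new home; the running database and all
# other sessions continue to use the original home untouched.
# The path is the hypothetical example from the naming convention above.
export ORACLE_HOME=/u01/app/oracle/product/bubba_apr_2018/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib

echo "$ORACLE_HOME"
```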

Verify that the version of OPatch is the version required to apply the CPU; it may be necessary to download and install a newer release of the utility. To preserve the old version, move the OPatch directory to OPatch_old, then unzip the archive into the new home.
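The swap itself is only a few commands; a sketch, assuming the new OPatch release has already been downloaded as a zip file (the zip file name is a placeholder):

```shell
# Run from the new home; the zip file name is a placeholder for
# whatever OPatch release the CPU's README calls for.
cd "$ORACLE_HOME"
mv OPatch OPatch_old               # preserve the shipped OPatch
unzip -q /stage/opatch_latest.zip  # extracts a fresh OPatch directory here
OPatch/opatch version              # confirm the new release is picked up
```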

Applying the CPU is quite easy in such a configuration — the destination database is still running from its original home, so no processes need to be stopped. Verify that the home being patched is the correct one (opatch reports this prior to asking for approval to proceed) and then continue as expected. The new home will be patched at the software level; the database level patching is left for the scheduled outage taken to complete the task.
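With the session environment pointed at the new home, the apply step looks something like the following sketch (the patch staging directory is a hypothetical example):

```shell
# The database is still running from the original home, so nothing
# needs to be stopped. Directory names are hypothetical examples.
cd /stage/cpu_apr2018             # the unzipped CPU patch directory
$ORACLE_HOME/OPatch/opatch apply
# opatch displays the target "Oracle Home" and prompts before making
# changes -- verify it shows the new home, not the original, before
# answering yes.
```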

Any additional patching, or any recovery patches that need to be applied, should follow immediately upon completion of the CPU. Again, since there is no database yet running in this new home, no work is disrupted and any number of patches can be applied without affecting the end users. Once patching is complete some files will need to be copied to the new Oracle home; the dbs, network/admin and hs/admin directories should be copied from the existing home to the new home to ensure no functionality is lost in the move. For those not using extproc the original $ORACLE_HOME/hs/admin directory will not need to be copied.
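The copy step can be sketched as follows, assuming the hypothetical home locations used earlier; -p preserves ownership and timestamps:

```shell
# Carry configuration files over from the original home to the new one.
# Both home paths are hypothetical examples.
OLD_HOME=/oracle/12c2/dbhome_1
NEW_HOME=/oracle/12c2/stage_apr2018

cp -p "$OLD_HOME"/dbs/*           "$NEW_HOME"/dbs/
cp -p "$OLD_HOME"/network/admin/* "$NEW_HOME"/network/admin/
cp -p "$OLD_HOME"/hs/admin/*      "$NEW_HOME"/hs/admin/   # skip if extproc is unused
```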

Patching is now complete and a new, patched home awaits its database. On the day of the switch, simply stop all services and processes running from the original home, change the oratab file to reference the new database home, properly set the environment and start the database. Check the alert log for any unexpected messages (there should be none) and proceed with executing $ORACLE_HOME/OPatch/datapatch -verbose. This process can take a bit of time, so be patient; progress will be displayed on the terminal as each section of the patch is applied to the data dictionary objects.
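The oratab change is a one-line substitution. The sketch below demonstrates it on a throwaway copy (the SID and home paths are invented examples); on a live host the file to edit is /etc/oratab, or /var/opt/oracle/oratab on Solaris:

```shell
# Demonstrated on a temporary copy; on a live host edit /etc/oratab
# (or /var/opt/oracle/oratab on Solaris). SID and paths are invented.
ORATAB=$(mktemp)
echo 'bubba:/u01/app/oracle/product/12.2.0/dbhome_1:Y' > "$ORATAB"

OLD_HOME=/u01/app/oracle/product/12.2.0/dbhome_1
NEW_HOME=/u01/app/oracle/product/bubba_apr_2018/dbhome_1

# POSIX-portable in-place edit (sed -i is not available on Solaris)
sed "s|:$OLD_HOME:|:$NEW_HOME:|" "$ORATAB" > "$ORATAB.new" && mv "$ORATAB.new" "$ORATAB"

cat "$ORATAB"
```

With oratab repointed and the environment reset, start the database from the new home and run datapatch as described above.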

When the datapatch utility has finished its tasks it’s time to start the listener and test local and remote connectivity. Also check if any extproc functionality is missing in this new installation. In 12.1 and later releases extproc is no longer configured in the listener.ora file; it uses the extproc.ora file located under $ORACLE_HOME/hs/admin (which explains the note a bit earlier in the article). When all looks good the patched database can be released to a waiting user community.

Occasionally the unthinkable happens and the patch creates issues not previously experienced, requiring that it be backed out. With the configuration described here such a backout takes little effort: shut down the running database, restore the original home location in the oratab file, reset the environment to use the unpatched home, then restore the database via RMAN, recover to just before the patch and open the database. All pieces of the patch are now gone and no tedious patch backout procedures were necessary.
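A hedged sketch of the RMAN portion of that backout, assuming the environment has already been reset to the original home; the restore-point timestamp is an invented example:

```shell
# Environment must already point at the original, unpatched home.
# The timestamp is an invented example -- use a time just before the
# database-level patching (datapatch) began.
rman target / <<'EOF'
startup mount;
run {
  set until time "to_date('2018-04-21 01:00:00','yyyy-mm-dd hh24:mi:ss')";
  restore database;
  recover database;
}
alter database open resetlogs;
EOF
```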

Presuming the patch was successful, the original home can be backed up to tape and then deleted. If a ‘backout’ occurred, the patched home can be removed without any backup being necessary. Space is reclaimed for the next CPU endeavor, if it’s determined that the CPU is necessary to fix any current issues.

This may appear to be a rather unorthodox way to apply CPU patches, but it may also save time and effort in the long run by allowing the DBA team to patch a database home without disrupting the workflow. An hour or two to finish the database patching is far less time than patching an existing home, given that backups (both operating system and database) need to be taken and planning must include considerable time to restore the original software from the tape archives in the event of a patching failure. It’s also a fairly ‘clean’ method, since it avoids the remnants that can be left behind by restoring the original software on top of a patched home (files created by the patched version won’t be removed by restoring the original software to the patched location).

Consider this method before applying the next CPU; it will likely save time, money and resources better used for more pressing projects.


David Fitzjarrell
David Fitzjarrell has more than 20 years of administration experience with various releases of the Oracle DBMS. He has installed the Oracle software on many platforms, including UNIX, Windows and Linux, and monitored and tuned performance in those environments. He is knowledgeable in the traditional tools for performance tuning – the Oracle Wait Interface, Statspack, event 10046 and 10053 traces, tkprof, explain plan and autotrace – and has used these to great advantage at the U.S. Postal Service, American Airlines/SABRE, ConocoPhillips and SiriusXM Radio, among others, to increase throughput and improve the quality of the production system. He has also set up scripts to regularly monitor available space and set thresholds to notify DBAs of impending space shortages before they affect the production environment. These scripts generate data which can also be used to trend database growth over time, aiding in capacity planning. He has used RMAN, Streams, RAC and Data Guard in Oracle installations to ensure full recoverability and failover capabilities as well as high availability, and has configured a 'cascading' set of DR databases using the primary DR databases as the source, managing the archivelog transfers manually and monitoring, through scripts, the health of these secondary DR databases. He has also used ASM, ASMM and ASSM to improve performance and manage storage and shared memory.
