Oracle Upgrades to 12.2.0.1

With the recent release of Oracle version 12.2.0.1, some shops will be planning and executing upgrades, drawn in part by the expanded JSON functionality, APEX improvements and online table management features such as table moves and partition splits. Careful planning is crucial to a successful upgrade, and that includes object management tasks that may not seem important but can send an upgrade ‘sideways’. Let’s look at those possibly overlooked items.

Depending upon which release the upgrade starts from, the path may be longer than expected, passing through more than one Oracle release before finally arriving at 12.2.0.1. Upgrades from versions lower than 11.2.0.3 must first reach release 11.2.0.3 or 11.2.0.4 before proceeding further; for this discussion it is presumed that the database is at one of those two releases.

The obvious place to begin is with the pfile or spfile; deprecated parameters should be removed or commented out to prevent them from being set. The following list shows the parameters deprecated by Oracle for releases up to and including 12.2.0.1 (a query after the list shows how to check which of them are explicitly set):


active_instance_count
background_dump_dest
buffer_pool_keep
buffer_pool_recycle
commit_write
cursor_space_for_time
fast_start_io_target
global_context_pool_size
instance_groups
lock_name_space
log_archive_start
max_enabled_roles
parallel_automatic_tuning
parallel_io_cap_enabled
parallel_server
parallel_server_instances
plsql_debug
plsql_v2_compatibility
rdbms_server_dn
remote_os_authent
resource_manager_cpu_allocation
sec_case_sensitive_logon
serial_reuse
sql_trace
standby_archive_dest
user_dump_dest
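
As a quick sanity check, the data dictionary can report which deprecated parameters have been explicitly set in the instance to be upgraded; the query below is a minimal sketch using the ISDEPRECATED and ISDEFAULT columns of V$PARAMETER, run against the pre-upgrade database:

-- List deprecated parameters that have been explicitly set in this instance
select name, value
from v$parameter
where isdeprecated = 'TRUE'
and isdefault = 'FALSE'
order by name;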

One parameter from that list, sec_case_sensitive_logon, deserves special mention: it is unsupported in what Oracle calls ‘Exclusive Mode’, a reference to how the SQLNET.ALLOWED_LOGON_VERSION_SERVER parameter is set (if it’s set at all) in the sqlnet.ora file. By default it is set to 12, so that only version 12 clients can connect to the database. It can be set to values as low as 8 (to ensure backward compatibility for older applications), and its value also affects accounts with a password version lower than 11G. For compatibility reasons, case-insensitive passwords are required by some older applications, or by applications that access both a 12.x database and an older release. The pre-upgrade tool provided by Oracle will report such issues so they can be addressed prior to the upgrade. The command to run the tool is:


$ORACLE_HOME/jdk/bin/java -jar {Oracle 12.2 home}/rdbms/admin/preupgrade.jar FILE TEXT DIR {directory to write the output to}

The $ORACLE_HOME variable, and the corresponding environment, must be set to the current, pre-12.2 Oracle home for the database to be upgraded; the preupgrade.jar itself is taken from the new 12.2 home. That utility generates the following scripts and directories:


dbms_registry_basic.sql         postupgrade_fixups.sql          preupgrade.log                  upgrade
dbms_registry_extended.sql      preupgrade_driver.sql           preupgrade_messages.properties
oracle                          preupgrade_fixups.sql           preupgrade_package.sql

The preupgrade.log contains all of the information necessary to prepare the database for upgrade; the generated .sql scripts are self-describing, and the log reports which of them to run to prepare the database. [For those who are curious, setting SQLNET.ALLOWED_LOGON_VERSION_SERVER to 8 ‘turns off’ Exclusive Mode so that accounts with a password version of 10G remain accessible; leaving SQLNET.ALLOWED_LOGON_VERSION_SERVER unset when accounts have only a 10G password version (such as accounts created by a legacy application) will render those accounts inaccessible.] There may also be post-upgrade actions to take (such as upgrading the database time zone), which will be reported once the postupgrade_fixups.sql script has been run.
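
To see which accounts would be affected before Exclusive Mode comes into play, a quick look at DBA_USERS can help; the minimal sketch below simply lists accounts that still carry a 10G password version (output will vary with release and password history):

-- Accounts with a 10G password version; those with ONLY 10G are locked out in Exclusive Mode
select username, password_versions
from dba_users
where password_versions like '%10G%'
order by username;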

Invalid objects can interfere with an upgrade in such a way as to cause ORA-00600 errors to be generated. In fairness, it is a special case that causes this: views, materialized views, packages, functions and procedures that use database links that no longer exist can send an upgrade into a downward spiral. Since these objects are already invalid and cannot be successfully compiled, it’s highly likely they are no longer in use; another common case, however, is when a test, dev or UA database is created from a production database and the production database links are dropped for security reasons. It will still be necessary to find these objects (after running utlrp.sql they will appear in the DBA_ERRORS view) so their DDL can be preserved and the objects dropped prior to starting the upgrade. After the upgrade has completed successfully these objects can be recreated, although they will still be invalid.
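
One way to locate such objects is through the dependency information in the data dictionary; the query below is a minimal sketch that lists invalid objects referencing a database link no longer present in DBA_DB_LINKS (domain-qualified link names may need extra handling), after which DBMS_METADATA.GET_DDL can be used to preserve their definitions:

-- Invalid objects that depend on a database link which no longer exists
select d.owner, d.name, d.type, d.referenced_link_name
from dba_dependencies d
join dba_objects o
on o.owner = d.owner
and o.object_name = d.name
and o.object_type = d.type
where d.referenced_link_name is not null
and o.status = 'INVALID'
and not exists (select null
from dba_db_links l
where l.db_link = d.referenced_link_name)
order by d.owner, d.name;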

The dba_recyclebin MUST be purged prior to starting an upgrade, as the upgrade process accesses the data dictionary for the database undergoing the upgrade and recyclebin objects can stop an upgrade or cause the database to throw unexpected errors after the upgrade has apparently completed successfully.
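
A quick check and purge might look like the following sketch, run as a suitably privileged user (SYSDBA, or an account granted the PURGE DBA_RECYCLEBIN privilege):

-- See what is sitting in the recyclebin, then purge it database-wide
select owner, type, count(*)
from dba_recyclebin
group by owner, type
order by owner, type;

purge dba_recyclebin;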

Unused columns are also an issue: the data dictionary keeps track of them and sets their position in the table to 0, and the upgrade process doesn’t check for a position value of 0; it simply checks the dictionary for columns that may need to be upgraded because of features that have been selected. Unlike invisible columns, which CAN still be used after they are set to INVISIBLE, columns marked UNUSED have no useful names and cannot be recovered. If developers have set columns to UNUSED they are really, truly gone and need to be physically dropped. A query of DBA_UNUSED_COL_TABS will provide a list of the owners, tables and number of columns marked UNUSED. It’s a simple task to use that view to create a script to rid those tables of their unused baggage:



-- Run this as a script so that only the generated ALTER statements land in the spool file
set heading off feedback off pagesize 0 verify off trimspool on echo off

spool drop_unused.sql
select 'alter table '||owner||'.'||table_name||' drop unused columns;'
from dba_unused_col_tabs;
spool off

-- Run the generated script and log the results
set feedback on
spool drop_unused_cols.log
set echo on
@drop_unused.sql
set echo off
spool off

Running these steps before starting the upgrade removes one more way for it to fail.

Back up the database BEFORE starting an upgrade; should the upgrade fail, the database can be restored into its original home so that the business isn’t without it during the ‘investigate the failure’ phase. Do not ignore this step; it has saved many a DBA and enterprise from losing a database to a failed upgrade.
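
How the backup is taken will depend on the shop’s standards; as one minimal sketch (not a complete backup strategy, and assuming RMAN is already configured with appropriate destinations and retention), a backup taken from the pre-12.2 home could look like this:

rman target /

backup database plus archivelog;
backup current controlfile;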

It is no longer possible to run the catupgrd.sql script directly from the SQL> prompt; the upgrade is now driven by the catctl.pl Perl script, which is called as shown below:


$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catctl.pl $ORACLE_HOME/rdbms/admin/catupgrd.sql

which runs the upgrade in a mixture of serial and parallel phases; each phase, and the mode used, is reported on the terminal screen during the upgrade. Sample output is shown below:


------------------------------------------------------
Phases [0-115]         Start Time:[2017_09_21 13:59:45]
------------------------------------------------------
***********   Executing Change Scripts   ***********
Serial   Phase #:0    [{sid}] Files:1    Time: 146s
***************   Catalog Core SQL   ***************
Serial   Phase #:1    [{sid}] Files:5    Time: 54s
Restart  Phase #:2    [{sid}] Files:1    Time: 0s
***********   Catalog Tables and Views   ***********
Parallel Phase #:3    [{sid}] Files:19   Time: 18s
Restart  Phase #:4    [{sid}] Files:1    Time: 0s
*************   Catalog Final Scripts   ************
Serial   Phase #:5    [{sid}] Files:6    Time: 22s
*****************   Catproc Start   ****************
Serial   Phase #:6    [{sid}] Files:1    Time: 22s
*****************   Catproc Types   ****************
Serial   Phase #:7    [{sid}] Files:2    Time: 21s
Restart  Phase #:8    [{sid}] Files:1    Time: 1s
****************   Catproc Tables   ****************
Parallel Phase #:9    [{sid}] Files:69   Time: 35s
Restart  Phase #:10   [{sid}] Files:1    Time: 1s
...

When the upgrade completes, the total elapsed time is reported and the names and locations of the log file and summary report are displayed. These files need to be checked before moving on to any post-upgrade activities, since any errors encountered or generated during the upgrade are reported there; no such output is displayed on the terminal during the upgrade process. If the steps recommended by the preupgrade tool have been followed it’s very unlikely that the upgrade will fail; obviously there is the odd chance that something in the database configuration specific to an application can cause problems, but the odds are in the DBA’s favor if the preupgrade instructions are followed.
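
As a quick post-upgrade sanity check (in addition to running the postupgrade_fixups.sql script generated by the preupgrade tool), the component registry can be queried to confirm that everything reports VALID at the new version, and utlrp.sql can be run to recompile any invalid objects; a minimal sketch follows:

-- Confirm component versions and status after the upgrade
select comp_id, comp_name, version, status
from dba_registry
order by comp_id;

-- Recompile invalid objects ('?' expands to the new ORACLE_HOME in SQL*Plus)
@?/rdbms/admin/utlrp.sql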

Upgrading to version 12.2.0.1 is different from previous upgrades but it doesn’t have to end in tragedy. Using the preupgrade tool (as Oracle recommends), following the instructions provided by that tool and addressing the issues listed here will go a long way toward making a 12.2.0.1 upgrade a success.


David Fitzjarrell
David Fitzjarrell has more than 20 years of administration experience with various releases of the Oracle DBMS. He has installed the Oracle software on many platforms, including UNIX, Windows and Linux, and monitored and tuned performance in those environments. He is knowledgeable in the traditional tools for performance tuning – the Oracle Wait Interface, Statspack, event 10046 and 10053 traces, tkprof, explain plan and autotrace – and has used these to great advantage at the U.S. Postal Service, American Airlines/SABRE, ConocoPhillips and SiriusXM Radio, among others, to increase throughput and improve the quality of the production system. He has also set up scripts to regularly monitor available space and set thresholds to notify DBAs of impending space shortages before they affect the production environment. These scripts generate data which can also be used to trend database growth over time, aiding in capacity planning. He has used RMAN, Streams, RAC and Data Guard in Oracle installations to ensure full recoverability and failover capabilities as well as high availability, and has configured a 'cascading' set of DR databases using the primary DR databases as the source, managing the archivelog transfers manually and monitoring, through scripts, the health of these secondary DR databases. He has also used ASM, ASMM and ASSM to improve performance and manage storage and shared memory.
