Moving from Oracle Solaris to x86

Oracle has decided to retire Solaris, giving those who are running Oracle on Solaris roughly two years to move to another operating system. A natural choice is commodity hardware (x86) running Linux. Since Oracle Enterprise Linux (OEL) is a ‘ready to use’ derivative of Red Hat Enterprise Linux, it is reasonable to presume that many companies will choose it as a Solaris replacement. Available as a full release for bare-metal systems and as a pre-configured VM, it’s positioned to be a ‘go-to’ option for migration.

Solaris on SPARC is a big-endian operating system, meaning the most-significant byte of a multi-byte value is stored first (at the lowest address). Linux on x86, on the other hand, is little-endian: the least-significant byte is stored first. This can make moving from Solaris to Linux a challenge, not for the software installation but for the database migration, as the existing datafiles, logfiles and controlfiles can’t be used directly; they must be converted to the proper endian format. Time and space can be significant issues in such a conversion, as there may be insufficient disk space to hold two copies of the database files, one big-endian and one little-endian. The actual conversion shouldn’t be difficult, as RMAN can convert datafile copies taken from a recent backup.

Other considerations include access to the new servers (they may not be on the production LAN while the build and conversion take place) and replicating the current Oracle environment, including scripts, NFS mounts, other shared file systems and utilities, from Solaris to Linux. RMAN isn’t the only migration method; expdp/impdp can transfer users, tables and data from the source database to its new home. Using Data Pump, however, means the destination database can’t be rolled forward after the import has completed, which can cause a longer outage than the enterprise may want to endure, since all user access to the application data must be stopped before the export.
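Before dismissing the RMAN route, it’s worth seeing what the cross-endian conversion looks like. A straight restore won’t work across endian formats; instead, each tablespace is set read-only on the source and its datafiles are converted as part of a transportable-tablespace move. A minimal sketch, run on the Solaris side, assuming a tablespace named USERS and a staging directory /stage (both hypothetical names for illustration):

    $ rman target /

    RMAN> CONVERT TABLESPACE users
            TO PLATFORM 'Linux x86 64-bit'
            FORMAT '/stage/%U';

The converted copies land in /stage, ready to be shipped to the Linux server; the tablespace metadata follows via a transportable-mode Data Pump export.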

Looking at the ‘worst-case scenario’, let’s proceed with the migration using Oracle Enterprise Linux and a Data Pump transfer of users, tables and data. (Installing the software should be a fairly simple task, so that portion of the migration won’t be covered here.) Once the server is running and Oracle is installed, the first step is to create an “empty” Oracle database as the destination; this takes care of the endian issue, since the new datafiles are created in the proper endian format. If the Linux server is built with the same file system structure as the source, it’s simply a matter of performing the import and checking the results to verify nothing went awry. Any tnsnames.ora files that applications use for connectivity need to be modified to point to the replacement database server, and remote connections need to be tested to verify they do, indeed, work as expected. After successful application testing the database and its new ‘home’ should be ready to replace the original Solaris machines.
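As an illustration, with the empty database in place and the dump files staged, the full import might look like the following sketch; the DATA_PUMP_DIR directory object is standard, but the dump file names are assumptions:

    $ impdp system full=y \
          directory=DATA_PUMP_DIR \
          dumpfile=soldb_full_%U.dmp \
          logfile=soldb_full_imp.log

The log file should be searched for ORA- errors after the run, and object counts in the source and destination schemas compared, to verify nothing was skipped.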

It’s possible that the new Linux server will use a different storage configuration; it’s also possible that the DBA team, in an effort to simplify its tasks, will decide to use Oracle Managed Files (OMF). In either case the import may not succeed, since Data Pump can’t create the datafiles. This is when the tablespaces need to be created ahead of the Data Pump import; the SQLFILE parameter of Data Pump import writes all of the DDL contained in the dump file to the specified script instead of executing it. Once this file is created it can be edited to change the file locations, or to use OMF, when creating the tablespaces. It’s critical that the file sizes are sufficient to contain the data and allow for growth. Data Pump will create users as long as the necessary tablespaces exist, so the only DDL that should need to run before the import is the CREATE TABLESPACE statements; all other DDL should be removed from the script before it’s executed. It’s also possible to create the destination database from a DBCA template by modifying an existing one. Creating the database manually or with a template is a DBA team decision; in the absence of database creation scripts that can be modified, modifying an existing template may be the best way to minimize errors.
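For example, the tablespace DDL can be extracted from the dump file without importing any data, reusing the dump file names assumed earlier:

    $ impdp system full=y \
          directory=DATA_PUMP_DIR \
          dumpfile=soldb_full_%U.dmp \
          include=TABLESPACE \
          sqlfile=create_tbs.sql

Nothing is written to the database; create_tbs.sql simply receives the CREATE TABLESPACE statements, which can then be edited to repoint the DATAFILE clauses at the new storage, or stripped of them entirely so OMF names the files.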

Presuming the storage configuration on the x86 system differs from that on Solaris, and that the file system structure doesn’t match the source server, the tablespace DDL has now been extracted, modified and is ready to run. After the tablespaces are in place the import can be run; it’s always good practice to make a test run into the new database before the final import is executed, to verify that the process runs smoothly. The final import should put the new database in ‘proper working order’ for the applications that use it, so all insert/update/delete activity must be stopped before the final export is taken; this ensures data consistency across the tables.
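The final export on the Solaris side can then be as simple as the following sketch; the file names match the earlier assumption, and flashback_time (accepted in this literal form by recent Data Pump releases) is an extra guard that ties every table to a single point in time:

    $ expdp system full=y \
          directory=DATA_PUMP_DIR \
          dumpfile=soldb_full_%U.dmp \
          filesize=10g \
          flashback_time=systimestamp \
          logfile=soldb_full_exp.log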

One area that may be an issue is application account passwords in the new database. It’s a good idea to verify that the passwords from the source database work in the new database; if they don’t, they can be reset to their current values, since end-users and batch jobs will be happier when logins succeed.
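If a password does need to be restored, the hash can be carried over without ever knowing the cleartext. A sketch using DBMS_METADATA on the source database, with a hypothetical account named APP_USER:

    $ sqlplus -s / as sysdba <<EOF
    set long 20000 pagesize 0 linesize 200
    select dbms_metadata.get_ddl('USER','APP_USER') from dual;
    EOF

The output includes an IDENTIFIED BY VALUES '<hash>' clause; replaying that clause in an ALTER USER statement on the destination resets the account to its original password.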

Connectivity to remote systems is also critical, so any tnsnames.ora files in use on the source system need to be copied to the destination system, and all database links need to be tested. This may require the security and system administration teams to open ports, set firewall rules and install any software packages not included with the operating system. There should be no surprises once the new server and database are brought online.
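A quick smoke test covers both layers, assuming a hypothetical TNS alias REMDB and a database link named REMDB_LINK:

    $ tnsping REMDB

    $ sqlplus -s / as sysdba <<EOF
    select sysdate from dual@remdb_link;
    EOF

tnsping proves name resolution and the network path to the remote listener; the query over the link proves the firewall rules and remote credentials all cooperate.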

Other choices, such as Unified Auditing or Database Vault, that require an Oracle kernel relink need to be discussed and decided upon before the destination database is created. Re-linking the Oracle kernel before any database exists reduces overall downtime for the migration.
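For example, enabling Unified Auditing is a documented relink, run with all databases and listeners on that Oracle home shut down (Database Vault enablement varies by release, so only the auditing relink is sketched here):

    $ cd $ORACLE_HOME/rdbms/lib
    $ make -f ins_rdbms.mk uniaud_on ioracle

Once the instances are restarted the relinked binary is in effect, and the destination database can be created with the desired options already enabled.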

No document can cover every conceivable issue that may arise, so it’s possible the first pass at moving from Solaris to x86 will reveal problems that weren’t obvious at the outset. It may also take more than one pass to “iron out” all of the “kinks” and arrive at a smoothly running process. The time and effort expended to get there will pay off handsomely when migration day finally arrives.

Moving from Solaris to x86 (or any other operating system) may not be at the top of your wish list, but it will become a necessity when Solaris is no longer supported. Getting a start now provides the time needed to find and fix any issues before the fateful weekend arrives, so the DBA team can deliver a (mostly) painless move.
