The new server using Red Hat is up and running, Oracle RDBMS software has been installed, a new instance is running with all of the new features you can possibly imagine (at long last, run_fast=true is available), and “all” that’s left to do is migrate the source database (or schema) to the new/target database. It is midnight Friday and you have an 8-hour planned outage/maintenance period available to perform the migration. What can you do prior to midnight Friday to make the migration as painless as possible? Let’s look at four areas where planning can make a difference: general administration, the export phase, the import phase, and a handoff phase.
General administration/project planning
You are the one in charge of the database migration. What do you and Hannibal from the A-Team have in common? Answer: “I love it when a plan comes together.” To help make the plan come together, fire up Visio or PowerPoint and diagram a workflow process. As a minimum, you can take the low-tech route and come up with a timeline. Even if you start with brainstorming and writing down ideas as they come to mind, you will be much better off having everyone on the same sheet of music (or, “One band, one sound”). Items to consider include:
- Diagramming the workflow/process and scheduling coordination meetings
- Assigning roles and responsibilities among team members
- Creating and distributing a contact list, including how to reach other key personnel (managers, system administrators, testers, developers, third-party/application providers, customers, account managers, etc.)
- Hours of operation for Starbucks (some of them open an hour later on Saturdays)
- After-hours building access for contractors (meet at a designated place and time?)
- Janitorial services – do they alarm the building/office when they are done? There is nothing like an alarm going off, as you walk down the hall, to add a little excitement to the evening.
- Notification to security/police regarding after-hours presence (“Really, Officer, we work here; we’re not just sitting here looking like we work here”)
- Establishing a transfer point on the file system and ensuring there is enough disk space for the export
- Acquiring a complete understanding of schema changes (how and when key tables get altered/modified, including data transformation processes)
- Establishing a work schedule (does every DBA need to be present the entire time, or can schedules be staggered?)
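Several of the items above can be verified by script well before midnight Friday. As one example, the transfer-point disk-space check might be sketched like this (the directory and the required size are hypothetical placeholders; substitute figures from a practice export):

```shell
#!/bin/sh
# Sketch: verify the export transfer point has enough free space.
# The directory and size below are hypothetical; use numbers taken
# from a practice run of the export.

check_space() {
    # usage: check_space <directory> <needed_mb>
    dir=$1
    needed_mb=$2
    # df -Pm reports sizes in 1 MB blocks; column 4 of line 2 is free space
    avail_mb=$(df -Pm "$dir" | awk 'NR==2 {print $4}')
    if [ "$avail_mb" -ge "$needed_mb" ]; then
        echo "OK: ${avail_mb} MB free in $dir (need ${needed_mb} MB)"
        return 0
    else
        echo "FAIL: only ${avail_mb} MB free in $dir (need ${needed_mb} MB)"
        return 1
    fi
}

# example: check_space /u02/xfer 8192
```

Running this during each practice export also tells you how fast the dump files grow, which feeds directly into the timeline.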
Pre-export and export phase
Aside from a shortage of time, there is very little to prevent you (or the person in charge of the export) from practicing the export several times over and ensuring there are no glitches in this part of the plan. Does the export have to be a one-step, export-everything-from-A-to-Z process? How about phasing it instead? Consider breaking the export into functional groups: support tables, main tables, altered tables, and historical/static tables.
By grouping tables in this manner, you can interleave export and import: once the export of a group is complete, you can start its corresponding import. It may take two hours to export and four hours to import, but that does not mean it takes six consecutive hours. Why the time difference? Export and import are not one to one – export runs quite a bit faster than import, and both can run faster if optimized a bit. Do not forget that indexes are not being exported; they will be rebuilt after the data is loaded into the target database (see previous article regarding use of the indexfile).
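A low-tech way to implement the phasing is one parameter file per functional group, so each group's export (and, once it finishes, its import) can be kicked off independently. A hypothetical parfile for the support-tables group might look like this (the credentials, table names, and paths are examples only, not from any real system):

```
# support_tables.par -- hypothetical exp parameter file, one per group
userid=system/manager
tables=(lookup_codes, country, status_type)
file=/u02/xfer/support_tables.dmp
log=/u02/xfer/support_tables_exp.log
indexes=n
statistics=none
```

Note indexes=n: as discussed above, indexes are rebuilt on the target from the indexfile, so there is no point exporting them.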
How are you driving the exports: interactive mode or use of shell scripts and parameter files? Shell scripts should have four key features:
- An interview process
- Feedback or a summary of what was entered
- Existence checks (includes parameter files, ability to write to the dump and log file locations, and database connectivity)
- Bail out mechanisms (“Do you want to continue?”) after key steps or operations
One script can drive the entire export process, and the bail-out points can be used as signals (accompanied by extensive use of echo statements denoting where you are in the process). A key metric to capture while practicing and refining the scripts is the time it takes to perform all of the exports.
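The four features and the echo-statement signals can be sketched in a driver like the one below. The SID, directory, and parameter file names are hypothetical, and the actual exp call is left commented out:

```shell
#!/bin/sh
# Sketch of an export driver showing the four features above: interview,
# feedback, existence checks, and bail-out points. Names are hypothetical.

confirm() {
    # bail-out point: abort unless the operator answers y
    printf '%s [y/n] ' "$1"
    read answer
    [ "$answer" = "y" ] || { echo "Aborting."; exit 1; }
}

check_writable() {
    # existence check: directory must exist and be writable
    [ -d "$1" ] && [ -w "$1" ]
}

main() {
    # --- interview ---
    printf 'Oracle SID: '        ; read ORACLE_SID
    printf 'Dump/log directory: '; read DUMP_DIR

    # --- feedback ---
    echo "SID=$ORACLE_SID  DUMP_DIR=$DUMP_DIR"
    confirm "Is this correct?"

    # --- existence checks ---
    check_writable "$DUMP_DIR" || { echo "Cannot write to $DUMP_DIR"; exit 1; }
    [ -f support_tables.par ]  || { echo "Missing parameter file"; exit 1; }

    # --- bail-out, then an echo marker before the real work ---
    confirm "Start export of support tables?"
    echo "=== exporting support tables ==="
    # exp parfile=support_tables.par       # actual call, commented out here
}

# main    # uncomment to run the interview interactively
```

Timing each practice run of a script like this gives you the export-duration metric mentioned above.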
If a schema migration is taking place (as opposed to a full database migration), what are the dependencies among schemas? Look for names/items such as build_manager, process_logger, and stage (more germane to a warehouse). “Build_manager” (as an example of a name) may contain common or public functions, procedures and packages. Process_logger may be the owner of process logs for all schemas (fairly common if you see “pragma autonomous_transaction” in the text of a source; it is a way of capturing errors during failed transactions). Unless the new schema incorporates these external or associated schemas, some or all of these otherwise “Left Behind” schemas need to be accounted for in the target database.
While the export is taking place, what is happening with the non-exported schemas? You may need to disable connections, change passwords, disable other processes, and suspend crons while the export runs. Web application connections tend to be like crabgrass (i.e., hard to kill), and an effective way of stopping them is to change a password. Finally, assuming your plan comes together, what is the disposition of the source database afterwards?
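Freezing logins can itself be scripted ahead of time. A minimal sketch, assuming a hypothetical webapp_user account (the sqlplus call is left commented out so the generated file can be reviewed first):

```shell
#!/bin/sh
# Sketch: freeze application logins before the export begins.
# The schema name and temporary password are hypothetical.

cat > freeze_logins.sql <<'EOF'
-- lock the application account so no new sessions can start
ALTER USER webapp_user ACCOUNT LOCK;
-- or cut off a stubborn connection pool by changing its password
ALTER USER webapp_user IDENTIFIED BY temp_migration_pw;
-- sessions already connected can be killed (sid,serial# from v$session)
-- ALTER SYSTEM KILL SESSION '123,456';
EOF

# sqlplus -s "/ as sysdba" @freeze_logins.sql   # actual call, commented out
```

A matching unfreeze script (ACCOUNT UNLOCK, restore the password) belongs in the plan too – see the warning below about re-enabling whatever you disabled.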
Pre-import and import phase
Practice creating schemas and associated physical/logical objects such as tablespaces and datafiles. The desired end result is no ORA-xxxxx errors whatsoever, and all create scripts should be re-runnable. With respect to import parameter files, ensure fromuser marries up with touser. Using what was gleaned from the indexfile, pre-create tables in the target database.
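One way to make a create script re-runnable is to let the initial drops fail harmlessly and only treat errors after that point as fatal. A sketch, with hypothetical names, paths, and sizes:

```shell
#!/bin/sh
# Sketch: a re-runnable tablespace/schema create script.
# Tablespace, user, datafile path, and size are all hypothetical.

cat > create_target.sql <<'EOF'
-- drops may fail harmlessly on the first run; keep going
WHENEVER SQLERROR CONTINUE
DROP TABLESPACE app_data INCLUDING CONTENTS AND DATAFILES;
DROP USER app_owner CASCADE;

-- from here on, any error should stop the run for investigation
WHENEVER SQLERROR EXIT FAILURE
CREATE TABLESPACE app_data
  DATAFILE '/u03/oradata/TARGET/app_data01.dbf' SIZE 500M;
CREATE USER app_owner IDENTIFIED BY change_me
  DEFAULT TABLESPACE app_data QUOTA UNLIMITED ON app_data;
GRANT CREATE SESSION, CREATE TABLE TO app_owner;
EOF

# sqlplus -s "/ as sysdba" @create_target.sql   # actual call, commented out
```

Run it twice in practice: the second run proves it is genuinely re-runnable.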
For tables undergoing a modification, questions to ask include where, when and how does that take place? Do the changes occur within the user’s schema, or within a temporary or migration schema, followed by “insert into new version of table as select from temp table?”
Fully understand how major tables are being changed – you may take for granted what appear to be ash and trash “not null” constraints, but application changes may rely completely upon them. In other words, if some hiccup in the process forces you to rebuild a table on the fly, taking care of the PK, FK, and unique constraints may not be enough.
What about cron and database jobs? How are you migrating/exporting all of those? Something which frequently goes hand in hand with cron jobs is email. Is the new server configured for email notification? Are there any database links to create?
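As a starting point, capture what exists on the source server so it can be recreated on the target. A sketch (the filenames are arbitrary, and the dba_jobs query covers dbms_job submissions; the sqlplus call is left commented out):

```shell
#!/bin/sh
# Sketch: record the oracle account's crontab and database jobs on the
# source server for later recreation on the target. Filenames are arbitrary.

# save the crontab (note: crontab -l fails cleanly if none exists)
crontab -l > source_crontab.txt 2>/dev/null || echo "(no crontab)" > source_crontab.txt

cat > list_jobs.sql <<'EOF'
-- dba_jobs holds dbms_job submissions; use this to recreate them
SELECT job, log_user, interval, what FROM dba_jobs;
EOF

# sqlplus -s "/ as sysdba" @list_jobs.sql   # actual call, commented out
```

A quick diff of the saved crontab against the new server's crontab after the migration catches anything that was missed.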
Do you need logging turned on while the import is taking place? Is it even necessary to log everything being imported? What about triggers, especially the “for each row” kind? Importing millions of rows into a table with that kind of trigger means one or more triggers fire millions of times. If a trigger back on the source database already took care of formatting a name, does it need to fire again during the import?
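Rather than disabling triggers by hand, you can generate the disable statements – and, just as importantly, the matching enable statements – from the data dictionary. A sketch, assuming a hypothetical APP_OWNER schema:

```shell
#!/bin/sh
# Sketch: generate disable/enable scripts for one schema's triggers so
# row-level triggers do not fire millions of times during the import.
# The schema name APP_OWNER is hypothetical.

cat > gen_trigger_ddl.sql <<'EOF'
SET PAGESIZE 0 FEEDBACK OFF
SPOOL disable_triggers.sql
SELECT 'ALTER TRIGGER '||owner||'.'||trigger_name||' DISABLE;'
  FROM dba_triggers WHERE owner = 'APP_OWNER';
SPOOL OFF
SPOOL enable_triggers.sql
SELECT 'ALTER TRIGGER '||owner||'.'||trigger_name||' ENABLE;'
  FROM dba_triggers WHERE owner = 'APP_OWNER';
SPOOL OFF
EOF

# sqlplus -s "/ as sysdba" @gen_trigger_ddl.sql   # actual call, commented out
```

Generating both scripts at the same time means the re-enable step cannot be forgotten – which matters, as the next paragraph explains.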
You can be clever and disable quite a few automatic functions to help speed up the import, but don’t be too clever by half, that is, do not forget to re-enable whatever it is you disabled. At 5:30 in the morning, having worked all day Friday (in addition to coming back at 11 to get ready for the midnight starting gun), sleep deprivation can introduce a significant amount of human error. If you have to go off your game plan, have someone double check your work or steps, especially if the object being manipulated is of key importance to a database or schema.
Post import considerations
Did everything work? Breathe a sigh of relief, but the job is not finished. What are you using for a baseline backup, once everything is up and running after the migration? Are you transitioning from export/cold/hot backups to RMAN? Has RMAN backup and recovery been practiced yet?
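If RMAN is the new baseline, the very first post-migration backup can be scripted and rehearsed beforehand. A minimal sketch (the tag, the OS-authenticated connection, and the log filename are assumptions; the rman call itself is left commented out):

```shell
#!/bin/sh
# Sketch: take a baseline RMAN backup once the migration checks out.
# The tag and connection method are hypothetical.

cat > baseline.rman <<'EOF'
RUN {
  BACKUP DATABASE TAG 'post_migration_baseline';
  BACKUP ARCHIVELOG ALL;
}
EOF

# rman target / cmdfile=baseline.rman log=baseline.log   # commented out
```

Practicing a restore from this baseline is what actually answers the “has recovery been practiced yet?” question.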
Plan A, obviously, is a success from start to finish. However, despite all best intentions and planning, what is Plan B? What if, for some reason you cannot determine, applications fail to work properly after the migration? Thorough testing minimizes this risk, but what if no large-scale testing took place? What does it take to revert to the source database? Do you have the time to try again? A second attempt will not take as long, assuming you trust what took place in the export(s).
Do not assume everyone knows or understands what just took place. For example, do customer support personnel know how to point desktop CRM applications to the new database? Or are they opening trouble tickets a few weeks after the fact to complain how their changes are not being made or are not taking effect? What may be blindingly obvious to you as a DBA may be completely obscure to people who don’t speak “database.”
The tips and steps covered in this article are based on real events, places, and persons. I have personally witnessed the customer service rep complaining about how his changes were not showing up – because he had no idea whatsoever about pointing his desktop CRM application to the new database. Was it the DBA’s or his manager’s responsibility to propagate that information to him? I have also seen key tables have problems with the insertion of transformed data, and a workaround such as “create table as select” from the stage or transformation table get implemented – but alas, the stage table did not have all of the not null constraints the new “real” table did, and there went the Web application down the drain.
The sad truism about a database migration is that if you do not have the time to test beforehand and wind up failing (the reason why is immaterial), it is amazing how time magically appears for testing before the second attempt. The tips mentioned in this article should give you a good perspective on some of the external factors which come into play during a migration.