Using LogMiner - Part 2 - Striking Out - Page 2
May 11, 2005
The $64,000 Question
We know we can recover from bad or incorrect DML statements, but can you recover from an errant DDL statement? A dropped table? According to the documentation, no problem getting that back - sort of. Here is the big question: can you recover from a drop user username cascade? Consider the following scenario. You are working on a development database and need to transfer data from a production environment into a development environment so that your development database contains more realistic data. You need to take the current development database and make a new one-off version of it. Because of the way the application works, you cannot perform a straightforward export/import; the application requires that you run a wizard-like utility on the new version of the development database prior to the import. Further, you do not have file system access to the production database's data files, so you are stuck with the export dump file.
You start by exporting the current development database's data so you can import it into the new one-off version of it. You perform the import, but you did not know about the wizard requirement, so you drop the schema owner using the cascade clause so you can start over with a clean database. A scheduled backup has since overwritten your one copy of the export dump file. Can you recover what was just dropped in the new database?
No one cares about the old development database except for a user who put his training class data into it. Although everyone is of the opinion "why did he do that in the first place, too bad for him," the fact remains that he (and you) need to recover the lost/dropped schema. It wasn't your problem to start with, but it is now.
What are your options? Flashback is of no use because it only deals with DML statements, not DDL ones. There is no other backup because the intent was that the original development database be a throwaway database. You are not running in archive log mode for the same reason. You do, however, happen to have three 100MB redo logs that have not cycled or overwritten one another. Can LogMiner bail you out of the dropped user situation?
The first thing you want to do is put a screaming halt on anything and anyone else using the database. Shut it down, and shut it down now. Take a cold backup and put it somewhere safe. Moreover, this is a case where you do want to include the redo logs as part of the copy. Taking the cold backup is prudent: with it in hand, you cannot end up any worse off than you already are.
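A minimal sketch of that cold backup, assuming a Windows host and the C:\ora9i\oradata\db00 file layout used later in this article (the destination path is illustrative):

```sql
-- Stop the instance cleanly so all files are consistent on disk
SQL> connect / as sysdba
SQL> shutdown immediate

-- From the OS, copy EVERYTHING - data files, control files, and
-- the online redo logs - to a safe location before touching anything
SQL> host xcopy C:\ora9i\oradata\db00\*.* D:\coldbackup\db00\ /E /I

-- Bring the database back up to begin the recovery attempt
SQL> startup
```

The key point is that the online redo logs go into the copy too; they are the only record of the dropped schema that you have left.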
What is in the redo logs when Scott drops a table? Using a simple drop table emp by Scott should give you an appreciation of the obstacles that lie ahead of you.
Here is a quick run-through of Scott dropping his emp table and of you (knowing a fairly exact time, so as to narrow down the amount of data returned) using a prepared script to fire up LogMiner and see what you can retrieve.
Note: Part of the table was cut off for display purposes
SQL> select * from emp;

     EMPNO ENAME      JOB
---------- ---------- ---------
      7369 SMITH      CLERK
      7499 ALLEN      SALESMAN
      7521 WARD       SALESMAN
      7566 JONES      MANAGER
      7654 MARTIN     SALESMAN
      7698 BLAKE      MANAGER
      7782 CLARK      MANAGER
      7788 SCOTT      ANALYST
      7839 KING       PRESIDENT
      7844 TURNER     SALESMAN
      7876 ADAMS      CLERK
      7900 JAMES      CLERK
      7902 FORD       ANALYST
      7934 MILLER     CLERK

14 rows selected.

SQL> drop table emp;

Table dropped.

SQL> select * from emp;
select * from emp
              *
ERROR at line 1:
ORA-00942: table or view does not exist
Scott alerts you to the dropped table, and he can narrow down the time to a 15-second window.
execute dbms_logmnr_d.build ('dictionary.ora', -
  'C:\ora9i\oradata\db00', -
  options => dbms_logmnr_d.store_in_flat_file);

execute dbms_logmnr_d.set_tablespace('logmnr_ts');

alter database add supplemental log data (primary key, unique index) columns;

exec dbms_logmnr.add_logfile( -
  logfilename => 'C:\ORA9I\ORADATA\DB00\REDO01.LOG', -
  options => dbms_logmnr.new);

exec dbms_logmnr.add_logfile( -
  logfilename => 'C:\ORA9I\ORADATA\DB00\REDO02.LOG', -
  options => dbms_logmnr.addfile);

exec dbms_logmnr.add_logfile( -
  logfilename => 'C:\ORA9I\ORADATA\DB00\REDO03.LOG', -
  options => dbms_logmnr.addfile);

exec dbms_logmnr.start_logmnr( -
  dictfilename => 'C:\ora9i\oradata\db00\dictionary.ora', -
  starttime => to_date('04-May-2005 22:50:00', 'DD-MON-YYYY HH24:MI:SS'), -
  endtime   => to_date('04-May-2005 22:50:15', 'DD-MON-YYYY HH24:MI:SS'));
With all of the above successfully run, you can query v$logmnr_contents.
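A starter query might look something like the following (the column list here is a typical choice, not necessarily the exact one used in the original session):

```sql
-- Pull the redo and undo SQL recorded for Scott inside the mined window
select timestamp, username, operation, sql_redo, sql_undo
  from v$logmnr_contents
 where username = 'SCOTT';
```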
Seeing only "drop table emp;" has got to make your jaw drop and your heart race a little bit - especially if you were expecting to see all of the SQL_UNDO statements lined up and ready to cut and paste, and don't forget there may be hundreds of thousands of rows of data involved. If you recall the fifth bullet in the potential uses list, truth in advertising is in effect here: you can plainly see that the drop table DDL statement has been recorded.
Don't give up just yet. Surely, there must be something in the redo logs that can be used. Alter the query a bit and you get the following:
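The article does not reproduce the altered query, but one way to surface the internal work behind the drop is to stop filtering on the username and instead look at everything that touched the EMP segment, including operations against the SYS-owned dictionary tables. Something along these lines (illustrative, not the author's exact query):

```sql
-- Widen the net: capture dictionary maintenance performed on behalf
-- of the drop, not just the visible DDL statement itself
select operation, seg_owner, seg_name, sql_redo, sql_undo
  from v$logmnr_contents
 where seg_name = 'EMP'
    or sql_redo like '%EMP%';
```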
The query returns almost 30 rows, and the first eight reflect the columns of the table (you can see EMPNO, ENAME and JOB in the screenshot). So far, this does not look too bad or incomprehensible. What is not obvious is the work that went into making the output "pretty." Start from the top and insert the undo statements one at a time.
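Those column-related rows correspond to deletes against SYS-owned dictionary tables, and their SQL_UNDO counterparts are inserts that put the dictionary entries back. A heavily abbreviated, illustrative sketch of what one such undo statement can look like (the real generated statement lists every column of the dictionary table, and the object number and values shown here are invented for illustration):

```sql
-- Illustrative only: LogMiner-generated undo for dictionary rows
-- performs DML directly against SYS-owned tables such as COL$
insert into "SYS"."COL$"("OBJ#","COL#","NAME","TYPE#","LENGTH")
values (31533, 1, 'EMPNO', 2, 22);
```

Statements like this are exactly why the exercise gets risky: you are hand-feeding DML into the data dictionary, which is not a supported operation.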
You may find that several of the more obscure insert statements result in 0 rows updated. I am not a database forensic expert, but a "0 rows updated" would give me pause considering this is supposed to be an undo type of statement.
After running through the set of LogMiner-generated undo statements, let's connect as Scott and see what the results are.
There is nothing quite like an ORA-00600 error to cap off the evening while you are sweating bullets trying to mine around 500,000 rows worth of data across 80 tables. I believe I was precise in editing the generated undo statements as there were no syntax errors - the results of each statement were either 1 or 0 rows updated or inserted.
With the Scott case, we can always drop the user with cascade and start over - or can we?
With ORA-00600 errors, you can use MetaLink's ORA-00600 lookup utility. After performing DML on SYS-owned tables via the undo statements, maybe you have an easy fix, maybe not.
Some Conclusions about LogMiner
It is impractical to try to recover any significant amount of data.
Simple DML statements - the one row type of mistake - are easily undone.
Multiple rows affected by the same error are not too hard to correct.
Recovering from a simple DDL statement error virtually guarantees a "game over" end state for the novice, average, and above-average DBA. Unless you have significant experience with DML on SYS-level objects, it is not worth the risk to your database (assuming there are usable schemas remaining within). Although the undo information is present in some fashion, using it is not explicitly supported. The documentation could point that out in a better manner.
The scenario described earlier reminds me of a line from the movie "The Recruit" when Al Pacino is dispensing advice to the trainees ("Rule number one: don't get caught."). If you are working on an unfamiliar database, before you start doing anything else, ensure you have a good backup. To you, the database may be a piece of junk, but to someone else, it represents days or weeks worth of work. Although you may have the sympathy of other people (developers and managers who share the opinion that development databases were never meant to store anything real), the bottom line is that there is valuable data that must be recovered. Even though the errant user bears some responsibility for the situation in the first place, you are the DBA and you are the one who looks foolish for being complacent and being caught without a backup. You know you know better than to get caught like that, so don't let it happen.