Leaving the History Behind with Oracle's Fine Grained Access Control
January 24, 2003
Marin Komadina
Oracle has so far not solved the problem of exporting everything but a few large tables from a user schema. This leaves DBAs to find their own workarounds when necessary. This article covers the three most commonly used solutions.
In a case where production table data has been deleted, an export file can be used to quickly repopulate the small tables, leaving the big historical tables to be fixed later with a transportable-tablespace solution or by importing them from a separate export file.
Some suggested solutions involve creating the large tables under different schemas, then importing, exporting, duplicating, moving, and so on. I have found this approach too haphazard for business-oriented companies.
For a clever DBA, there are several possible solutions:
As a test case, let's assume we have created a small Oracle8i database on Solaris.
Database Version:

Oracle8i Enterprise Edition Release 8.1.7.0.0 - Production
With the Partitioning option
JServer Release 8.1.7.0.0 - Production

Schema ARTIST:

TABLE_NAME                  COUNT(*)
----------------------------------
BACKUP_STATUS                    411
CRITICAL_SEGMENTS                  0
INPUT                         10,000
PLAN_TABLE                        13
TRANSFER                      10,000
TS_STORAGE                    25,334
a.) Full export
. about to export ARTIST's tables via Conventional Path ...
. . exporting table          BACKUP_STATUS        411 rows exported
. . exporting table      CRITICAL_SEGMENTS          0 rows exported
. . exporting table                  INPUT      10000 rows exported
. . exporting table             PLAN_TABLE         13 rows exported
. . exporting table               TRANSFER      10000 rows exported
. . exporting table             TS_STORAGE      25334 rows exported
The full export we have just produced is one solution to our problem. You just have to send it, together with the log file, to the developers.
Here is a small tip regarding export: it is possible to make a selective table export with the TABLES parameter.
$ exp parfile=artist.par userid=system/manager

Example of the parameter file:

$ cat artist.par
buffer=8388608
file=artist_full.dmp
log=artist_exp.log
consistent=y
tables=(artist.backup_status,artist.critical_segments,...)
This allows you to list up to a maximum of 99 table names, in the form owner.table_name, inside the parameter file. In our case, we have 996 tables that we want to export (we don't want the 4 large ones). It would be hard work to write all of those names into lists and to produce several export files from several parameter files. For that reason, we move on to solution b.)
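If you do stick with the parameter-file route, at least the table list need not be typed by hand. A minimal sketch (the file names and the build_parfile helper are hypothetical; in real use you would spool the full table list from dba_tables, and with more than 99 surviving names you would still have to split the list across several parameter files):

```shell
# build_parfile: write an export parameter file listing every table
# from $1 (one OWNER.TABLE_NAME per line) except those in $2, into $3.
build_parfile() {
  all=$1; skip=$2; out=$3
  # drop the excluded names (exact whole-line matches only)
  keep=$(grep -v -F -x -f "$skip" "$all")
  {
    echo "buffer=8388608"
    echo "file=artist_small.dmp"
    echo "log=artist_small.log"
    echo "consistent=y"
    # comma-join the surviving names inside tables=( ... )
    echo "tables=($(printf '%s\n' "$keep" | paste -sd, -))"
  } > "$out"
}

# demo with a tiny table list (in real use, spool this from dba_tables)
printf 'ARTIST.BACKUP_STATUS\nARTIST.INPUT\nARTIST.PLAN_TABLE\nARTIST.TRANSFER\n' > all_tables.txt
printf 'ARTIST.INPUT\nARTIST.TRANSFER\n' > exclude.txt
build_parfile all_tables.txt exclude.txt artist_small.par
# artist_small.par now ends with: tables=(ARTIST.BACKUP_STATUS,ARTIST.PLAN_TABLE)
```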
b.) Data dictionary view modification
For this next solution, I'll remind you of the oft-repeated warning from Oracle:
"You should not change datadictionary objects without supervision from Oracle."
That is absolutely correct; but we're still going to do it. Our operation is not that critical, because we are only going to change the definition of one data dictionary view: the one responsible for table selection during export. Be sure to make a backup before you start.
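One way to take that backup is to spool the current view definition out of the dictionary before touching it. A minimal SQL*Plus sketch (the spool file name is arbitrary; note that dba_views.TEXT holds only the body of the view, so you would prepend the CREATE OR REPLACE VIEW line yourself when restoring):

```sql
-- capture the current definition of SYS.EXU81TABU before changing it
SET LONG 20000 PAGESIZE 0 HEADING OFF
SPOOL exu81tabu_orig.sql
SELECT text FROM dba_views
 WHERE owner = 'SYS' AND view_name = 'EXU81TABU';
SPOOL OFF
```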
CREATE OR REPLACE VIEW SYS.EXU81TABU ...
  from exu81tab
 WHERE ownerid = uid
   and NAME NOT IN ('INPUT','TRANSFER');
(Oracle 8i view name: SYS.EXU81TABU)
After modification, let's run an export of test user ARTIST:
. about to export ARTIST's tables via Conventional Path ...
. . exporting table          BACKUP_STATUS        411 rows exported
. . exporting table      CRITICAL_SEGMENTS          0 rows exported
. . exporting table             PLAN_TABLE         13 rows exported
. . exporting table             TS_STORAGE      25334 rows exported
We have succeeded in getting rid of the large tables INPUT and TRANSFER, and the export file is consistent. As a check, you can run an import to test the recovery scenario.
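Besides a test import, a quicker sanity check is to grep the export log for the tables that should be absent. A small sketch (the check_log helper is hypothetical, and the log fragment below is just a captured sample standing in for the real artist_exp.log):

```shell
# fail if any of the excluded tables still shows up in the export log
check_log() {
  log=$1; shift
  for t in "$@"; do
    if grep -q "exporting table[[:space:]]*$t" "$log"; then
      echo "ERROR: $t was exported"; return 1
    fi
  done
  echo "OK: none of the excluded tables were exported"
}

# demo against a captured log fragment
cat > artist_exp.log <<'EOF'
. . exporting table          BACKUP_STATUS        411 rows exported
. . exporting table      CRITICAL_SEGMENTS          0 rows exported
. . exporting table             PLAN_TABLE         13 rows exported
. . exporting table             TS_STORAGE      25334 rows exported
EOF
check_log artist_exp.log INPUT TRANSFER
# prints: OK: none of the excluded tables were exported
```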
All of this works, and yet there is still a better solution.