Leaving the History Behind with Oracle’s Fine Grained Access Control


Marin Komadina

Oracle has so far not solved the problem of exporting a user schema while
leaving out a few large tables. This leaves DBAs to find their own workarounds
when necessary. This article covers the three most commonly used solutions:

  • Full export
  • Data dictionary view modification
  • Fine Grained Access Control (FGAC)

Scenario A:

The DBA needs to export the schema tables without the static history data. The
user schema has approximately 1000 tables; four of them are static and contain
historic, partitioned data. These four tables need to be left out of the export
file.

Scenario B:

A recovery strategy based on a daily export has to be abandoned because it
takes too long; exporting the large history tables consumes most of the export
time.

In a case where the production table data has been deleted, an export file can
be used to quickly repopulate the small tables, leaving the big historical
tables to be fixed later with a transportable tablespaces solution or by
importing them from a separate export file.

Some suggested solutions involve creating the large tables in a different
schema, importing, exporting, duplicating, moving and so on. I have found this
approach too haphazard for business-oriented companies.

For a clever DBA,
there are several possible solutions:

  • do a full schema export and send everything to the developers, saying that is the only way possible
  • play with fire and modify the database catalog view
  • do a professional job and show others that you know what Fine Grained Access Control is

Test Case

As a test case, let’s
assume we have created a small Oracle 8.1.7.4 database on Solaris.

  • The user schema for this test is ARTIST.
  • ARTIST has a few test tables.
  • ARTIST has DBA rights, just to keep the test simple (a minimal setup is sketched after this list).
  • The same logic applies to any other user you choose to export.
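
A minimal sketch of how such a test user might be created; the password and tablespace names are assumptions, not taken from the original setup:

-- hypothetical test user setup; adjust password and tablespaces to your environment
CREATE USER artist IDENTIFIED BY artist
  DEFAULT TABLESPACE users
  TEMPORARY TABLESPACE temp;
GRANT dba TO artist;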

Database Version:
Oracle8i Enterprise Edition Release 8.1.7.4.0 – Production
With the Partitioning option
JServer Release 8.1.7.4.0 – Production

Schema ARTIST:

TABLE_NAME            COUNT(*)
------------------    --------
BACKUP_STATUS              411
CRITICAL_SEGMENTS            0
INPUT                   10,000
PLAN_TABLE                  13
TRANSFER                10,000
TS_STORAGE              25,334
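
A quick sketch of one way such a listing can be produced, using 8i native dynamic SQL to count the rows of every table in the current schema:

SET SERVEROUTPUT ON
DECLARE
  l_cnt NUMBER;
BEGIN
  -- loop over all tables owned by the connected user and count their rows
  FOR t IN (SELECT table_name FROM user_tables ORDER BY table_name) LOOP
    EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM "' || t.table_name || '"' INTO l_cnt;
    DBMS_OUTPUT.PUT_LINE(RPAD(t.table_name, 30) || TO_CHAR(l_cnt));
  END LOOP;
END;
/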

Solutions

a.) Full export

. about to export ARTIST’s tables via Conventional Path …
. . exporting table BACKUP_STATUS 411 rows exported
. . exporting table CRITICAL_SEGMENTS 0 rows exported
. . exporting table INPUT 10000 rows exported
. . exporting table PLAN_TABLE 13 rows exported
. . exporting table TRANSFER 10000 rows exported
. . exporting table TS_STORAGE 25334 rows exported

The full export that we have just produced is one solution to our problem: just
send it, together with the log file, to the developers.

Here is a small tip regarding export: it is possible to make a selective table
export with the TABLES parameter.

$ exp parfile=artist.par userid=system/manager

Example of parameter file:

$ cat artist.par
buffer=8388608
file=artist_full.dmp
log=artist_exp.log
consistent=y
tables=(artist.backup_status,artist.critical_segments,…)

This allows you to list up to a maximum of 99 table names for export, in the
form owner.table_name, inside the parameter file. In our case, we have 996
tables that we want to export (we don’t want the 4 large ones). It would be
hard work to write all of those names by hand; a data dictionary query can
generate the list, as sketched below, but we would still end up juggling
several export files and several parameter files. For that reason, we are
going to solution b.)

b.) Data dictionary view modification

For this next solution, I’ll remind you of the oft-repeated warning from
Oracle:

“You should not change data dictionary objects without supervision from
Oracle.”

That is absolutely correct; but we’re still going to do it. Our operation is
not that critical, because we are only going to change the definition of one
data dictionary view: the one responsible for table selection during export.
Be sure to make a backup of the view definition before you start (a sketch
follows).
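
A minimal sketch of saving the current definition before touching it; run it as SYS, and note that the TEXT column is a LONG, so SET LONG has to be raised high enough to capture the whole definition:

-- capture the existing view text so it can be restored later
SET LONG 20000
SELECT text
  FROM dba_views
 WHERE owner = 'SYS'
   AND view_name = 'EXU81TABU';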


-- keep the original ownerid filter and exclude the two large tables from export
CREATE OR REPLACE VIEW sys.exu81tabu AS
SELECT *
  FROM sys.exu81tab
 WHERE ownerid = uid
   AND name NOT IN ('INPUT', 'TRANSFER');

Version 8: view name – SYS.EXU81TABU
Version 7: view name – SYS.EXU7TAB

After modification, let’s
run an export of test user ARTIST:


. about to export ARTIST’s tables via Conventional Path …
. . exporting table BACKUP_STATUS 411 rows exported
. . exporting table CRITICAL_SEGMENTS 0 rows exported
. . exporting table PLAN_TABLE 13 rows exported
. . exporting table TS_STORAGE 25334 rows exported

We have succeeded in getting rid of the large tables INPUT and TRANSFER. The
export file is consistent. As a check, you can run an import against the file
to test your recovery scenario, as sketched below.
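
A minimal sketch of such a check, listing the contents of the dump file without actually loading any data (the log file name is just an example):

$ imp userid=system/manager file=artist_full.dmp fromuser=artist show=y log=artist_imp_check.log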

All of this works, and yet there is still a better solution.
