Data availability is a critical factor in the success of
any business. In a distributed environment, Oracle Streams improves data
availability and accessibility by sharing and replicating information across
multiple sites quickly and reliably. Oracle Streams was introduced in Oracle 9i
and has since matured into robust and powerful software used to perform data
replication, data warehouse transformation and messaging.
This is part one of a two-part series illustrating the
setup of bi-directional replication using Oracle 10g Streams. Part one
provides the procedures for setting up bi-directional schema-level replication and
conflict resolution between two databases. Part two will demonstrate how a new
source database can easily be added to the replicated environment.
Streams Overview
There are three basic processes of Oracle Streams: Capture, Staging
and Consumption (Apply).
The capture process at the source site captures events such
as DML and DDL changes from either the online redo logs or archived log files,
formats the changes into Logical Change Records (LCRs) and enqueues them into a
staging area (queue). The LCRs are then propagated to an apply queue at the
destination site, where the changes are eventually dequeued and applied by the
apply process.
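In a typical 10g configuration, the staging area at each site is an ANYDATA queue created with the DBMS_STREAMS_ADM package. A minimal sketch (the queue table and queue names here are illustrative, not mandated by Streams) might look like:

```sql
-- Run as the Streams administrator. The queue_table and queue_name
-- values are illustrative; any valid names can be used.
begin
  dbms_streams_adm.set_up_queue(
    queue_table => 'strmadmin.streams_queue_table',
    queue_name  => 'strmadmin.streams_queue',
    queue_user  => 'strmadmin');
end;
/
```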
In a bi-directional replication setup, each site captures,
propagates and applies changes to the other site, keeping the databases at both
sites current. Each site acts as both a source database and a destination
database. Because simultaneous updates can be performed on the same records at
both sites, conflicts can arise. In the following sections, we will discuss Streams'
pre-built update conflict resolution handlers and how conflicts can be avoided
or minimized.
Conflict Detection
When an apply process applies a row LCR at the destination
database, conflicts are detected automatically when:

1. An update conflict occurs – the same record is updated simultaneously at two sites, so the current column value at the destination database does not match the old column value in the row LCR from the source database.

2. A delete conflict occurs – the record to be deleted does not exist in the destination database.

3. A uniqueness conflict occurs – a record inserted in the destination database violates a unique or primary key constraint.

4. A foreign key conflict occurs – a record inserted or updated in the destination database violates a foreign key constraint.
Ideally, you should design your system to avoid or minimize conflicts.
There are various ways of achieving that, such as using unique sequence keys at
each replicated site, horizontally subsetting the data so that each site can
only change its own rows, and avoiding physical deletion of records by logically
marking them deleted and purging them at a later time.
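The unique-sequence-key approach can be sketched with two interleaved sequences, one allocated to each site, so that the two sites can never generate the same key value (the schema in step 7 uses this pattern):

```sql
-- ladb allocates keys 1, 6, 11, ...; sfdb allocates 2, 7, 12, ...
-- The different starting offsets with a common increment guarantee
-- that the two sites never generate the same primary key.
create sequence deptno_laseq start with 1 increment by 5;  -- used at ladb
create sequence deptno_sfseq start with 2 increment by 5;  -- used at sfdb
```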
Conflict Resolution
Oracle provides pre-built handlers for update conflicts only. If the
pre-built handlers are insufficient, you can create a custom
conflict handler to resolve other possible conflicts.
The four types of pre-built update conflict handlers are:

1. Discard – the row LCR propagated from the source site is discarded at the destination site when a conflict is detected.

2. Overwrite – the row LCR propagated from the source site overwrites the data at the destination site when a conflict is detected.

3. Minimum – the column value from the source site is compared with the column value at the destination site, and the apply process applies or retains the lower value.

4. Maximum – the column value from the source site is compared with the column value at the destination site, and the apply process applies or retains the higher value.
The choice of update conflict handler is driven
by your business rules and requirements. For example, if you designate one site
as the authoritative site, you may want to discard changes propagated from
other sites when a conflict occurs. In this article, we will use the discard
and overwrite methods to resolve conflicts.
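An update conflict handler is registered per table with the DBMS_APPLY_ADM package. As a hedged sketch for the apps.dept table created later in this article (the resolution column and column list shown are illustrative choices, not requirements):

```sql
-- Run as the Streams administrator at the destination database.
-- Registers the OVERWRITE method for apps.dept: when a conflict is
-- detected, the incoming row LCR overwrites the listed columns.
declare
  cols dbms_utility.name_array;
begin
  cols(1) := 'dname';
  cols(2) := 'site';
  dbms_apply_adm.set_update_conflict_handler(
    object_name       => 'apps.dept',
    method_name       => 'OVERWRITE',
    resolution_column => 'dname',   -- must be one of the listed columns
    column_list       => cols);
end;
/
```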
Designing your system to avoid conflicts, and to resolve those that
do occur, is one of the most critical aspects of a successful implementation
of bi-directional replication. Detailed information on conflict resolution can
be obtained from http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14228/conflict.htm#i1006084.
Setting Up Bi-Directional Replication
An overview of the Oracle Streams environment:
Host Name | Instance Name | Global Name | Database Role
--------- | ------------- | ----------- | -------------
trout3    | ladb          | ladb.world  | Source
trout4    | sfdb          | sfdb.world  | Source
Step 1: Configure archive log mode
Archived redo logs are required by the capture process to
extract changes. Verify both ladb and sfdb are in archive log mode and
configure your log archive destination or flash recovery area.
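To verify the log mode and, if necessary, enable archiving (which requires a clean restart), something along these lines can be run as SYSDBA on each database:

```sql
-- Check the current log mode.
select log_mode from v$database;

-- If it reports NOARCHIVELOG, enable archiving:
shutdown immediate
startup mount
alter database archivelog;
alter database open;
```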
Step 2: Modify initialization parameters
Parameter | ladb | sfdb | Description
--------- | ---- | ---- | -----------
compatible | 10.2.0 | 10.2.0 | Both databases must be at 10.2.0 or higher to use the Streams features described here.
global_names | true | true | Database link names must match the global name of the remote database.
job_queue_processes | 2 | 2 | The maximum number of job queue processes; propagation jobs require at least 2.
db_recovery_file_dest_size | 10G | 10G | Specifies the size of the flash recovery area.
db_recovery_file_dest | /u01/app/oracle/flashdest | /u01/app/oracle/flashdest | Specifies the location of the flash recovery area.
parallel_max_servers | 6 | 6 | The maximum number of parallel execution processes. You may need to increase this, as capture and apply processes use parallel servers.
sga_target | 500M | 500M | Enables Automatic Shared Memory Management. If set, the Streams pool is sized automatically.
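Assuming an spfile is in use, these parameters can be set with ALTER SYSTEM on each instance; note that compatible is not dynamic and takes effect only after a restart:

```sql
alter system set global_names = true scope=both;
alter system set job_queue_processes = 2 scope=both;
alter system set db_recovery_file_dest_size = 10G scope=both;
alter system set db_recovery_file_dest = '/u01/app/oracle/flashdest' scope=both;
alter system set parallel_max_servers = 6 scope=both;
alter system set sga_target = 500M scope=both;

-- Not dynamic; takes effect after the instance is restarted.
alter system set compatible = '10.2.0' scope=spfile;
```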
Step 3: Set up tnsnames.ora
Add the TNS entries on trout3 and trout4.
LADB.WORLD =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = trout3)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ladb)
)
)
SFDB.WORLD =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = trout4)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = sfdb)
)
)
Step 4: Create tablespaces
Create the tablespaces for the application schema and
Streams administrator.
On ladb
create tablespace appsts datafile '/u02/oradata/ladb/appsts01.dbf' size 100M;
create tablespace streamsts datafile '/u02/oradata/ladb/streamsts01.dbf' size 100M;
On sfdb
create tablespace appsts datafile '/u02/oradata/sfdb/appsts01.dbf' size 100M;
create tablespace streamsts datafile '/u02/oradata/sfdb/streamsts01.dbf' size 100M;
Step 5: Create Streams administrator
Create the Streams administrator in both ladb and sfdb.
create user strmadmin identified by strmadmin
  default tablespace streamsts temporary tablespace temp;

grant dba, select_catalog_role to strmadmin;

exec dbms_streams_auth.grant_admin_privilege('strmadmin', true);
Step 6: Create application schema
Create the application schema in both ladb and sfdb.
create user apps identified by apps
  default tablespace appsts temporary tablespace temp;

grant connect, resource to apps;

grant select on v_$database to apps;
Step 7: Create application schema's objects
Create the application schema's objects in ladb. The objects
in sfdb are created later using Oracle's Data Pump utility.
connect apps/apps@ladb.world
create table dept (
deptno number(10) not null,
dname varchar2(20) not null,
site varchar2(10) not null
);
create or replace trigger dept_site_trg
before insert on dept for each row
begin
if :new.site is null then
select name into :new.site from v$database;
end if;
end;
/
alter table dept add constraint dept_pk primary key (deptno);
create sequence deptno_laseq start with 1 increment by 5;
create sequence deptno_sfseq start with 2 increment by 5;
insert into dept values (deptno_laseq.nextval,'FINANCE','LADB');
insert into dept values (deptno_laseq.nextval,'HR','LADB');
insert into dept values (deptno_laseq.nextval,'RESEARCH','LADB');
commit;
create table emp (
empno number(10) not null,
ename varchar2(20) not null,
site varchar2(10) not null,
job varchar2(10),
deptno number(10)
);
create or replace trigger emp_site_trg
before insert on emp for each row
begin
if :new.site is null then
select name into :new.site from v$database;
end if;
end;
/
alter table emp add constraint emp_pk primary key (empno);
alter table emp add constraint emp_fk foreign key (deptno) references dept (deptno);
create sequence empno_laseq start with 1 increment by 5;
create sequence empno_sfseq start with 2 increment by 5;
insert into emp values (empno_laseq.nextval,'MIKE','LADB','CLERK',1);
insert into emp values (empno_laseq.nextval,'JANE','LADB','ANALYST',1);
insert into emp values (empno_laseq.nextval,'ZACH','LADB','ENGINEER',11);
commit;
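As noted in step 7, the sfdb copy of the apps schema is instantiated later with Data Pump. A minimal network-mode sketch (directory object and instantiation-SCN handling omitted; the exact invocation used in this series may differ) might look like:

```shell
# Hypothetical invocation: pulls the apps schema directly from ladb
# into sfdb over the ladb.world database link; run on the sfdb host.
impdp strmadmin/strmadmin@sfdb.world network_link=ladb.world schemas=apps
```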
Step 8: Enable table level supplemental logging
Set up supplemental logging as the apps user on ladb.
Supplemental logging writes additional column information into the redo logs
so that rows can be identified at the destination.
There are two types of supplemental log groups:

1. Unconditional supplemental log group – the before images of the specified columns are always logged whenever any column of the table is updated.

2. Conditional supplemental log group – the before images of the specified columns are logged only when at least one column in the log group is updated.
The following adds columns to the conditional supplemental
log group for the emp and dept tables:
alter table dept add supplemental log group loggrp_dept (deptno, dname, site);

alter table emp add supplemental log group loggrp_emp (empno, ename, site, job, deptno);
SQL> select log_group_name, table_name
  2  from dba_log_groups
  3  where owner='APPS';

LOG_GROUP_NAME                 TABLE_NAME
------------------------------ ------------------------------
LOGGRP_EMP                     EMP
LOGGRP_DEPT                    DEPT
Step 9: Create database link
As the Streams administrator, create a private database link
from ladb to sfdb, and another from sfdb to ladb.
On ladb
create database link sfdb.world connect to strmadmin identified by strmadmin using 'sfdb.world';
On sfdb
create database link ladb.world connect to strmadmin identified by strmadmin using 'ladb.world';
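Each link can be verified by selecting across it as strmadmin; because global_names is true, the query should return the remote database's global name:

```sql
-- On ladb: should return SFDB.WORLD
select * from global_name@sfdb.world;

-- On sfdb: should return LADB.WORLD
select * from global_name@ladb.world;
```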