The mutating table error (ORA-04091) is a familiar sight in many a DBA's career, usually accompanied by the question 'Why?'. The error is neither difficult to understand nor hazardous to table data. Let's look at why the error is thrown, why it won't create data problems, and how to fix it.
The first thing to realize is that the table isn't actually mutating; the error is thrown because of the read consistency mechanism Oracle employs to guard against 'dirty reads', which are reads of uncommitted changes. For all but the session performing the data modifications, Oracle uses undo blocks to reconstruct the data image as of the starting time of the query, which prevents any uncommitted changes from being read. The session performing the modifications can see its own uncommitted changes, but only after each statement has completed: insert 10 rows into a table, then query the count, and you'll see 10 more rows than when you started, while every other session sees only the committed results. The mutating table issue surfaces when a row-level trigger attempts to read or modify the same table being modified by the triggering insert, update or delete statement. Because the trigger fires while the table is in an incompletely modified state, its query or DML cannot be given a consistent image of the data; and since the insert/update/delete can't complete until the trigger successfully executes, and executing the trigger would pile changes on top of changes in the middle of a transaction, Oracle doesn't allow it and throws the error.
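To see the error in action, here is a minimal sketch of a row-level trigger that queries the table it fires on. The trigger name is hypothetical and the example assumes the standard SCOTT.EMP demo table:

```sql
-- Hypothetical illustration, assuming the standard SCOTT.EMP demo table
create or replace trigger emp_avg_chk
after update of sal on emp
for each row
declare
    v_avg number;
begin
    select avg(sal) into v_avg from emp;  -- reads the table being updated
end;
/
update emp set sal = sal*1.10 where empno = 7369;
-- ORA-04091: table SCOTT.EMP is mutating, trigger/function may not see it
```

The trigger compiles without error; the ORA-04091 is raised only at run time, when the update fires the trigger and the trigger's query tries to read the half-modified table.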
In a promoted video on YouTube the following statement is made:
"... that you are breaking this rule and stop your table data and schema objects from corruption."
Trying to modify data that is in the middle of another modification won't corrupt it, since Oracle won't allow such updates to occur. Depending on the type of trigger involved, Oracle will either try to modify existing data before the requested modifications are processed, or will process the insert/update and then try to modify the resulting data before the transaction ends. In either case, since an active modification is in progress, Oracle won't allow a second modification from the same session to proceed against the incompletely altered data. Were this safety mechanism not in place it would be possible to 'mangle' table data into an irreversible state and thus 'corrupt' it. Schema objects, however, are another matter: their structure is changed through data definition language (DDL) statements, which are both preceded and followed by an implicit commit. DML statements such as insert, update and delete cannot corrupt schema object structure.
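The implicit commit around DDL is easy to demonstrate; this sketch assumes a hypothetical EMP_COPY table populated from EMP:

```sql
insert into emp_copy select * from emp;        -- uncommitted DML
create index emp_copy_idx on emp_copy(empno);  -- DDL: implicit commit before and after
rollback;                                      -- has no effect; the insert is already committed
```

Because the DDL statement commits the pending insert before it runs, no DML is ever left half-done across a schema change.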
Getting around such an error can be done in two ways: by using a compound trigger (available in Oracle 11g and later), or by using the autonomous_transaction pragma. The following example shows how to code a trigger using an autonomous transaction to avoid the ORA-04091 error:
create or replace procedure set_comm(p_empno in number, p_sal in number, p_oldsal in number)
as
    pragma autonomous_transaction;
begin
    if p_sal >= p_oldsal*1.10 then
        update emp set comm = p_sal*.10 where empno = p_empno;
    end if;
    commit;
end;

create or replace trigger after_emp_updt
after update of sal on emp
for each row
begin
    set_comm(:new.empno, :new.sal, :old.sal);
end;
The trigger is simple, and calls a procedure declared with the autonomous_transaction pragma. An autonomous transaction runs in the same session but as an independent transaction, which must commit or roll back its own work before control returns to the caller. Because its DML is not part of the triggering statement's transaction it is not subject to the mutating table restriction; note that it also cannot see the triggering statement's uncommitted changes, which is why the values it needs are passed in as parameters. The procedure can perform its work outside of the original transaction that fired the trigger, getting around the ORA-04091 error entirely.
The compound trigger operates a bit differently: its BEFORE STATEMENT section reads the table once, before the triggering statement changes any rows, so there is no query against the table in the middle of the row-by-row processing. Let's look at an example from a previous article to see how this is done:
create or replace trigger check_raise_on_avg
for update of sal on emp
COMPOUND TRIGGER
  Twelve_Percent constant number := 0.12;

  -- Declare collection types and variables:
  TYPE Department_Salaries_t IS TABLE OF Emp.Sal%TYPE
       INDEX BY VARCHAR2(80);
  Department_Avg_Salaries Department_Salaries_t;
  TYPE Sal_t IS TABLE OF Emp.Sal%TYPE;
  Avg_Salaries Sal_t;
  TYPE Deptno_t IS TABLE OF Emp.Deptno%TYPE;
  Department_IDs Deptno_t;

  BEFORE STATEMENT IS
  BEGIN
    SELECT AVG(e.Sal), NVL(e.Deptno, -1)
      BULK COLLECT INTO Avg_Salaries, Department_IDs
      FROM Emp e
     GROUP BY e.Deptno;
    FOR j IN 1..Department_IDs.COUNT() LOOP
      Department_Avg_Salaries(Department_IDs(j)) := Avg_Salaries(j);
    END LOOP;
  END BEFORE STATEMENT;

  AFTER EACH ROW IS
  BEGIN
    IF :NEW.Sal - :OLD.Sal >
       Twelve_Percent * Department_Avg_Salaries(NVL(:NEW.Deptno, -1))
    THEN
      Raise_Application_Error(-20000, 'Raise too large');
    END IF;
  END AFTER EACH ROW;
END Check_Raise_On_Avg;
SQL> select empno, sal from emp;
14 rows selected.
SQL> update emp set sal=sal*1.10 where empno = 7369;
1 row updated.
SQL> select empno, sal from emp;
14 rows selected.
The update succeeds even though the trigger also selects data from the table being modified, made possible by the way Oracle executes compound triggers: the query runs once in the BEFORE STATEMENT section, before any rows change.
The mutating table error is inconvenient, but it really isn't a danger to your data or your schema objects. It's a read consistency issue that can be overcome by using autonomous transactions or compound triggers. Knowing that should make the DBA's life a little bit easier.
See all articles by David Fitzjarrell