Database Journal
Posted September 26, 2011

PGA and Memory Management in Oracle

By David Fitzjarrell

Several years back, Database Journal published an article on PGA management parameters in the then-recent 9i release. We're now at 11.2, and a number of changes have been made that make the PGA (and memory management in general) easier for the DBA to manage. Let's look at what parameters are new since 9i and see how they work and what they can and cannot do.

9i introduced pga_aggregate_target, giving the DBA the ability to set a soft limit on PGA memory allocations. It's a soft limit because it's a value to strive for, not a hard maximum: Oracle can, and will, exceed this target when necessary. How do you know the database has gone beyond your setting? Query V$PGASTAT:

SQL> select * from v$pgastat;

NAME                                                                  VALUE UNIT
---------------------------------------------------------------- ---------- ------------
aggregate PGA target parameter 2684354560 bytes
aggregate PGA auto target 1589538816 bytes
global memory bound 268431360 bytes
total PGA inuse 918226944 bytes
total PGA allocated 1894816768 bytes
maximum PGA allocated 3727641600 bytes
total freeable PGA memory 112263168 bytes
process count 201
max processes count 253
PGA memory freed back to OS 1.4060E+12 bytes
total PGA used for auto workareas 0 bytes
maximum PGA used for auto workareas 2237966336 bytes
total PGA used for manual workareas 0 bytes
maximum PGA used for manual workareas 531456 bytes
over allocation count 0
bytes processed 9.4060E+12 bytes
extra bytes read/written 4.8103E+11 bytes
cache hit percentage 95.13 percent
recompute count (total) 562008

19 rows selected.


Notice the pga_aggregate_target value, reported on the first line of output, then look at the maximum PGA allocated since the instance started; the two don't match, and probably never will, because Oracle will not starve processes of memory if resources are available.

But what is that second line reporting, the 'aggregate PGA auto target' value? That's the size allocated for the automatic work areas when workarea_size_policy is set to AUTO, which brings us to another aspect of PGA management: the automatic work areas, introduced alongside pga_aggregate_target and refined in 10g and later releases. When workarea_size_policy is set to AUTO (the two settings are AUTO and MANUAL), the sort area and hash area for each session are managed automatically by Oracle. That doesn't mean sort_area_size and hash_area_size are ignored: those two parameters control the minimum size of each automatic work area. Setting workarea_size_policy to MANUAL restores their historic role as the absolute size of each area. Letting Oracle manage these areas is a plus in my book, as the sort and hash areas are sized according to need, so large sorts and hash operations rarely spill to disk, reducing physical I/O.
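As a quick check, you can confirm the workarea policy and pull just the target-versus-allocated figures; this is a sketch using the statistic names shown in the V$PGASTAT output above:

```sql
-- confirm automatic workarea management is in effect (AUTO or MANUAL)
SHOW PARAMETER workarea_size_policy

-- compare the soft target against what Oracle has actually handed out,
-- and see how often the target has been exceeded since startup
SELECT name, value
  FROM v$pgastat
 WHERE name IN ('aggregate PGA target parameter',
                'maximum PGA allocated',
                'over allocation count');
```

A non-zero 'over allocation count' is the clearest sign the target is set too low for the workload.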

11g introduces yet another pair of parameters, memory_target and memory_max_target, designed to simplify memory management even further. Setting one or both of these parameters eliminates the need to set the other target/max parameters (pga_aggregate_target, sga_target, sga_max_size). Setting one will set the other if you don't explicitly set both. Okay, that may be a bit confusing so let's look at an example:

SQL> show parameter memory

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
hi_shared_memory_address integer 0
memory_max_target big integer 1520M
memory_target big integer 1520M
shared_memory_address integer 0

As shown above, both memory_max_target and memory_target are set to the same value; this is because memory_target was explicitly set in the spfile (I know because I set it for this particular instance). The two can be set to different values if you have plenty of memory to spare, but usually memory_target is the only parameter of the pair that's set. I can hear the next question coming: "How do I know what to set memory_target to for my instance?" That question is answered by the V$MEMORY_TARGET_ADVICE view:

SQL> select * from v$memory_target_advice order by memory_size;

MEMORY_SIZE MEMORY_SIZE_FACTOR ESTD_DB_TIME ESTD_DB_TIME_FACTOR    VERSION
----------- ------------------ ------------ ------------------- ----------
1140 .75 1893 1.0018 0
1520 1 1890 1 0
2280 1.5 1888 .9992 0
2660 1.75 1888 .9992 0
3040 2 1888 .9992 0


The row where MEMORY_SIZE_FACTOR equals 1 shows the instance setting currently in force, in megabytes. The ESTD_DB_TIME column helps tune the memory_target size: the lower the number in this column, the better the workload performance is likely to be. Since the values change so little in this example, it isn't worth the effort to increase the memory_target setting. Had ESTD_DB_TIME been in the tens of thousands where MEMORY_SIZE_FACTOR equals 1, and in the thousands for larger sizes, it would be prudent to raise memory_target to the recommended value in the spfile/pfile and restart the instance (memory_target is not a dynamic parameter).
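Acting on the advisor's recommendation means staging the change in the spfile and bouncing the instance, since memory_target is static. A minimal sketch; the 1900M figure is illustrative, not a value this instance's advisor produced:

```sql
-- memory_target cannot exceed memory_max_target, so raise both if needed
ALTER SYSTEM SET memory_max_target = 1900M SCOPE=SPFILE;
ALTER SYSTEM SET memory_target     = 1900M SCOPE=SPFILE;

-- from SQL*Plus, bounce the instance so the new values take effect
SHUTDOWN IMMEDIATE
STARTUP
```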

Setting memory_target also enables the mechanism first provided in 10g by the sga_target parameter: automatic shared memory management, which allows Oracle to shift memory between the dynamic areas of the SGA and the various buffer pools. This lets the database provide memory resources 'on demand' and, by keeping sort and hash activity in memory, reduces swap usage and the unwanted physical I/O in and out of the swap area that goes with it.
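One way to watch this shuffling in action is the V$MEMORY_DYNAMIC_COMPONENTS view, which in 11g tracks the individual components under automatic memory management; a sketch:

```sql
-- list each managed memory component with its current, minimum
-- and maximum sizes since startup, converted from bytes to MB
SELECT component,
       current_size/1048576 AS current_mb,
       min_size/1048576     AS min_mb,
       max_size/1048576     AS max_mb
  FROM v$memory_dynamic_components
 WHERE current_size > 0
 ORDER BY current_size DESC;
```

Components whose min and max sizes differ widely are the ones Oracle has been actively resizing.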

What if you don't want automatic memory management at that level? Set memory_target to 0, set sga_target and pga_aggregate_target to the desired values, and you're back to 10g-style memory management. Personally I prefer setting the memory_target parameter and monitoring the results through the provided views: v$sgastat, v$pgastat, v$memory_target_advice, v$pga_target_advice and v$sga_target_advice. I use the last two views to see where Oracle sets the related values; usually these increase as memory_target increases, and through dynamic memory management they can grow up to the limit imposed by the current memory_max_target setting. Oracle does its best to keep things running smoothly in the memory area, and I've had no issues or memory-related errors running heavy loads through instances configured this way.
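Reverting to 10g-style management looks something like this sketch; the sizes are placeholders, so pick values from the pga/sga advisor views for a real system:

```sql
-- disable automatic memory management and size the pools explicitly
ALTER SYSTEM SET memory_target = 0 SCOPE=SPFILE;
ALTER SYSTEM SET sga_target = 1536M SCOPE=SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 512M SCOPE=SPFILE;

-- restart the instance for the memory_target change to take effect
```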

Oracle has done, in my opinion, an excellent job of simplifying instance-level memory management for the DBA, creating an architecture that allocates and deallocates memory between the SGA, PGA and the various buffer pools to provide resources when needed. The databases I have are all using memory_target and have workarea_size_policy set to AUTO which releases me from memory management issues and allows me to concentrate on more pressing performance-related tasks such as query tuning.

Now, if Oracle would only implement the "_go_really_fast" parameter ...

David Fitzjarrell has more than 20 years of administration experience with various releases of the Oracle DBMS. He has installed the Oracle software on many platforms, including UNIX, Windows and Linux, and monitored and tuned performance in those environments. He is knowledgeable in the traditional tools for performance tuning – the Oracle Wait Interface, Statspack, event 10046 and 10053 traces, tkprof, explain plan and autotrace – and has used these to great advantage at the U.S. Postal Service, American Airlines/SABRE, ConocoPhillips and SiriusXM Radio, among others, to increase throughput and improve the quality of the production system. He has also set up scripts to regularly monitor available space and set thresholds to notify DBAs of impending space shortages before they affect the production environment. These scripts generate data which can also be used to trend database growth over time, aiding in capacity planning.

He has used RMAN, Streams, RAC and Data Guard in Oracle installations to ensure full recoverability and failover capabilities as well as high availability, and has configured a 'cascading' set of DR databases using the primary DR databases as the source, managing the archivelog transfers manually and monitoring, through scripts, the health of these secondary DR databases. He has also used ASM, ASMM and ASSM to improve performance and manage storage and shared memory.
