Oracle’s Automatic Big Table Caching

Oracle offers, in releases 12.1.0.2 and later, an enhancement that may improve the performance of queries against 'large' tables: Automatic Big Table Caching. The feature automatically caches such tables in a reserved portion of the buffer cache set aside specifically for this purpose. The big table cache is bypassed by direct path reads; in the example that follows, serial direct reads have been turned off for the session to ensure that the table will be loaded into the configured cache. Let's look at how the feature is configured and what it can do for you when it's active.

Unlike the In-Memory database option, Automatic Big Table Caching is a no-additional-cost feature; no additional licensing is required to use it. It may require a change to memory settings, increasing the SGA/memory targets, and could even require that additional memory be installed (if the database is heavily used and memory can't be reallocated to the big table cache from the existing SGA). That being said, configuring Automatic Big Table Caching is fairly simple, as only one parameter, db_big_table_cache_percent_target, needs to be set. Depending upon the percentage configured, this could create a situation where the current SGA or memory settings need to be increased; that will depend upon the size of the big table or tables Oracle will be caching. [If the table is too large for the cache it may be only partially cached, with the remainder read from disk; the usage query will reflect that.] Let's look at a possible configuration and see how it performs.
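Setting the target is a single ALTER SYSTEM command, and the parameter is dynamic so no instance restart should be needed. A minimal sketch, assuming a target of 80 percent to match the example below (note that on RAC only parallel queries use the cache, and parallel_degree_policy must be AUTO or ADAPTIVE):


-- Reserve up to 80 percent of the buffer cache for big table scans;
-- setting the target back to 0 disables the feature
alter system set db_big_table_cache_percent_target = 80 scope = both;

-- confirm the new value
show parameter db_big_table_cache_percent_target
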

In this example the cache percent target is set to 80 (a rather large number, true, but this is a small, personal database and the table will be quite large in comparison to the available SGA). Let's create and populate a table to run an example with:


BING @ quanghoo > 
BING @ quanghoo > create table bigly_bigly_table(
  2  smorgun	     number,
  3  fleester	     varchar2(800),
  4  yupqast	     date);

Table created.

BING @ quanghoo > 
BING @ quanghoo > begin
  2  	     for i in 1..1000000 loop
  3  		     insert into bigly_bigly_table
  4  		     values(i, 'Altoona'||lpad(i, 773,'0'), sysdate+mod(i,991));
  5  	     end loop;
  6  
  7  	     commit;
  8  end;
  9  /

PL/SQL procedure successfully completed.

BING @ quanghoo > 
BING @ quanghoo > exec dbms_stats.gather_table_stats(user, 'BIGLY_BIGLY_TABLE', cascade=>true)

PL/SQL procedure successfully completed.

BING @ quanghoo > 

Let’s check on what the database considers a ‘small’ table:


BING @ quanghoo > 
BING @ quanghoo > connect sys/########### as sysdba
Connected.
SYS @ quanghoo > 
SYS @ quanghoo > SELECT a.ksppinm  "Parameter",  c.ksppstvl "Value"
  2  	   FROM   x$ksppi a, x$ksppcv b, x$ksppsv c, v$parameter p
  3  	   WHERE  a.indx = b.indx AND a.indx = c.indx
  4  	    AND   p.name(+) = a.ksppinm
  5  	    AND   UPPER(a.ksppinm) IN ( UPPER('_db_block_buffers'),
  6  					UPPER('_small_table_threshold'));

Parameter                                                                                                                                             
--------------------------------------------------------------------------------                                                                      
Value                                                                                                                                                 
--------------------------------------------------------------------------------
_db_block_buffers                                                                                                                                     
208502                                                                                                                                                
                                                                                                                                                      
_small_table_threshold                                                                                                                                
4170                                                                                                                                                  
                                                                                                                                                      

SYS @ quanghoo > 

The value is in blocks, so any table larger than 4170 blocks will be considered a big table; this is consistent with the commonly cited default for _small_table_threshold of roughly 2 percent of the buffer cache (208502 * 0.02 ≈ 4170). Let's now check on the number of blocks our example table occupies:


SYS @ quanghoo > connect bing/##########
Connected.
BING @ quanghoo > set echo on linesize 150 pagesize 100
BING @ quanghoo > column table_name format a35
BING @ quanghoo > alter session set parallel_degree_policy=auto;

Session altered.

BING @ quanghoo > alter session set "_serial_direct_read"=never;

Session altered.

BING @ quanghoo > select blocks from dba_tables where table_name = 'BIGLY_BIGLY_TABLE';

    BLOCKS                                                                                                                                            
----------                                                                                                                                            
    112097                                                                                                                                            

BING @ quanghoo > 

We verify the percent target that was set earlier:


BING @ quanghoo > show parameter DB_BIG_TABLE_CACHE_PERCENT_TARGET

NAME                                 TYPE        VALUE                                                                                                
------------------------------------ ----------- ------------------------------                                                                       
db_big_table_cache_percent_target    string      80                                                                                                   
BING @ quanghoo > 

Now we check what the cache reports as in use; since this example has been run more than once, the object count is greater than 1:


BING @ quanghoo > select BT_CACHE_TARGET, OBJECT_COUNT, MEMORY_BUF_ALLOC, MIN_CACHED_TEMP from v$bt_scan_cache;

BT_CACHE_TARGET OBJECT_COUNT MEMORY_BUF_ALLOC MIN_CACHED_TEMP                                                                                         
--------------- ------------ ---------------- ---------------                                                                                         
             80            2           170037            1000                                                                                         

BING @ quanghoo > 
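
v$bt_scan_cache reports on the cache as a whole; Oracle also exposes a per-object view, v$bt_scan_obj_temps, which records each tracked object's size, scan 'temperature' and caching policy. A sketch of such a check (the output will, of course, differ from database to database):


select dataobj#, size_in_blks, temperature, policy, cached_in_mem
from v$bt_scan_obj_temps
order by temperature desc;
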

Let’s run a count against the table (there are no indexes on the table) and exercise that cache a bit:


BING @ quanghoo > select count(*) from BIGLY_BIGLY_TABLE;

  COUNT(*)                                                                                                                                            
----------                                                                                                                                            
   1000000                                                                                                                                            

BING @ quanghoo > 
BING @ quanghoo > select BT_CACHE_TARGET, OBJECT_COUNT, MEMORY_BUF_ALLOC, MIN_CACHED_TEMP from v$bt_scan_cache;

BT_CACHE_TARGET OBJECT_COUNT MEMORY_BUF_ALLOC MIN_CACHED_TEMP                                                                                         
--------------- ------------ ---------------- ---------------                                                                                         
             80            2           170037            1000                                                                                         

BING @ quanghoo > 

That the object count did not increase indicates that the object is already cached and that Oracle is using the cache to provide the results we requested. Repeated queries against the table (using full table scans and NOT using direct reads) will provide the same results. Checking the query statistics, we see essentially nothing but consistent gets (a single physical read aside), which is what is to be expected when the big table cache is serving cached objects:


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
     111360  consistent gets
          1  physical reads
          0  redo size
        542  bytes sent via SQL*Net to client
        552  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
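
Autotrace alone doesn't show whether those consistent gets were satisfied from the big table cache or the 'regular' buffer cache. Oracle tracks big table cache activity under statistics whose names begin with 'data warehousing'; a hedged sketch of such a check (the exact statistic names may vary by release):


select name, value
from v$sysstat
where name like 'data warehousing%'
order by name;
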

Running the same query again produces the same results (since the statistics didn't change, those numbers will not be repeated):


BING @ quanghoo > 
BING @ quanghoo > select count(*) from BIGLY_BIGLY_TABLE;

  COUNT(*)                                                                                                                                            
----------                                                                                                                                            
   1000000                                                                                                                                            

BING @ quanghoo > 
BING @ quanghoo > select BT_CACHE_TARGET, OBJECT_COUNT, MEMORY_BUF_ALLOC, MIN_CACHED_TEMP from v$bt_scan_cache;

BT_CACHE_TARGET OBJECT_COUNT MEMORY_BUF_ALLOC MIN_CACHED_TEMP                                                                                         
--------------- ------------ ---------------- ---------------                                                                                         
             80            2           170037            1000                                                                                         

BING @ quanghoo > 

Should another 'big' table be accessed (by conventional reads), it would also go into the cache, provided there is enough space. Again, this is done automatically by Oracle once the big table cache has been configured.
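
To report the cached objects by name rather than by data object number, v$bt_scan_obj_temps can be joined to dba_objects; a sketch, assuming DATAOBJ# joins to DATA_OBJECT_ID:


select o.owner, o.object_name, t.size_in_blks, t.temperature, t.policy
from v$bt_scan_obj_temps t
join dba_objects o
on o.data_object_id = t.dataobj#
order by t.temperature desc;
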

Automatic Big Table Caching can provide a performance improvement when large tables are the norm rather than the exception. That it's also a feature that doesn't require additional licensing makes it all the more desirable to configure and use.

