Database Journal
Featured Database Articles

Oracle

Posted November 4, 2011

Using Autoextend to Increase Oracle Database File Sizes

By David Fitzjarrell

Autoextend can be very helpful to busy DBAs by allowing Oracle to automatically increase file sizes to a pre-determined limit when necessary, which can make the workday a bit easier by minimizing the effort expended to monitor disk space. In a heavily used system where inserts and updates eclipse the deletes, autoextend gives the DBA some "breathing room" so he/she can attend to more pressing issues. For those unfamiliar with using autoextend, we'll show how to enable it and how to manage the available space.

Setting a file in a tablespace to autoextend is a fairly easy task as long as the user performing the actions has DBA or SYSDBA privileges:

SQL> alter database datafile '/whackenhut/smackdoodle/endoplasm/flurst01.dbf' autoextend on maxsize 2000M;

Database altered.

SQL>

Not every datafile or tempfile in a tablespace needs to be set to autoextend – you can select one or more datafiles/tempfiles and leave the others unaltered. The size limit, if left unset in the 'alter database' command, defaults to UNLIMITED, which for a datafile with an 8K block size works out to 32G – the same value you get by explicitly setting MAXSIZE to UNLIMITED.

You have autoextend on for one or more datafiles/tempfiles; how do you monitor the size? The DBA_DATA_FILES/DBA_TEMP_FILES views provide almost all of the necessary information (what's missing is the db_block_size, and that's fairly easy to find):

SQL> select file_name, bytes, maxbytes,
2 increment_by*(bytes/blocks) "INCREMENT",
3 maxbytes-bytes remaining,
4 (maxbytes-bytes)/(increment_by*(bytes/blocks)) EXTENSIONS
5 from dba_data_files
6 where autoextensible = 'YES'
7 /

FILE_NAME BYTES MAXBYTES INCREMENT REMAINING EXTENSIONS
-------------------------- -------- --------- --------- --------- ----------
/d909/data/users01.dbf 52428800 314572800 10485760 262144000 25

SQL>
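The INCREMENT and EXTENSIONS columns are just arithmetic on the view's columns (INCREMENT_BY is expressed in blocks, so bytes/blocks recovers the db_block_size). As a sanity check, the figures reported above for users01.dbf can be reproduced in shell; the 8192-byte block size and the 1280-block INCREMENT_BY are assumptions consistent with the 10485760-byte INCREMENT shown:

```shell
# Reproduce the derived columns for users01.dbf from the report above.
# bytes and maxbytes come straight from DBA_DATA_FILES; the block size
# (8192) and INCREMENT_BY (1280 blocks) are assumed values that match
# the INCREMENT figure in the output.
bytes=52428800
maxbytes=314572800
increment_by=1280
blocks=$((bytes / 8192))                         # file size in db blocks
increment=$((increment_by * (bytes / blocks)))   # bytes added per extension
remaining=$((maxbytes - bytes))                  # room left before MAXSIZE
extensions=$((remaining / increment))            # extensions still available
echo "increment=$increment remaining=$remaining extensions=$extensions"
```

The echoed values match the query output: an increment of 10485760 bytes, 262144000 bytes remaining, and 25 extensions to go.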
SQL> select file_name, bytes, maxbytes,
2 increment_by*(bytes/blocks) "INCREMENT",
3 maxbytes-bytes remaining,
4 (maxbytes-bytes)/(increment_by*(bytes/blocks)) EXTENSIONS
5 from dba_data_files
6 where autoextensible = 'YES'
7 /

FILE_NAME BYTES MAXBYTES INCREMENT REMAINING EXTENSIONS
------------------------------------------------------ ------------ ---------- --------- ---------- ----------
+DATA/pspqa09/datafile/system.274.764412479 734003200 3.4360E+10 10485760 3.3626E+10 3206
+DATA/pspqa09/datafile/sysaux.275.764412499 629145600 3.4360E+10 10485760 3.3731E+10 3216
+DATA/pspqa09/datafile/undotbs1.276.764412513 214958080 3.4360E+10 5242880 3.4145E+10 6512
+DATA/pspqa09/datafile/psp_data01.281.764412629 3221225472 3.4360E+10 8192 3.1138E+10 3801086
+DATA/pspqa09/datafile/psp_idx01.283.764412761 536870912 3.4360E+10 8192 3.3823E+10 4128766
+DATA/pspqa09/datafile/users.284.764412775 5242880 3.4360E+10 1310720 3.4354E+10 26210

6 rows selected.

SQL>

Notice that the current size, the maximum size, the incremental value, the space remaining and the number of extensions available are all reported, to give you an idea of how much room is left to allocate to your autoextend datafile. You could execute this query on a daily basis to see how quickly or slowly your datafiles are growing, then plan for either modifying the maxsize or adding another datafile. All of this presumes you have sufficient disk space for the remaining extensions to occur; monitoring autoextend tablespaces also means monitoring the disk space at the operating system level or from your ASM instance to ensure that you don't "overextend" yourself.

The queries shown above can be 'automated' with cron to report, on a set schedule, how disk space is being used. The report can even be emailed to you:

#!/bin/ksh
#
#
# Variables
#
USER="blorpo"
PASS="bleebo"
DB="mydb"

SQLPLS=`which sqlplus`
MAILLIST="bob@ed.com"
FILE="report/datafile_rpt_`date "+%m%d%Y"`.txt"

if [ ! -d ./report ]
then
mkdir report
fi

$SQLPLS /nolog <<EOF
connect $USER/$PASS@$DB
@report_qry $FILE
EOF

if [ -s $FILE ]
then
mailx -s "Datafile Space Report" $MAILLIST < $FILE
fi
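To run the script from cron, an entry along these lines would do. The path and schedule here are hypothetical, and since cron supplies only a minimal environment, the oracle user's profile is sourced first so the job picks up ORACLE_HOME, PATH and the like:

```
# Hypothetical crontab entry: run the space report every day at 06:00.
# Sourcing .profile first gives the job ORACLE_HOME, PATH, etc.
0 6 * * * . $HOME/.profile; /home/oracle/scripts/datafile_rpt.ksh >/dev/null 2>&1
```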

Simply put the queries in a script named report_qry.sql, spool the output to a file (using the &1 positional parameter), and you can generate a dated report on a schedule, having it emailed right to your desk. The report_qry.sql script would look like this:

set pagesize 0 feedback off linesize 4000 trimspool on termout off
set echo off verify off

spool &1

select file_name, bytes, maxbytes,
increment_by*(bytes/blocks) "INCREMENT",
maxbytes-bytes remaining,
(maxbytes-bytes)/(increment_by*(bytes/blocks)) EXTENSIONS
from dba_data_files
where autoextensible = 'YES'
/

select file_name, bytes, maxbytes,
increment_by*(bytes/blocks) "INCREMENT",
maxbytes-bytes remaining,
(maxbytes-bytes)/(increment_by*(bytes/blocks)) EXTENSIONS
from dba_temp_files
where autoextensible = 'YES'
/

spool off

Changing the report is simply changing the query or queries in report_qry.sql.

Speaking of ASM, how would you monitor the remaining disk space? It's a straightforward query against the V$ASM_DISKGROUP view:

SQL> select name, total_mb, free_mb, cold_used_mb, hot_used_mb, usable_file_mb
2 from v$asm_diskgroup
3 where name is not null
4 /

NAME TOTAL_MB FREE_MB COLD_USED_MB HOT_USED_MB USABLE_FILE_MB
------------------------------ -------- ------- ------------ ----------- --------------
DATA 102400 87964 14436 0 87964

SQL>

The above query can also be 'automated' using the example above.
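As a quick consistency check on the output above, FREE_MB plus the used-space columns should account for TOTAL_MB; in shell, using the figures just reported:

```shell
# Figures from the V$ASM_DISKGROUP output above.
total_mb=102400
free_mb=87964
cold_used_mb=14436
hot_used_mb=0
accounted=$((free_mb + cold_used_mb + hot_used_mb))
echo "accounted for $accounted of $total_mb MB"
```

Note that USABLE_FILE_MB equals FREE_MB here because this diskgroup carries no mirroring overhead; with normal or high redundancy USABLE_FILE_MB would be lower than FREE_MB.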

What's the difference between HOT_USED_MB and COLD_USED_MB? It has to do with the disk region where the data resides. ASM allows the DBA to implement Intelligent Data Placement: disk regions can be specified as HOT, and ASM will leverage the disk geometry to place frequently accessed data in the fastest sectors to improve performance. By default a diskgroup is considered COLD; adding a HOT template to the diskgroup prepares it for disk region reassignment:

SQL> ALTER DISKGROUP data ADD TEMPLATE datafile_hot
2 ATTRIBUTE (
3 HOT
4 MIRRORHOT);

Diskgroup altered.

SQL>

This will work if the COMPATIBLE.RDBMS attribute is set to 11.2.0.0.0 or higher; if you happen to have both 10g and 11g databases using your 11.2 ASM instance, executing the above command will display:

SQL> ALTER DISKGROUP data ADD TEMPLATE datafile_hot
2 ATTRIBUTE (
3 HOT
4 MIRRORHOT);
ALTER DISKGROUP data ADD TEMPLATE datafile_hot
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15283: ASM operation requires compatible.rdbms of 11.2.0.0.0 or higher

SQL>

For illustrative purposes we'll presume we have the proper COMPATIBLE.RDBMS setting; the diskgroup is now prepared for this:

SQL> ALTER DISKGROUP data MODIFY FILE '+data/orcl/datafile/users.259.679156903'
2 ATTRIBUTE (
3 HOT
4 MIRRORHOT);

Diskgroup altered.

SQL>

ASM will now manage your data (presuming you're not using a storage array using striped volumes) and locate the 'hot' data in the HOT region. [Storage arrays with LUNs composed of stripes hide the disk geometry from ASM so it's harder to determine where the HOT zone should be.]

Can you set a datafile or tempfile, already configured to autoextend, to have a maximum size? Certainly, and it's done with the same command used to turn autoextend on with a maximum size. To change the setting for the file we set at the beginning of this post we would:

SQL> alter database datafile '/whackenhut/smackdoodle/endoplasm/flurst01.dbf' autoextend on maxsize 2200M;

Database altered.

SQL>

and, voila!, it will now extend to 2200M instead of the 2000M we originally set as its limit. This won't succeed, however, if you're trying to set MAXSIZE smaller than the current file size and you have data that would be lost.

Of course one can always turn off autoextend:

SQL> alter database datafile '/whackenhut/smackdoodle/endoplasm/flurst01.dbf' autoextend off;

Database altered.

SQL>

Over the years (and through many trenches) I've come to look at autoextend as a useful tool in a DBA's tool belt, mostly because of improvements in storage technology and in the autoextend mechanism itself. I don't set all the files in a tablespace to autoextend, but having at least one configured to expand automatically can keep a busy DBA from receiving 2 A.M. calls because batch transactions are failing due to a lack of space. Whether you use the filesystem or ASM to manage your storage, autoextend can make DBA life easier (relatively speaking) so you can tackle that performance issue with the report which uses a 17-table, multi-million record join that absolutely, positively has to run in less than 2 minutes although it disables the use of every available index with function calls on indexed columns and a fistful of LIKE conditions which have wildcards on both ends. Did I forget to mention the ORDER BY?

At least with autoextend, you might not run out of space in TEMP before it returns data.

David Fitzjarrell has more than 20 years of administration experience with various releases of the Oracle DBMS. He has installed the Oracle software on many platforms, including UNIX, Windows and Linux, and monitored and tuned performance in those environments. He is knowledgeable in the traditional tools for performance tuning – the Oracle Wait Interface, Statspack, event 10046 and 10053 traces, tkprof, explain plan and autotrace – and has used these to great advantage at the U.S. Postal Service, American Airlines/SABRE, ConocoPhillips and SiriusXM Radio, among others, to increase throughput and improve the quality of the production system. He has also set up scripts to regularly monitor available space and set thresholds to notify DBAs of impending space shortages before they affect the production environment. These scripts generate data which can also be used to trend database growth over time, aiding in capacity planning.

He has used RMAN, Streams, RAC and Data Guard in Oracle installations to ensure full recoverability and failover capabilities as well as high availability, and has configured a 'cascading' set of DR databases using the primary DR databases as the source, managing the archivelog transfers manually and monitoring, through scripts, the health of these secondary DR databases. He has also used ASM, ASMM and ASSM to improve performance and manage storage and shared memory.
