Coarse vs. Fine-Grained Striping
When a new disk group is created, ASM will spread files in
1MB allocation unit (AU) chunks across all of the disks in the disk group.
This default method, called coarse striping, eliminates the need for
manual disk tuning. All disks in the same ASM disk
group should have about the same size and performance characteristics; this
helps to ensure that optimal I/O is attainable. For most installations, a
small number of disk groups should be sufficient (e.g., one disk group
designated as a work area and one for a recovery area). Thus,
even though the number of files and disks may increase, a constant number of
disk groups will probably have to be managed.
Most ASM files stored in the ASM instance will perform well
with normal (coarse) striping. However, files like online redo logs, flashback
logs, and control files typically require low latency. For these file
types, ASM provides fine-grained (128K) striping. In this case,
each AU is striped, and this tends to break up medium-sized I/O operations into
many smaller I/O operations that will be executed in parallel.
ASM failure groups identify a set of disks within a single disk group that
share a common resource whose failure must be tolerated. A typical example of
this concept is a series of disks connected to a single, common disk
controller. In this situation, if the controller failed, all of the disks on
the controller's common bus would become unavailable, even though the
individual disks are still working fine. (Note that what constitutes a failure group tends to be site-specific and will
depend upon the failure level(s) that a site can accept.)
By default, ASM assigns each disk to its own failure group.
The DBA can, of course, select a different disk grouping when creating a disk
group or adding a disk to a disk group. Once a set of failure groups is identified,
ASM will tend to optimize file layout so that the possibility of data
unavailability due to the failure of a shared resource is reduced.
Disk Group Mirroring
ASM provides three types of disk group mirroring.
The first type, external redundancy, does not utilize
failure groups at all and thus provides no mirroring. This type of
redundancy is best when either the DBA has decided to rely upon external
hardware or operating system mirroring, or when data loss due to
media failure is an acceptable risk. Normal-redundancy disk groups are the ASM default, and
they support two-way mirroring, which ensures that two copies of a file
extent will always exist in the disk group. Finally, high-redundancy
disk groups support triple mirroring, which ensures that there will
always be three copies of each file extent in the disk group.
When ASM allocates a new primary extent for a file to
one disk in a disk group, it allocates a mirror copy of that extent to
one of several available "partner" disks in the
same disk group. Because ASM ensures that a primary extent and its mirror copy
never reside in the same failure group, ASM can thus tolerate simultaneous
failures of multiple disks in a single failure group. In addition, since
ASM mirrors file extents instead of disks, ASM requires spare
capacity only within that disk group, not among all disks.
When a disk fails (and all disks eventually do!), ASM reads
the mirrored contents from the surviving disks and then automatically rebuilds
the complete contents of the disk that has failed onto the surviving disks in
the disk group. This also has the advantage of spreading out the I/O hit
from the recovery from a disk failure across several disks.
Disk Group Rebalancing
ASM automatically rebalances a disk group whenever
disks are added or dropped, so the rebalancing process requires no manual
intervention. Because ASM uses special indexing techniques to distribute
extents over all available disks in a disk group, re-striping of the data is
not required; instead, ASM moves only an amount of data proportional to the
amount of storage that has been added or removed. This evenly redistributes the
files and keeps a balanced I/O load across all disks in the disk group.
To avoid unnecessary data movement, Oracle recommends adding or dropping
multiple disks simultaneously rather than one at a time. This
tends to ensure that the disks will be rebalanced in a single operation.
Creating New Disk Groups
On to some practical examples! I have already set up some
additional simulated disks on my Linux server, so I will next create a new ASM
disk group via the CREATE DISKGROUP
SQL command. See Listing
3.3 for the sample code I used.
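Since Listing 3.3 itself is not reproduced here, the following sketch shows what a minimal CREATE DISKGROUP command might look like; the disk group name, failure group names, and raw device paths are illustrative assumptions, not taken from the listing:

```sql
-- Hypothetical sketch: create a normal-redundancy disk group from four
-- simulated disks, paired into two failure groups. All names and paths
-- below are illustrative only.
CREATE DISKGROUP dgroup1 NORMAL REDUNDANCY
  FAILGROUP fgroup1 DISK
    '/dev/raw/raw1' NAME dg1_disk1,
    '/dev/raw/raw2' NAME dg1_disk2
  FAILGROUP fgroup2 DISK
    '/dev/raw/raw3' NAME dg1_disk3,
    '/dev/raw/raw4' NAME dg1_disk4;
```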
Adding New Disks
If I want to add one or more ASM disks to an existing
ASM disk group, I can use the ALTER
DISKGROUP ADD DISK command. ASM performs the addition of ASM
disks to an ASM disk group in a single operation. I can add specific ASM disks
to a disk group by name, or I can choose to use a disk discovery
string. The nice thing about the disk discovery string is that if ASM
detects disks that are already part of an ASM disk group at the path
specified by that string, those disks will be ignored, and only the ASM disks
that are not yet in use will be added.
When I add an ASM disk to a disk group, the ASM instance
will determine if the disk is addressable and usable; if so, then the ASM disk
is formatted and the disk group is rebalanced. Rebalancing tends to be
time-consuming because it moves extents from every file in the ASM disk
group onto the new ASM disk. Though an ASM rebalancing operation does not block
any ongoing database operations, rebalancing will definitely have at
least some impact on the I/O load on the system. I can also tell the ASM
instance to expend more resources on the rebalancing by increasing the value of
the ASM_POWER_LIMIT initialization parameter. See Listing
3.4 for an example of adding a new disk to an existing disk group while
requesting a higher than normal amount of resources for rebalancing the
affected ASM disk group.
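A hedged sketch of such a command might look like the following; the disk group name, discovery string, and power level are illustrative assumptions:

```sql
-- Hypothetical sketch: add all not-yet-used ASM disks matched by the
-- discovery string, and request a faster-than-default rebalance.
ALTER DISKGROUP dgroup1
  ADD DISK '/dev/raw/raw*'
  REBALANCE POWER 5;
```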
I can easily resize an existing ASM disk via the ALTER DISKGROUP <disk_group> RESIZE <disk|failure_group|ALL> <size>;
command. ASM offers me the choice to resize all disks in the specified disk
group, all disks in the specified failure group, or just the specified
disk. Listing 3.5
shows examples of these three options.
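In the same spirit as that listing, the three resize variants might be sketched as follows (disk group, disk, failure group names, and sizes are illustrative assumptions):

```sql
-- Resize a single named disk ...
ALTER DISKGROUP dgroup1 RESIZE DISK dg1_disk1 SIZE 1024M;
-- ... every disk in one failure group ...
ALTER DISKGROUP dgroup1 RESIZE DISKS IN FAILGROUP fgroup1 SIZE 1024M;
-- ... or all disks in the disk group.
ALTER DISKGROUP dgroup1 RESIZE ALL SIZE 1024M;
```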
Verifying ASM Disk Groups
Keeping my ASM disk groups in ship-shape condition should be
one of my primary concerns. I can issue the ALTER DISKGROUP <disk group name> CHECK [NOREPAIR];
command to verify internal consistency of ASM disk group metadata and repair
any possible errors. This command checks each file, each disk, or all
components of the ASM disk group for any errors and will attempt to repair
them. Specifying the NOREPAIR
directive alerts me to any errors and bypasses any repairs; all errors will
still be reported in the ASM instance's alert log. Listing 3.6 shows
an example of ASM disk group verification via this command.
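For illustration, a minimal pair of check commands might look like this (the disk group name is an assumption):

```sql
-- Check all disk group metadata and attempt to repair any errors found
ALTER DISKGROUP dgroup1 CHECK ALL;
-- Report errors only; skip repairs (errors still go to the alert log)
ALTER DISKGROUP dgroup1 CHECK ALL NOREPAIR;
```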
Manually Rebalancing a Disk Group
Even though ASM will automatically rebalance a disk group
whenever its storage configuration changes (e.g., when disks are added,
dropped, or resized), I
also have the option to initiate a disk rebalancing manually. Issuing the ALTER DISKGROUP <disk group
name> REBALANCE [POWER n]; command requests
ASM to perform rebalancing on the specified ASM disk group, provided
that any rebalancing is needed.
If the optional POWER directive is specified, ASM will rebalance
the disk group using the integer value to override the value that the ASM_POWER_LIMIT
initialization parameter has set forth. I can thus override the speed of
a potential rebalancing or change the power level of an ongoing
rebalance operation by changing the POWER directive's value. In addition, if I set POWER to zero
(0), any rebalancing on the disk group is halted until I explicitly or implicitly
invoke a rebalancing operation again. See Listing 3.7 for an
example of manually rebalancing a selected ASM disk group.
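As a hedged sketch of such a command (the disk group name and power levels are illustrative assumptions):

```sql
-- Manually rebalance, overriding ASM_POWER_LIMIT for this operation only
ALTER DISKGROUP dgroup1 REBALANCE POWER 8;
-- Set POWER to zero to halt an ongoing rebalance
ALTER DISKGROUP dgroup1 REBALANCE POWER 0;
```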
Mounting and Dismounting ASM Disk Groups
Dismounting a mounted ASM disk group is performed via the ALTER DISKGROUP <disk group
name> DISMOUNT; command. Its reciprocal is the MOUNT
directive. See Listing
3.8 for examples of both of these commands.
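A minimal sketch of the two commands might look like this (the disk group name is an assumption; the ALL keyword operates on every disk group at once):

```sql
ALTER DISKGROUP dgroup1 DISMOUNT;
ALTER DISKGROUP dgroup1 MOUNT;
-- or for all disk groups at once:
ALTER DISKGROUP ALL DISMOUNT;
ALTER DISKGROUP ALL MOUNT;
```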
Removing Disks from a Disk Group
I can issue the ALTER DISKGROUP <disk group name> DROP DISK <disk name>; command
to drop a single ASM disk from an existing ASM disk group. I can
also "undrop" (i.e., cancel the pending drop of) any ASM disks that
were previously assigned to ASM disk group(s) with the ALTER DISKGROUP <disk group name | ALL> UNDROP DISKS; command.
However, note that once a disk drop operation completes successfully, issuing UNDROP will
not recover the disks. See Listing
3.9 for examples of these two commands.
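In the spirit of that listing, a hedged sketch of the two commands (disk group and disk names are illustrative assumptions):

```sql
-- Drop a single disk; rebalancing begins automatically
ALTER DISKGROUP dgroup1 DROP DISK dg1_disk4;
-- Cancel the pending drop, provided it has not yet completed
ALTER DISKGROUP dgroup1 UNDROP DISKS;
```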
Dropping a Disk Group
Finally, I can drop an entire existing ASM disk group,
along with all of its files, via the DROP DISKGROUP command. If the disk group
still contains any files besides internal ASM metadata, then I have to specify
the INCLUDING CONTENTS
option as well. Also, the disk group to be dropped must be mounted. ASM will
first validate that none of the disk group files are open, and then remove all
of the assigned ASM disks and the disk group itself from the ASM instance. ASM
will also overwrite the header of each previously used ASM disk to remove any
remaining ASM formatting information. Listing 3.10 shows
an example of this command.
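A minimal sketch of the command (the disk group name is an assumption):

```sql
-- Drop the disk group and everything in it; fails without
-- INCLUDING CONTENTS if the group still holds non-metadata files
DROP DISKGROUP dgroup1 INCLUDING CONTENTS;
```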
Migrating a Database to ASM Storage
As discussed in the prior article in this series, ASM files cannot
be accessed through normal operating system interfaces. Therefore, RMAN is
the only means for copying files from regular storage to ASM, and is the only
tool a DBA can use to migrate an entire database from regular to ASM storage.
Oracle recommends the following process for a full database migration:
Step 1. Take a complete backup of the database before
proceeding with any migration efforts. This will provide a relatively safe
fallback position in case any of the migration steps outlined below should fail.
Step 2. Gather the file names of the current control
files and online redo logs using V$CONTROLFILE and V$LOGFILE.
Step 3. Shut down the database in consistent mode
using either SHUTDOWN IMMEDIATE
or SHUTDOWN NORMAL.
This is a crucial step.
Step 4. Change the database's server parameter file:
Set the necessary OMF destination parameters to the desired ASM disk group(s).
Remove the CONTROL_FILES initialization parameter.
Step 5. Restore a control file from backup. This will
migrate the control files to ASM storage automatically. Note that if an OMF
control file is created but there is a server parameter file, then a CONTROL_FILES
initialization parameter entry will also be created in the server parameter file.
Step 6. Issue ALTER DATABASE MOUNT; to mount the database.
Step 7. Edit and run an RMAN command file that:
Backs up the database
Switches the current data files to the backups
Renames the online redo logs. (You can also move only individual tablespaces or
data files using the BACKUP AS COPY method.)
Step 8. Once the DBA is certain that the migration
has been successful, she can delete the original database files left
behind in the "regular" storage directories.
An example RMAN command script to accomplish Step 7 is shown
in Listing 3.11.
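Since the listing itself is not reproduced here, the following is a hedged sketch of what such an RMAN command file might contain; the disk group name and redo log sizes are illustrative assumptions:

```
RUN {
  # Back up the database as image copies into the ASM disk group
  BACKUP AS COPY DATABASE FORMAT '+DGROUP1';
  # Point the data files at the ASM image copies just created
  SWITCH DATABASE TO COPY;
}
# After opening the database, the online redo logs can be recreated
# in ASM and the file-system originals dropped, e.g. in SQL*Plus:
#   ALTER DATABASE ADD LOGFILE '+DGROUP1' SIZE 100M;
#   ALTER DATABASE DROP LOGFILE GROUP n;
```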
While I have endeavored to introduce practical examples
of how to employ Automatic Storage Management (ASM) within multiple Oracle 10g
environments, I have truly just scratched the surface of what
ASM can accomplish. ASM places extremely powerful file management tools into
the hands of every Oracle DBA, and it is definitely worth the effort to explore
its features no matter the size of the database environment.
References and Additional Reading
While there is no substitute for direct experience, reading
the manual is not a bad idea, either. I have drawn upon the following Oracle
10g documentation for the deeper technical details of this article:
B10739-01 Oracle Database
Administrator's Guide, Chapter 12
B10743-01 Oracle Database
Concepts, Chapter 14
B10755-01 Oracle Database
Reference
B10759-01 Oracle Database
SQL Reference
See All Articles by Columnist Jim Czuprynski