Oracle Database 11g R2 RAC – The top 5 changes


The introduction of the Oracle Grid Infrastructure product essentially combines Automatic Storage Management (ASM) and Oracle Clusterware into the same Oracle home and product. With this new configuration, five main changes have been incorporated into Oracle Database 11g RAC. Read on to learn more.

With the release of Oracle Database 11g R2, a fundamental change was made that impacts Real Application Clusters (RAC) based systems. The cause of these changes is the introduction of the Oracle Grid Infrastructure product, which essentially combines Automatic Storage Management (ASM) and Oracle Clusterware into the same Oracle home and product.

ASM was previously incorporated into the Oracle Database home and binaries, and Oracle Clusterware was a stand-alone product installed with its own home and binaries.

With this new configuration, five main changes have been incorporated into Oracle Database 11g RAC: installing and working with Grid Infrastructure (which includes setting up and configuring ASM), Single Client Access Names (SCAN), RAC One Node, Automatic Workload Management and the ASM Cluster File System (ACFS).

Single Client Access Name (SCAN)

SCAN simplifies the activities involved in adding and removing nodes from a RAC environment by providing a single connection setting for all clients to use when connecting to a RAC database. Connections use the EZConnect method to reach the database regardless of which host machines the instances are currently running on.
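As a simple illustration (the SCAN name sales-scan.example.com and the service name sales.example.com below are hypothetical), an EZConnect client connection might look like this:

    $ sqlplus scott@//sales-scan.example.com:1521/sales.example.com

The same connect string works no matter which nodes the service is currently running on, because the SCAN listeners hand each connection off to a local listener on one of the available instances.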

For more information on Single Client Access Names, see my earlier article, Single Client Access Name.

RAC One Node

RAC One Node is a feature available with Oracle 11g R2 that essentially allows you to run a single-instance database in a cluster and take advantage of the high availability and failover capabilities of Oracle RAC.

The database still needs to be built as a "RAC" database; however, rather than selecting multiple nodes when running DBCA, the DBA would select only one node, running one instance, for the database.

If the node fails or needs to be taken down for maintenance, the single instance can be migrated over to another node in the existing cluster. Even sessions can be automatically failed over to the new instance if Transparent Application Failover (TAF) has been configured.
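As a sketch of what a planned migration could look like (the database name RONE1 and the target node rac02 are hypothetical, and in the initial 11.2.0.1 release the separately downloaded Omotion utility was used instead), later 11g R2 patch sets allow an online relocation with srvctl:

    $ srvctl relocate database -d RONE1 -n rac02 -w 30 -v

Here -n names the target node, -w sets the relocation timeout in minutes and -v produces verbose output.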

On Linux, a patch (RACONENODE_p9004119_112010_LINUX.zip) may need to be applied to the Oracle home prior to building the database. For more information on managing a RAC One Node database, see Administering Oracle RAC One Node.

Oracle Grid Infrastructure

The Oracle Grid Infrastructure is a separate set of binaries from the Oracle Database software. It incorporates volume management functionality, file system functionality and the cluster management software. Essentially, Oracle Grid Infrastructure combines three products into a single Oracle home: Oracle Clusterware, Automatic Storage Management (ASM) and the ASM Cluster File System (ACFS). In previous releases, Oracle Clusterware and ASM were installed into separate Oracle homes; ASM was included as part of the Oracle Database binaries, and Oracle Clusterware was a separate install. ACFS is new in Oracle 11g R2.

The Clusterware component of the Grid Infrastructure includes Cluster Management Services (known as Cluster Ready Services (CRS) in previous releases) and High Availability Services for both Oracle and third-party products. Installing Oracle Clusterware is a prerequisite for working with Oracle RAC.

Starting with Oracle 11g R2, ASM disks can be used for the clusterware files (the OCR and voting disks). In previous releases, the OCR and voting disk files had to be on raw partitions or a separate cluster file system.
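To verify where these files currently reside, the standard utilities can be run from the Grid Infrastructure home on any node; for example:

    $ ocrcheck
    $ crsctl query css votedisk

The first reports the location and integrity of the OCR, and the second lists the voting disks (which, in 11g R2, may now show ASM disk paths).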

ACFS is a general-purpose cluster file system that can be used for a wide variety of file types.

The
combination of these three components of the Oracle Grid Infrastructure now
provides the primary foundation for Oracle 11g R2 RAC databases.

The six
primary functions of Oracle Clusterware are Cluster Management, Node
Monitoring, Event Services, Network Management, Time Synchronization and High
Availability.

Cluster Management allows cluster resources (such as databases, instances, listeners and services) to be monitored and managed from any node that is part of the defined cluster.
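For example, all resources registered in the cluster can be listed in tabular form from any node with:

    $ crsctl status resource -t

and an individual database can be checked with srvctl (the database name ORCL here is hypothetical):

    $ srvctl status database -d ORCL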

Node
Monitoring is the "heartbeat" check of the nodes (and the resources
running on them) to make sure they are available and functioning properly.

Event
Services is the high availability messaging and response functionality for RAC.

Network Management involves managing the Virtual IP (VIP) addresses associated with the cluster and provides consistent access to the RAC database regardless of which systems are hosting it.
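For example, the VIP configuration for a given node can be displayed with srvctl (the node name racnode1 is hypothetical):

    $ srvctl config vip -n racnode1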

Time Synchronization, provided by the Cluster Time Synchronization Service (CTSS), is a new feature in Oracle 11g R2 that automatically synchronizes the clocks of all of the nodes in the cluster. In previous releases, third-party tools were generally used. Time Synchronization can run in observer mode (if a Network Time Protocol configuration is already in place) or in active mode, where one node is designated as the master node and all of the others are synchronized to it.
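Which mode is in effect can be checked with:

    $ crsctl check ctss

which reports whether the Cluster Time Synchronization Service is running in observer or active mode.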

High
Availability services monitor and restart any of the resources being managed by
Oracle Clusterware.

For additional information about installing the Oracle Grid Infrastructure on Linux, see the Oracle Grid Infrastructure Installation Guide.

Automatic Workload Management (Policy Managed Databases)

In prior releases of Oracle RAC, the DBA would explicitly manage which nodes of a cluster would be used to run the various instances associated with a RAC database. Additionally, database services would be manually configured with preferred and available instances, which facilitated the balancing of connections and failover of the services in a RAC environment.

In Oracle Database 11g R2, a DBA can now configure a feature called policy-based management. This involves defining a server pool with a minimum number of servers, a maximum number of servers and an importance level. The database itself is then associated with a server pool rather than a specific set of nodes, allowing Oracle to dynamically deliver services based on the total resources available to the cluster.

For example, suppose a cluster consists of eight nodes in total and supports three RAC databases, each defined with a minimum and maximum number of servers. Let's assume that DB1 is defined with a minimum of 4 and a maximum of 6 with an importance of 10, DB2 is defined with a minimum of 2 and a maximum of 3 with an importance of 7, and DB3 is set to a minimum of 2 and a maximum of 3 with an importance of 5.
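A sketch of how these pools might be set up with srvctl (the pool names are hypothetical; this uses the 11g R2 single-dash option syntax, where -l is the minimum, -u the maximum and -i the importance):

    $ srvctl add srvpool -g pool_db1 -l 4 -u 6 -i 10
    $ srvctl add srvpool -g pool_db2 -l 2 -u 3 -i 7
    $ srvctl add srvpool -g pool_db3 -l 2 -u 3 -i 5

Each database is then associated with its pool, for example with srvctl modify database -d DB1 -g pool_db1.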

Initially, the eight nodes could be configured as nodes 1-4 allocated to DB1, nodes 5-6 allocated to DB2 and nodes 7-8 allocated to DB3. If node 3 failed for some reason, the system would allocate node 7 or 8 to DB1, because DB1 has a higher importance than DB3 and a minimum requirement of 4 servers, even though this would cause DB3 to fall below its minimum number of servers. If node 3 is re-enabled, it would immediately be allocated to DB3 to bring that database back to its minimum required servers.

If a 9th node
were added to the cluster, it would be assigned to DB1 because of the
importance factor and the fact that DB1 has not yet hit its maximum number of
servers.

For more information on policy-managed databases, see the Oracle Real Application Clusters Administration and Deployment Guide.

ASM Cluster File System (ACFS)

ACFS is a multi-platform cluster file system designed to run on any platform supported by ASM, rather than being specific to any one platform as some of the third-party products are.

ACFS uses a file type called a dynamic volume, which can be used as a volume for a regular file system. The ASM Dynamic Volume Manager (ADVM) provides the interface between the dynamic volumes and ACFS.
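A minimal sketch of creating a dynamic volume and an ACFS file system on Linux (the disk group DATA, the volume name testvol, the generated device name and the mount point are all illustrative and will differ on a real system):

    ASMCMD> volcreate -G DATA -s 10G testvol
    ASMCMD> volinfo -G DATA testvol
    # mkfs -t acfs /dev/asm/testvol-123
    # mkdir -p /u01/app/acfsmounts/testvol
    # mount -t acfs /dev/asm/testvol-123 /u01/app/acfsmounts/testvol

volinfo shows the actual device name that ADVM generated; mkfs and mount are then run as root against that device.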

ACFS also provides a snapshot capability that effectively version-enables the file system. These snapshots are online, point-in-time copies of an ACFS file system. The storage for the snapshots is managed within ACFS and is very space efficient: before file data is modified in ACFS, the original version is preserved in the snapshot, so snapshots capture only the changed data. Snapshots can be created on demand to provide consistent views of the ACFS system.
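A sketch of taking and removing an on-demand snapshot with acfsutil (the snapshot name and mount point are carried over from the hypothetical example above):

    # /sbin/acfsutil snap create before_upgrade /u01/app/acfsmounts/testvol
    # /sbin/acfsutil snap delete before_upgrade /u01/app/acfsmounts/testvol

The snapshot contents appear under the hidden .ACFS/snaps directory inside the mount point.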

While ACFS
can be used to store most files associated with Oracle, it must be noted that
the Oracle Grid Infrastructure binaries themselves cannot be stored in ACFS.
They must be installed locally on each node in the cluster.

All of the necessary services to manage ACFS file systems are automatically managed by Oracle Clusterware. See the Oracle Automatic Storage Management Administrator's Guide for more information about ACFS.

Conclusion

Since Oracle first released Real Application Clusters with Oracle Database 9i, RAC has been Oracle's premier high availability solution for databases. Functionality and improvements have been added in every release since, and Oracle 11g R2 is no exception.

Karen Reliford
Karen Reliford is an IT professional who has been in the industry for over 25 years. Karen's experience ranges from programming, to database administration, to Information Systems Auditing, to consulting and now primarily to sharing her knowledge as an Oracle Certified Instructor in the Oracle University Partner Network. Karen currently works for TransAmerica Training Management, one of the foremost Oracle Authorized Education Centers (OAEC) in the Oracle University North America region. TransAmerica Training Management offers official Oracle and Peoplesoft Training in Coral Gables FL, Fayetteville AR, Albuquerque NM, Providence RI and San Juan PR. Karen has now been teaching Oracle for Oracle University for more than 15 years. Karen has attained her Certified Technical Trainer designation along with several Oracle certifications including OCP-DBA, OCP-Internet Developer, Oracle Expert - Oracle 10g RAC and Oracle Expert - Oracle Application Express (3.2). Additionally, Karen achieved her Oracle 10g Oracle Certified Master (OCM) in 2008. Karen was raised in Canada, and in November 2009 became a US Citizen. Karen resides in Columbus OH with her husband, Ron along with their 20 pets, affectionately referred to as the "Reliford Zoo".
