Oracle RAC Administration – Part 14: Services Architecture in Workload Management

Brief intro

There is a lot of hype around virtualization. It is a $20
billion market, and although heavy, transaction-intensive applications and Oracle
RAC may not find a place in your virtual infrastructure today, someday they
surely will. Disk virtualization and network virtualization not only address the
increasingly crucial issue of cost but also take a harder look at
manageability. Oracle and Oracle RAC installations have always been big deals;
if you do a search on Oracle’s site, you’ll find installation documents all
over the place.

Performance is where the meat is, and even in the virtualization
arena there are developments. SPEC, a non-profit standards body, is working on a
model for virtualization benchmarks, and VMware will be introducing its VMmark
benchmark soon. Someday we will have a globally deployable
Oracle RAC on a virtual infrastructure, like the screenshot on my blog post!

In this article, we will focus on workload management.
As we start our discussion, I hope to have my 4-node RAC ready on the
Enterprise Linux provided by Oracle, running on VMware’s ESX 3.0.1 server
(the latest version) with enough memory and 4 vCPUs!

Introduction to Workload Management

Workload management helps us
manage and distribute our workload to achieve peak performance and high
availability. So what does workload management consist of?

  • Services: If you remember the
    services architecture, we created our service above the physical nodes not
    only to provide transparency but also to mask the individual nodes and give us
    a typical grid. This way we can group our RAC databases into separate entities.
    We will also go ahead and create more entities (services) to address the needs
    of our applications, such as OE, HR, Sales, etc.

  • Connection Load Balancing:
    This feature of Oracle Net Services helps spread incoming connections evenly
    across the RAC instances. (If you remember, running crs_stat -t, or plain
    crs_stat for a detailed description, lists all the cluster resources and
    their states.)

  • High Availability: A RAC component that ensures
    the cluster stays online at all times.

  • Fast Application Notification (FAN):
    This notifier continuously alerts applications to any changes in the
    cluster configuration and the workload of services.

  • Load Balancing Advisory: As the name suggests, this provides
    applications with the necessary information about the service levels the
    instances are currently delivering. It advises applications to direct their
    requests to the best available instance for a given service, depending on the
    policy defined by you (the DBA).

  • Fast Connection Failover: This
    enables clients to fail over rapidly to the available nodes by being
    adequately informed through FAN events.

  • Runtime Connection Load Balancing:
    At run time, the connection pool hands each work request to the connection
    best placed to complete it, guided by load balancing advisory data, thus the
    name. Continuity, or high availability, is at the core of the whole RAC
    operation, and this intelligent mechanism helps ensure that operations that
    are started get completed.
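Several of these pieces meet in the client connect descriptor. As a rough sketch (the host names, port, and the oltp service name are all assumptions, not taken from any real cluster), a tnsnames.ora entry that load-balances and fails over across two nodes of a service might look like this:

```
OLTP =
  (DESCRIPTION =
    (LOAD_BALANCE = ON)
    (FAILOVER = ON)
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = oltp)
    )
  )
```

Note that the client connects to the service name, not to an instance SID, which is what keeps the individual nodes masked.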

Let’s take a look at the above mentioned components.

Services

Although services are at the core of the services architecture of RAC, and
could fill an extended topic on their own, let’s just briefly see what they are
about; we will see what we can do with them in the upcoming articles on the more
practical aspects of services.

Now, when a user connects to a database, it is preferable to connect at the service
layer by specifying the service in the connection string. As I mentioned above,
we can create more services to address and logically separate the needs of
clients and applications without tying them to individual nodes. Let’s look at the
various service-level deployments:

1. Using Oracle Services
2. Default Service Connections
3. Connection Load Balancing

Oracle Services: This is a perfect way to manage applications or subsets of
applications. Simply put, OLTP users and DWH/batch operations can each have their
own services assigned to them. Service-level requirements should be the same for
all users/applications assigned to a service. When defining a service, you have
the opportunity to define which instances will support that service. These become
the preferred instances, while the ones that provide failover support are known
as available instances.
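As a concrete sketch of creating such a service with srvctl (the database name orcl, the service name oltp, and the instance names are all hypothetical):

```shell
# Create an "oltp" service on database "orcl": instances orcl1 and orcl2
# are preferred (-r), while orcl3 and orcl4 are available (-a) for failover.
srvctl add service -d orcl -s oltp -r orcl1,orcl2 -a orcl3,orcl4

# Start the service on its preferred instances and check where it is running.
srvctl start service -d orcl -s oltp
srvctl status service -d orcl -s oltp
```

These commands require a live cluster, so treat them as a sketch rather than a verified session.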

When you specify PREFERRED
instances, that particular set of instances is brought in to support
the service and its subset of applications. Should one or more of
these PREFERRED instances
fail, failover takes place and the service is moved over to the AVAILABLE
instances. Should the failed
instances come back online, the service will not fall back to the PREFERRED
instances, simply because the AVAILABLE instance has successfully met the
service-level requirement and is doing a fine job of
providing high availability. Thus, there is no need to incur another outage just
to move back to the originally designated PREFERRED
instances. Do note, however, that you can easily fail the service back to the
original configuration by using FAN callouts or the srvctl relocate service
command.
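To illustrate, a FAN callout is simply an executable dropped into the callout directory (commonly $CRS_HOME/racg/usrco) that Oracle runs with the event attributes passed as name=value arguments. A minimal sketch of a fail-back callout follows; the service, database, and instance names are assumptions chosen purely for illustration:

```shell
#!/bin/sh
# Hypothetical FAN callout: when preferred instance orcl1 comes back up,
# relocate the "oltp" service from orcl2 back onto orcl1.
# FAN invokes callouts with arguments such as:
#   INSTANCE ... status=up instance=orcl1 database=orcl ...
for ARG in "$@"; do
  case "$ARG" in
    status=*)   STATUS=${ARG#status=} ;;
    instance=*) INST=${ARG#instance=} ;;
    database=*) DB=${ARG#database=} ;;
  esac
done

if [ "$STATUS" = "up" ] && [ "$INST" = "orcl1" ]; then
  srvctl relocate service -d "$DB" -s oltp -i orcl2 -t orcl1
fi
```

The script must be executable by the Oracle software owner; keep callouts short, since they run for every published event.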

Also, note that resource profiles are
automatically created when you define a service. A resource profile takes care
of the manageability of the service and defines its dependencies on the
instance and the database: starting a service automatically starts the database
and instances it depends on, and stopping the database brings down the services
that depend on it. So keep these dependencies in mind, and use caution, when
you start and stop services and databases.

Services are integrated with the Resource Manager, thus
enabling us to restrict the resources that may be used by a service.
Consumer groups are mapped to services, so connecting users become members of
the specific consumer group. AWR (Automatic Workload Repository) helps us
monitor performance per service.
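As a quick illustration of per-service monitoring, the dynamic service statistics view can be queried directly (AWR snapshots this same data into its history tables); the oltp service name below is an assumption:

```shell
# Per-service workload figures from the dynamic service statistics view.
sqlplus -s / as sysdba <<'EOF'
SELECT service_name, stat_name, value
FROM   v$service_stats
WHERE  service_name = 'oltp'
AND    stat_name IN ('DB time', 'user calls');
EOF
```

Comparing DB time across services is a simple way to see which service is consuming the cluster.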

Oracle Net Services provides connection load
balancing. This can be enabled simply by setting the connection load balancing
goal for the service, CLB_GOAL. It is also possible to specify
a single TAF policy for all users of a specific service by defining
FAILOVER_METHOD, FAILOVER_TYPE, and so on.
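In 10g, these service attributes can be set through the DBMS_SERVICE package. A hedged sketch follows; the oltp service name and the particular goal and retry values are chosen purely for illustration:

```shell
# Set the CLB goal and a service-wide TAF policy via DBMS_SERVICE.
sqlplus -s / as sysdba <<'EOF'
BEGIN
  DBMS_SERVICE.MODIFY_SERVICE(
    service_name     => 'oltp',
    goal             => DBMS_SERVICE.GOAL_THROUGHPUT,
    clb_goal         => DBMS_SERVICE.CLB_GOAL_SHORT,
    failover_method  => DBMS_SERVICE.FAILOVER_METHOD_BASIC,
    failover_type    => DBMS_SERVICE.FAILOVER_TYPE_SELECT,
    failover_retries => 180,
    failover_delay   => 5);
END;
/
EOF
```

CLB_GOAL_SHORT suits short-lived connections such as those from connection pools, while CLB_GOAL_LONG suits long-lived dedicated sessions.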

Conclusion

In future articles we will continue what we’ve started here
and try to stay on course, describing the services deployment scenarios.

See All Articles by Columnist
Tarry Singh

Tarry Singh
I have been active in several industries since 1991. While working in the maritime industry I have worked for several Fortune 500 firms such as NYK, A.P. Møller-Mærsk Group. I made a career switch, emigrated, learned a new language and moved into the IT industry starting 2000. Since then I have been a Sr. DBA, (Technical) Project Manager, Sr. Consultant, Infrastructure Specialist (Clustering, Load Balancing, Networks, Databases) and (currently) Virtualization/Cloud Computing Expert and Global Sourcing in the IT industry. My deep understanding of multi-cultural issues (having worked across the globe) and international exposure has not only helped me successfully relaunch my career in a new industry but also helped me stay successful in what I do. I believe in "worknets" and "collective or swarm intelligence". As a trainer (technical as well as non-technical) I have trained staff both on national and international level. I am very devoted, perspicacious and hard working.
