Increase DSS Performance by 100x and OLTP by 2x: Switch to Oracle 12c

By Natik Ameen

With the release of Oracle 12c in September 2013, more than 500 new database features and enhancements were introduced to address specific problems customers were facing in their database environments. This month, Larry Ellison introduced one of the largest performance-enhancing features, the Oracle 12c In-Memory database option. It has dramatically changed the database management industry by increasing DSS performance by 100x and OLTP by 2x.

Traditionally, data has been stored in databases in row format. There is, however, an alternate scheme: storing the data in columnar format. Most relational databases use the row format, but for faster query processing, specifically in data warehouse databases, data stored in columnar format performs significantly better. The latter scheme is also used in Big Data database solutions. The benefit of columnar storage is that in analytical and Big Data databases you get aggregate results much faster, because the data is stored the way it is meant to be retrieved. I’ll explain more later.

Oracle, too, has always stored data in the traditional row format. However, to enhance performance and address the requirements of processing growing data sets, it has added the In-Memory option in 12c. Once you enable this feature, your data is available in the SGA in both the row and the columnar format. This approach of keeping the data available in both formats is very different from how competitors have implemented their columnar solutions. By allowing both formats to co-exist in the SGA, data access is efficient for OLTP as well as DSS workloads. No other database solution had implemented the feature this way before Oracle did.

When a table is configured to use the In-Memory option, the data in its columns is stored using compression and deduplication. This mechanism increases data density, helping reduce the number of blocks Oracle has to visit to satisfy a query. This is the secret behind being able to speed up performance by up to 100x in Oracle 12c.
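As a rough sketch of what this looks like in practice (using the same T1 example table that appears in the enabling examples later), the level of compression is chosen per table with the MEMCOMPRESS clause, trading scan speed against memory footprint:

-- choose how aggressively the column store compresses t1
alter table t1 inmemory memcompress for query low;     -- the default: fastest scans
alter table t1 inmemory memcompress for capacity high; -- densest storage, slower scans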

So What Motivated Oracle to Come Up with This Option?

Faster Analytical Queries

The first design goal was, obviously, to improve query performance. With the In-Memory option enabled for a table, query processing can be up to 100x faster.

Faster OLTP

This goal appears to contradict the first one. Keeping an extra copy of a table in memory means extra work for inserts, updates and deletes. Yet with this feature, OLTP performance increases by 2x.

So Then How Does OLTP Perform 2x Faster?

Most tables in a typical relational database carry a number of indexes, and the overhead of maintaining those indexes is what drags OLTP performance down: DML is bogged down by having to update the index structures. According to Oracle, research has shown that very few indexes are actually required for OLTP operations; most exist to support analytical query requirements.

When tables are marked to be stored in the In-Memory cache, the indexes that exist only to support analytical queries can be dropped. This results in a 60-80% reduction in indexes and hence faster OLTP operations compared to a typical database configuration.

Real Time Analytical Queries

You can now effectively architect your OLTP database to provide real-time analytical data directly from the OLTP database. The two workloads can co-exist more than ever before without impacting each other, and since no indexes are required for these queries, there is no additional overhead.

Previously, before writing analytical queries, the user had to make sure indexes existed on the required columns. Now, with the data in columnar format in memory, this is no longer necessary. That provides major flexibility, allowing you to slice and dice the data any way you want, without having to consider whether an analytical index exists on a column. That is a major achievement!
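As an illustration, consider a purely hypothetical ad-hoc query like the one below, where SALES, REGION, AMOUNT and SALE_DATE are placeholder names and no index exists on any of these columns. With the table in the In-Memory column store, the query can be answered by scanning just the referenced columns in memory:

-- hypothetical ad-hoc query; no analytical index needed on region or amount
select region, sum(amount)
from   sales
where  sale_date >= date '2014-01-01'
group  by region;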

Transparency

The third and final major design goal was 100% transparency. Existing end users should not have to make any changes to their applications. Since the data is still physically stored in the row format, no code change is required at all!

“Everything just works,” says Larry Ellison.

Does This Require Any Application Changes?

Absolutely no changes are required to your existing applications to use this feature. There is also no limit on how you access the in-memory data; you can run just about anything you used to run against a typical Oracle database.

And More…

The best part is that your data will be as safe as it has always been. It retains the same high availability and scalability features, and all other database features remain fully available and compatible with this option.

The only notable requirement is additional RAM on the database host for the columnar storage in the SGA. In its favor, RAM is getting cheaper, much like the other hardware components of your data center. Also, Oracle does not keep all of your data in memory; it stores only the most active data, and if certain parts of the data become inactive, they are removed from the In-Memory cache.

Since compression is used, you can actually store more data than the amount of memory available would otherwise allow. This in turn allows more data to be packed into the same space, and hence a higher likelihood that the required data will already be in memory.
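If you want a say in which objects make it into the column store first, the INMEMORY clause also accepts a PRIORITY setting (again shown against placeholder table names):

-- populate t1 shortly after instance startup instead of waiting for the first scan
alter table t1 inmemory priority high;

-- take a table out of the column store entirely and free its memory
alter table t2 no inmemory;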

Enabling In-memory

Enough hype; let’s get down to business. How do you actually enable this?

All you need to do is set the INMEMORY_SIZE parameter to the amount of memory you want to allocate for the column cache. When a transaction is an OLTP operation, it automatically accesses the row-format data; when it is a DSS-type operation, the In-Memory cache is accessed.

alter system set inmemory_size=2G scope=spfile;
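INMEMORY_SIZE is a static parameter, so the new value takes effect after an instance restart. Once the instance is back up, a quick sanity check might look like this:

-- confirm the setting and see how the column store pools were allocated
show parameter inmemory_size

select pool, alloc_bytes, used_bytes
from   v$inmemory_area;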

Second, enable the In-Memory option for the tables you want to cache.

alter table t1 inmemory;
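With the default priority, the table is populated into the column store the first time it is fully scanned. You can confirm the population, and see how well the compression worked, with a query along these lines:

-- bytes is the on-disk size; inmemory_size is the compressed size in the column store
select segment_name, bytes, inmemory_size, populate_status
from   v$im_segments
where  segment_name = 'T1';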

The third step is optional but very important: drop the analytical indexes on the table!
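There is no special In-Memory syntax for this step; it is an ordinary index drop (the index name below is purely a placeholder for an index that existed only to serve analytical queries):

drop index t1_region_amount_idx;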

So, as we can see, apart from the other new features in Oracle 12c, the In-Memory option by itself is a major improvement, greatly enhancing DSS and OLTP performance while keeping the solution simple and secure. Not having to make any changes to the application and simply being able to “turn it on” is truly a great achievement.

So are you ready to use Oracle 12c?

Author’s Bio

Natik Ameen is one of VST’s internal Oracle experts and the project manager for VST’s free Oracle 12c course. He is an Oracle production DBA, a certified RAC expert (OCE), and has been a DBA-track certification trainer for over 15 years. He is an Oracle evangelist and has presented at IOUG and UTOUG conferences on topics such as RAC, DataGuard and GoldenGate 12c. He is also an active blogger and has published over 150 articles on Oracle Database 12c and Oracle GoldenGate.
