Posted December 19, 2013

In-Memory OLTP Samples for SQL Server 2014

By Greg Larsen

On November 4th, Microsoft released a new set of samples targeted at In-Memory OLTP tables in SQL Server 2014. In this article I will be exploring and explaining how to use these new samples. If you haven't yet looked into the new In-Memory OLTP feature in SQL Server 2014 CTP2, you should consider downloading SQL Server 2014 CTP2 and these samples to better understand this new performance-related feature coming in the next release of SQL Server.

How to Get the New Samples, and Where to Start

I’m sure you all want to know first and foremost where you can get these new samples.  They are available on CodePlex at the following location:  http://msftdbprodsamples.codeplex.com/releases/view/114491

When you download the samples from this location you will get a 46K zip file that contains a Word document and a SQLCMD script. The Word document explains how to install and use the samples, and the SQLCMD script upgrades the AdventureWorks2012 database to use In-Memory tables and then creates some sample tables and stored procedures.

You should start by reading the Word document, which is named "aw_inmem_sample.docx". This document contains the following sections:

  • Prerequisites
  • Installing The AdventureWorks In-Memory OLTP Samples
  • Structure Of the Samples
  • Performance Measurements using Ostress

My Testing Environment

Prior to walking through installing the new In-Memory samples, let me explain the environment I will be using for installing and testing them. I will be using my laptop, which has 8 GB of memory and an Intel i7 Q740 processor with a single disk drive spinning at 7200 RPM. On this machine I created a VMware virtual machine. The virtual machine has been set up with two virtual hard drives, where one is used for the C drive and the other for the E drive. I also allocated 4 GB of memory and 4 processors to this virtual machine to get the samples to work. When installing the AdventureWorks2012 sample database I placed the DATA file on the C drive and the LOG file on the E drive.

Walk-Through of Installing the Samples

In this section I will explain my experience as I installed these samples.

First I started by reading the Word document. The "Prerequisites" section clearly states that SQL Server 2014 CTP2 is required to run these samples, so I created a fresh VMware guest on my laptop running Windows Server 2012 R2. The document recommends setting up a machine the same size as your production machine. Clearly I couldn't do that on my laptop, so I first set up a VM using the default memory limit of 2 GB. This turned out not to be enough memory to handle the samples. Remember that In-Memory OLTP tables consume memory, so when sizing your server's memory you need to consider how many tables you plan on storing in memory. If you plan to run these samples I would suggest your machine have at least 4 GB of memory to start with. I found that if I ran the performance scripts too many times I ran out of memory even with the 4 GB limit, so if you plan to load millions of records using the provided samples you should consider having more than 4 GB of memory.

Next I installed the AdventureWorks2012 database and upgraded it using the SQLCMD script provided in the zip file. I used the link provided in the Word document to successfully create the AdventureWorks2012 database on my SQL Server 2014 CTP2 instance. Make sure you follow the instructions when creating your AdventureWorks2012 database from the backup, especially the step to set the database owner to a login on your SQL Server 2014 CTP2 instance. I didn't do this at first and found that the SQLCMD script failed; I set my database owner to SA. Once I had my AdventureWorks2012 database set up correctly I ran the SQLCMD script. This script modifies the AdventureWorks2012 database to support In-Memory tables and then creates a number of empty In-Memory OLTP tables and natively compiled stored procedures. This is where I originally had problems running the performance stress test script when my VM memory limit was 2 GB.
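The two pieces of setup described above, setting the owner and enabling the database for In-Memory tables, can be sketched as follows. The filegroup name and container path here are my own illustrative choices, not necessarily the ones the sample script uses:

```sql
-- Set the database owner to SA, as the sample documentation requires
ALTER AUTHORIZATION ON DATABASE::AdventureWorks2012 TO sa;

-- Add a memory-optimized filegroup and a container for it
-- (filegroup name and file path are illustrative)
ALTER DATABASE AdventureWorks2012
    ADD FILEGROUP AdventureWorks2012_mod CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE AdventureWorks2012
    ADD FILE (NAME = 'AdventureWorks2012_mod',
              FILENAME = 'C:\Data\AdventureWorks2012_mod')
    TO FILEGROUP AdventureWorks2012_mod;
```

A database needs exactly one MEMORY_OPTIMIZED_DATA filegroup before any memory-optimized tables can be created in it, which is why the SQLCMD script performs this step first.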

By reading the documentation about the structure of the samples you will find that new tables and stored procedures will be created in the Production and Sales schemas, and a Demo schema will also be created. These new tables and stored procedures are very similar to existing AdventureWorks2012 objects, but have been tweaked to show off the In-Memory OLTP functionality in SQL Server 2014 CTP2. The Demo schema is used to support the stress test process that drives inserting and updating records in the new objects, to show off the performance benefits of In-Memory tables and natively compiled stored procedures.

Exploring the New In-Memory Tables Created

When the SQLCMD script is run the following tables are created:


The objects that end in "_inmem" are the In-Memory OLTP tables created by the SQLCMD script. For each of these In-Memory OLTP tables a companion disk-based table ending in "_ondisk" is also created. The script creates both sets of tables so that when running the stress tests you can compare results between each similar pair of tables. The last table created, Demo.DemoSalesOrderDetailSeed, is used to support the stress test.
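If you would rather confirm the table list with a query than by reading the script, sys.tables in SQL Server 2014 exposes an is_memory_optimized flag; a quick sketch, run in the AdventureWorks2012 database:

```sql
-- List the sample tables and whether each is memory-optimized
SELECT name, is_memory_optimized, durability_desc
FROM sys.tables
WHERE name LIKE '%\_inmem' ESCAPE '\'
   OR name LIKE '%\_ondisk' ESCAPE '\';
```

The durability_desc column is also worth a look: it shows whether a memory-optimized table persists both schema and data (SCHEMA_AND_DATA) or schema only.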

Let's compare a pair of these tables to see how the CREATE TABLE statements differ for an In-Memory OLTP table versus a disk-based table. Here are the two CREATE TABLE statements:

In-Memory Table:

CREATE TABLE [Sales].[SalesOrderDetail_inmem](
       [SalesOrderID] uniqueidentifier NOT NULL INDEX IX_SalesOrderID HASH WITH (BUCKET_COUNT=1000000),
       [SalesOrderDetailID] [int] NOT NULL,
       [CarrierTrackingNumber] [nvarchar](25) NULL,
       [OrderQty] [smallint] NOT NULL,
       [ProductID] [int] NOT NULL INDEX IX_ProductID HASH WITH (BUCKET_COUNT=10000000),
       [SpecialOfferID] [int] NOT NULL,
       [UnitPrice] [money] NOT NULL,
       [UnitPriceDiscount] [money] NOT NULL CONSTRAINT [IMDF_SalesOrderDetail_UnitPriceDiscount] DEFAULT ((0.0)),
       [ModifiedDate] [datetime2] NOT NULL,
       -- closing lines reconstructed from the script's pattern: the primary key
       -- is the table's third hash index, followed by the MEMORY_OPTIMIZED clause
 CONSTRAINT [IMPK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID] PRIMARY KEY
       NONCLUSTERED HASH ([SalesOrderID], [SalesOrderDetailID]) WITH (BUCKET_COUNT=10000000)
) WITH (MEMORY_OPTIMIZED=ON)

Disk Based Table:

CREATE TABLE [Sales].[SalesOrderDetail_ondisk](
       [SalesOrderID] uniqueidentifier NOT NULL,
       [SalesOrderDetailID] [int] NOT NULL,
       [CarrierTrackingNumber] [nvarchar](25) NULL,
       [OrderQty] [smallint] NOT NULL,
       [ProductID] [int] NOT NULL INDEX IX_ProductID NONCLUSTERED,
       [SpecialOfferID] [int] NOT NULL,
       [UnitPrice] [money] NOT NULL,
       [UnitPriceDiscount] [money] NOT NULL CONSTRAINT [ODDF_SalesOrderDetail_UnitPriceDiscount]  DEFAULT ((0.0)),
       [ModifiedDate] [datetime2] NOT NULL ,
 CONSTRAINT [ODPK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID] PRIMARY KEY
       -- closing lines reconstructed: a clustered primary key on the two key columns
       CLUSTERED ([SalesOrderID] ASC, [SalesOrderDetailID] ASC)
)

Here are the differences you should note between these two CREATE TABLE statements:

  • There are three different HASH indexes on the _inmem table, which are similar to the indexes created on the _ondisk table. Note the BUCKET_COUNT on each index, and that they are not all the same. To find out more about BUCKET_COUNT values and how they should be sized, read my article titled "Introduction to SQL Server 2014 CTP1 Memory-Optimized Tables".
  • The _inmem table has a "WITH (MEMORY_OPTIMIZED=ON)" clause to identify that it is to be created as an In-Memory OLTP table.

Other than those two differences the CREATE TABLE statements are the same. This demonstrates that converting a disk-based table to an In-Memory OLTP table doesn't take much additional DDL. Keep in mind that In-Memory tables can't have FOREIGN KEY constraints, CHECK constraints, or computed columns, which is why none appear in the two CREATE TABLE statements.
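Once the tables hold data, you can check whether a BUCKET_COUNT is sized reasonably by querying the hash index statistics DMV; a high empty-bucket count or long average chains are the signals to watch. A sketch, run in the AdventureWorks2012 database:

```sql
-- Inspect hash index bucket usage for the memory-optimized tables
SELECT OBJECT_NAME(hs.object_id) AS table_name,
       i.name                    AS index_name,
       hs.total_bucket_count,
       hs.empty_bucket_count,
       hs.avg_chain_length,
       hs.max_chain_length
FROM sys.dm_db_xtp_hash_index_stats AS hs
JOIN sys.indexes AS i
  ON i.object_id = hs.object_id AND i.index_id = hs.index_id;
```

Roughly speaking, a BUCKET_COUNT well above the number of distinct key values wastes memory, while one well below it produces long chains and slower lookups.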

If you explore the SQLCMD script you will find it creates the following stored procedures:


Note there are a number of different stored procedures with similar names. Those that end with "_ondisk" reference the disk-based tables. The ones that end in "_inmem" reference the In-Memory OLTP tables. The ones that end in "_native" are natively compiled stored procedures. By having these similar stored procedures, the sample stress test can demonstrate the performance difference between using disk-based tables and using In-Memory OLTP tables with a natively compiled stored procedure. All of the stored procedures created in the Demo schema are used to drive the script that populates the samples.
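For reference, the shell of a natively compiled stored procedure looks noticeably different from a regular one. The sketch below shows the options SQL Server 2014 requires; the procedure name and body are my own illustration, not taken from the sample:

```sql
-- Skeleton of a natively compiled stored procedure (name and body illustrative)
CREATE PROCEDURE Sales.usp_ExampleLookup_native (@SalesOrderID uniqueidentifier)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- Natively compiled procedures may only reference memory-optimized tables
    SELECT SalesOrderDetailID, OrderQty, UnitPrice
    FROM Sales.SalesOrderDetail_inmem
    WHERE SalesOrderID = @SalesOrderID;
END
```

NATIVE_COMPILATION, SCHEMABINDING, an EXECUTE AS clause, and the ATOMIC block with isolation level and language settings are all mandatory; the procedure is compiled to machine code at creation time rather than interpreted at run time.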

Running the Performance Tests for INSERTs Only

To prove out how much better In-Memory OLTP tables perform than disk-based tables I need to run a stress test. This test will INSERT a number of records into the In-Memory and disk-based tables and record the elapsed time each run takes. To do this I will use the OSTRESS utility.

The OSTRESS utility is a command-line utility that Microsoft produced for stress-testing SQL Server. It takes a SQL Server command and runs it against a database multiple times concurrently. By using this tool I can simulate many users performing the same operations. To download this tool you can use the following link:


For my test I will be running the sample stored procedure Demo.usp_DemoInsertSalesOrders. This stored procedure accepts three parameters: @use_inmem, @order_count and @include_update. The @use_inmem parameter identifies whether the stored procedure should insert rows into the In-Memory OLTP or the disk-based tables: passing a 1 inserts rows into the In-Memory OLTP tables, and passing a 0 inserts rows into the disk-based tables. The @order_count parameter identifies the number of SalesOrderHeader rows to insert. The final parameter, @include_update, determines whether only INSERT statements should be run, or both INSERT and UPDATE statements: passing a 1 performs INSERT and UPDATE statements, while passing a 0 performs only INSERT statements.
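Before wrapping the procedure in OSTRESS it is worth running it once directly to verify the install; a single-threaded call matching the parameters described above might look like this:

```sql
-- One In-Memory run: ~1,000 orders, INSERT statements only
EXEC Demo.usp_DemoInsertSalesOrders
     @use_inmem = 1,
     @order_count = 1000,
     @include_update = 0;
```

If this completes without errors, the same EXEC string can be handed to OSTRESS to multiply it across concurrent threads.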

For my test I'm going to have each thread INSERT 999 rows (more on that number below), with 100 concurrent threads. I will run this test using OSTRESS from the RML command prompt. To do this I start the RML utility by entering "RML" on the Start menu, which brings up the RML command prompt. From the RML prompt I run the following command to start the stress test of loading rows into the In-Memory OLTP sample tables:

ostress.exe -S. -E -dAdventureWorks2012 -Q"EXEC Demo.usp_DemoInsertSalesOrders @use_inmem = 1, @order_count=1000, @include_update=0" -n100

This command executes the stored procedure usp_DemoInsertSalesOrders simultaneously in 100 different threads, where each execution loads 999 SalesOrderHeader rows along with their associated SalesOrderDetail rows. Note that even though I specified the @order_count parameter as 1000, the usp_DemoInsertSalesOrders stored procedure I downloaded starts its loop index at 1, causing it to INSERT only 999 rows.

Below is a Performance Monitor graph that captured two performance counters while I ran the above OSTRESS command:

Performance Monitor graph

The two counters were "% Processor Time" and "Latch Waits/sec". The % Processor Time counter is shown in red, whereas the Latch Waits/sec counter is highlighted, so it is shown in black.

Below is the last of the output from the OSTRESS command that I ran:

Output from the OSTRESS command

This output shows that the elapsed time for this first test, using In-Memory OLTP tables, was only 11.560 seconds. Now I'm going to run the test against the disk-based tables.

To run the OSTRESS test for the disk based tables I’m going to run the following command:

ostress.exe -S. -E -dAdventureWorks2012 -Q"EXEC Demo.usp_DemoInsertSalesOrders @use_inmem = 0, @order_count=1000, @include_update=0" -n100

Basically the only change I made to this command was to change the @use_inmem parameter from 1 to 0.

Below is the Performance Monitor graph that was created during the time that my second test was run:

Performance Monitor graph

By reviewing this graph and comparing the results you can note a couple of differences. First, the disk-based test ran at around 50% CPU, while the In-Memory test consumed more than 80% CPU. Second, the disk-based test pegged the Latch Waits/sec counter at 100% for the duration of the test, whereas the In-Memory OLTP test had no latch waits at all. Remember, In-Memory OLTP tables are latch and lock free.

Here is the last line of the OSTRESS run of the disk based test.

Last line of the OSTRESS output

Here you can see the disk-based test ran for 1 minute and 3.841 seconds. Therefore, on my machine and for my test, the In-Memory OLTP tables provided a 5.52x improvement over the disk-based tables (63.841 / 11.560 ≈ 5.52).

One thing I found out about my test environment was that I had to be careful not to run out of memory. If I ran the above In-Memory test too many times, or picked a really large number for the @order_count parameter, my VMware virtual machine could run out of memory and the OSTRESS test would start getting errors.
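If you want to watch how close you are getting to that limit, SQL Server 2014's In-Memory OLTP DMVs report per-table memory consumption; a sketch, run in the sample database:

```sql
-- Memory currently allocated to / used by each memory-optimized table
SELECT OBJECT_NAME(object_id)          AS table_name,
       memory_allocated_for_table_kb,
       memory_used_by_table_kb
FROM sys.dm_db_xtp_table_memory_stats
ORDER BY memory_used_by_table_kb DESC;
```

Running this between stress test iterations makes it easy to see when the sample tables are approaching the VM's memory limit.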

Microsoft provided the Demo.usp_DemoReset stored procedure to reset the database tables.  This stored procedure removes the rows from the In-Memory and disk based SalesOrderHeader and SalesOrderDetail tables.  If you want to run these tests multiple times then I suggest that you run this stored procedure between each set of tests.

Testing Out In-Memory Tables

The sample scripts and database that Microsoft provided are a good starting point for looking at the performance benefits of In-Memory tables. I didn't explore the update parameter in this article, but I would suggest running your own tests using different permutations of these parameters. By doing this you will be able to measure the performance difference between In-Memory and disk-based tables. If you have been wanting a way to test out In-Memory tables and benchmark their performance against disk-based tables, I suggest downloading the CodePlex samples identified above and running a few tests.
