In-Memory OLTP Samples for SQL Server 2014

On November 4th, Microsoft released a new set of samples targeted at In-Memory OLTP tables in SQL Server 2014. In this article I will explore and explain how to use these new samples. If you haven’t yet looked into the new In-Memory OLTP feature in SQL Server 2014 CTP2, you should consider downloading SQL Server 2014 CTP2 and these samples to better understand this new performance-related feature coming in the next release of SQL Server.

How to Get the New Samples, and Where to Start

I’m sure you all want to know first and foremost where you can get these new samples.  They are available on CodePlex at the following location:  http://msftdbprodsamples.codeplex.com/releases/view/114491

When you download the samples from this location you will get a 46 KB zip file that contains a Word document and a SQLCMD script. The Word document explains how to install and use the samples, and the SQLCMD script upgrades the AdventureWorks2012 database to use In-Memory tables and then creates some sample tables and stored procedures.
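If you haven’t run a SQLCMD script before, you run it from a command prompt rather than from a query window. Here is a minimal sketch of how that might look; the script’s actual file name inside the zip may differ from the name I show here:

REM Run the sample SQLCMD script against the local instance using
REM Windows authentication; the script file name here is an assumption.
sqlcmd -S localhost -E -i aw_inmem_sample.sql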

You should start by reading the Word document, which is named “aw_inmem_sample.docx”. This document contains the following sections:

  • Prerequisites
  • Installing The AdventureWorks In-Memory OLTP Samples
  • Structure Of the Samples
  • Performance Measurements using Ostress

My Testing Environment

Before walking through installing the new In-Memory samples, let me describe the environment I will use for installing and testing them. I will be using my laptop, which has 8 GB of memory, an Intel i7 Q740 processor, and a single disk drive spinning at 7,200 RPM. On this machine I created a VMware virtual machine. The virtual machine is set up with two virtual hard drives, one used for the C drive and the other for the E drive. I allocated 4 GB of memory and 4 processors to this virtual machine to get the sample to work. When installing the AdventureWorks2012 sample database I placed the DATA file on the C drive and the LOG file on the E drive.

Walk-Through of Installing the Samples

In this section I will explain my experience as I installed these samples.

First I started by reading the Word document. The “Prerequisites” section clearly states that you need SQL Server 2014 CTP2 to run these samples, so I created a fresh VMware guest on my laptop running Windows Server 2012 R2. The document recommends setting up a machine the same size as your production machine. Clearly I couldn’t do that on my laptop, so I first set up a VM using the default memory limit of 2 GB. That turned out not to be enough memory to handle the samples. Remember that In-Memory OLTP tables consume memory, so when sizing your server’s memory you need to consider how many tables you plan to store in memory. If you plan to run these samples I suggest your machine have at least 4 GB of memory to start with. Even with the 4 GB limit I ran out of memory when I ran the performance scripts too many times, so if you plan to load millions of records using the provided samples you should consider having more than 4 GB of memory.
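If you want to keep an eye on how much memory the sample tables are consuming as you run the scripts, the sys.dm_db_xtp_table_memory_stats DMV reports per-table memory use. Here is a simple query I could have used to watch it (my own sketch, not part of the sample scripts):

-- Report how much memory each memory-optimized table has allocated and
-- is using, in KB, so you can see when you are approaching the VM's limit.
SELECT OBJECT_NAME(object_id) AS table_name,
       memory_allocated_for_table_kb,
       memory_used_by_table_kb
FROM sys.dm_db_xtp_table_memory_stats
ORDER BY memory_allocated_for_table_kb DESC;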

Next I installed the AdventureWorks2012 database and upgraded it using the SQLCMD script provided in the zip file. I used the link in the Word document to create the AdventureWorks2012 database on my SQL Server 2014 CTP2 instance. Make sure you follow the instructions when creating your AdventureWorks2012 database from the backup, especially the step to set the database owner to a login on your SQL Server 2014 instance. I didn’t do this at first and found the SQLCMD script failed, so I set my database owner to SA. Once I had my AdventureWorks2012 database set up correctly, I ran the SQLCMD script. This script modifies the AdventureWorks2012 database to support In-Memory tables and then creates a number of empty In-Memory OLTP tables and natively compiled stored procedures. This is where I originally had problems running the performance stress test script when my VM memory limit was 2 GB.
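If you run into the same ownership problem I did, one way to set the owner is with an ALTER AUTHORIZATION statement like the following (using SA as I did):

-- Make SA the owner of the restored database so the sample's SQLCMD
-- script can run without failing on ownership checks.
ALTER AUTHORIZATION ON DATABASE::AdventureWorks2012 TO sa;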

Reading the documentation about the structure of the samples, you will find that new tables and stored procedures are created in the Production and Sales schemas, and that a Demo schema is also created. These new tables and stored procedures are very similar to existing AdventureWorks2012 objects, but have been tweaked to show off the In-Memory OLTP functionality in SQL Server 2014 CTP2. The Demo schema supports the stress test process that drives inserting and updating records in the new objects to show off the performance benefits of In-Memory tables and natively compiled stored procedures.

Exploring the New In-Memory Tables Created

When the SQLCMD script is run the following tables are created:

Sales.SalesOrderHeader_inmem
Sales.SalesOrderDetail_inmem
Sales.ShoppingCartItem_inmem
Sales.SpecialOffer_inmem
Sales.SpecialOfferProduct_inmem
Production.Product_inmem
Sales.SalesOrderHeader_ondisk
Sales.SalesOrderDetail_ondisk
Sales.ShoppingCartItem_ondisk
Sales.SpecialOffer_ondisk
Production.Product_ondisk
Demo.DemoSalesOrderDetailSeed

The objects that end in “_inmem” are the In-Memory OLTP tables created by the SQLCMD script. For each of these In-Memory OLTP tables a companion disk-based table ending in “_ondisk” was also created. The SQLCMD script creates both sets of tables so that when you run the stress tests you can compare results between each pair of similar tables. The last table, Demo.DemoSalesOrderDetailSeed, is used to support the stress test.
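If you want to verify which of the new tables are actually memory-optimized, sys.tables in SQL Server 2014 exposes an is_memory_optimized column. Here is a quick query you could run (my own sketch, not part of the samples):

-- List the sample tables and show which ones are memory-optimized,
-- along with their durability setting (SCHEMA_AND_DATA or SCHEMA_ONLY).
SELECT SCHEMA_NAME(schema_id) + '.' + name AS table_name,
       is_memory_optimized,
       durability_desc
FROM sys.tables
WHERE name LIKE '%[_]inmem' OR name LIKE '%[_]ondisk'
ORDER BY table_name;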

Let’s compare a pair of these tables to identify how the CREATE TABLE statement for an In-Memory OLTP table differs from the one for a disk-based table. Here are the two CREATE TABLE statements:

In-Memory Table:

CREATE TABLE [Sales].[SalesOrderDetail_inmem](
       [SalesOrderID] uniqueidentifier NOT NULL INDEX IX_SalesOrderID HASH WITH (BUCKET_COUNT=1000000),
       [SalesOrderDetailID] [int] NOT NULL,
       [CarrierTrackingNumber] [nvarchar](25) NULL,
       [OrderQty] [smallint] NOT NULL,
       [ProductID] [int] NOT NULL INDEX IX_ProductID HASH WITH (BUCKET_COUNT=10000000),
       [SpecialOfferID] [int] NOT NULL,
       [UnitPrice] [money] NOT NULL,
       [UnitPriceDiscount] [money] NOT NULL CONSTRAINT [IMDF_SalesOrderDetail_UnitPriceDiscount]  DEFAULT ((0.0)),
       [ModifiedDate] [datetime2] NOT NULL ,
 
 CONSTRAINT [imPK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID] PRIMARY KEY NONCLUSTERED HASH 
(
       [SalesOrderID],
       [SalesOrderDetailID]
)WITH (BUCKET_COUNT=10000000)
) WITH (MEMORY_OPTIMIZED=ON)
 
 

Disk-Based Table:

CREATE TABLE [Sales].[SalesOrderDetail_ondisk](
       [SalesOrderID] uniqueidentifier NOT NULL,
       [SalesOrderDetailID] [int] NOT NULL,
       [CarrierTrackingNumber] [nvarchar](25) NULL,
       [OrderQty] [smallint] NOT NULL,
       [ProductID] [int] NOT NULL INDEX IX_ProductID NONCLUSTERED,
       [SpecialOfferID] [int] NOT NULL,
       [UnitPrice] [money] NOT NULL,
       [UnitPriceDiscount] [money] NOT NULL CONSTRAINT [ODDF_SalesOrderDetail_UnitPriceDiscount]  DEFAULT ((0.0)),
       [ModifiedDate] [datetime2] NOT NULL ,
 
 CONSTRAINT [ODPK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID] PRIMARY KEY  
(
       [SalesOrderID],
       [SalesOrderDetailID]
)
)

Here are the differences you should note regarding these two CREATE TABLE statements:

  • There are three different HASH indexes on the _inmem table, which are similar to the indexes created on the _ondisk table. Note the BUCKET_COUNT on each index, and how they are not all the same. To find out more about BUCKET_COUNT values and how they should be sized, read my article titled “Introduction to SQL Server 2014 CTP1 Memory-Optimized Tables”.
  • The _inmem table has a “WITH (MEMORY_OPTIMIZED=ON)” clause to identify that it is to be created as an In-Memory OLTP table.

Other than those two differences the CREATE TABLE statements are the same. This demonstrates that changing a disk-based table to an In-Memory OLTP table doesn’t require many DDL changes. Keep in mind that In-Memory tables can’t have FOREIGN KEY constraints, CHECK constraints, or computed columns, which is why none appear in the two CREATE TABLE statements.
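Once the tables are loaded, you can check whether the BUCKET_COUNT values were sized reasonably by looking at the sys.dm_db_xtp_hash_index_stats DMV. Roughly speaking, a large share of empty buckets suggests the count is higher than needed, while long chains suggest it is too low. A sketch of such a check:

-- Inspect bucket usage for each hash index on the memory-optimized
-- tables: empty_bucket_count and avg_chain_length indicate whether
-- the BUCKET_COUNT chosen in the CREATE TABLE was reasonable.
SELECT OBJECT_NAME(hs.object_id) AS table_name,
       i.name AS index_name,
       hs.total_bucket_count,
       hs.empty_bucket_count,
       hs.avg_chain_length,
       hs.max_chain_length
FROM sys.dm_db_xtp_hash_index_stats AS hs
JOIN sys.indexes AS i
    ON i.object_id = hs.object_id
   AND i.index_id = hs.index_id;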

If you explore the SQLCMD script you will find it creates the following stored procedures:

Sales.usp_InsertSalesOrder_inmem
Sales.usp_InsertSalesOrder_ondisk 
Sales.usp_UpdateSalesOrderShipInfo_native
Sales.usp_UpdateSalesOrderShipInfo_inmem
Sales.usp_UpdateSalesOrderShipInfo_ondisk
Demo.usp_DemoInitSeed
Demo.usp_DemoReset
Demo.usp_DemoInsertSalesOrders

Note that there are a number of stored procedures with similar names. Those that end with “_ondisk” reference the disk-based tables, the ones that end in “_inmem” reference the In-Memory OLTP tables, and the one that ends in “_native” is a natively compiled stored procedure. By having these similar stored procedures, the sample stress test can demonstrate the performance difference between using disk-based tables and using In-Memory OLTP tables with a natively compiled stored procedure. All of the stored procedures created in the Demo schema are used to drive the script that populates the samples.
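In case you haven’t seen one yet, a natively compiled stored procedure has a few required options that a normal stored procedure doesn’t. The sample’s usp_UpdateSalesOrderShipInfo_native is more involved than this, but the general shape looks like the following sketch (the procedure name and body here are my own illustration, not the sample’s code):

-- A minimal illustration of the natively compiled procedure syntax;
-- NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS, and the BEGIN ATOMIC
-- block with isolation level and language are all required.
CREATE PROCEDURE Sales.usp_NativeExample  -- hypothetical name
    @SalesOrderID uniqueidentifier
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

    -- Natively compiled procedures can only reference memory-optimized
    -- tables, such as the _inmem tables created by the sample script.
    SELECT SalesOrderDetailID, OrderQty, UnitPrice
    FROM Sales.SalesOrderDetail_inmem
    WHERE SalesOrderID = @SalesOrderID;
END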

Running the Performance Tests for INSERTs Only

To demonstrate how much the In-Memory OLTP tables outperform the disk-based tables, I need to run a stress test. This test will INSERT a number of records into the In-Memory and disk-based tables and record the elapsed time each test takes. To do this I will use the OSTRESS utility.

The OSTRESS utility is a command-line utility from Microsoft, shipped as part of the RML Utilities for SQL Server, that is used to stress test SQL Server. It takes a SQL Server command and runs it against a database multiple times concurrently, so I can use it to simulate many users performing the same operations. To download this tool you can use the following link:

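Once you have OSTRESS installed, a stress run is a single command line. The Word document spells out the exact commands to use for these samples; as an illustration only, a run that drives inserts through the sample procedures might look something like this (the procedure call and its omitted parameters are my own assumption, so treat this as a sketch of the OSTRESS switches rather than a command to copy):

REM -S = server, -E = Windows authentication, -d = database,
REM -n = number of concurrent connections, -r = iterations per connection,
REM -Q = the T-SQL to execute; the procedure call shown is illustrative.
ostress.exe -S. -E -dAdventureWorks2012 -n40 -r100 -Q"EXEC Demo.usp_DemoInsertSalesOrders"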
