Migrating SQL Server Databases to Windows Azure by Leveraging BACPAC Files

So far, we have been operating under the assumption that our existing schema and data are not fully compatible with the requirements imposed by the Windows Azure SQL Database Platform as a Service (PaaS) offering. We have described methods that can be used to identify and remediate such incompatibilities and illustrated how they can be applied to a sample database (refer to Migrating SQL Server Database to Windows Azure – Resolving Incompatibility Issues for more details). Now we will take a different approach and consider a scenario in which we intend to migrate on-premises data that is already cloud-compliant. To accomplish our objective, we will convert it into a BACPAC-formatted file.

The BACPAC format was introduced as an extension of the DAC (data-tier application) format, which facilitates automated deployment of SQL Server objects by packaging them into self-contained .DACPAC files. The original benefit of the .BACPAC format was its ability to contain data (in addition to the schema incorporated into .DACPAC files), which provided a convenient mechanism for performing cross-premises and intra-cloud database transfers. This advantage, however, was eliminated over a year ago (with the September 2012 release of the Data-Tier Application Framework, which extended the DACPAC format to allow for the inclusion of data). Considering that .DACPAC files can also be imported into SQL Server Data Tools projects for the purpose of schema modifications and other development work, the primary reason for using the .BACPAC format is its support in SQL Server 2012 Management Studio and the Windows Azure Management Portal (which we will demonstrate shortly). In our presentation, we will be working with our slightly modified version of the AdventureWorks2012 database (refer to our previous post for instructions on its implementation), but you are obviously free to use any database as long as it is Azure-compliant.

In order to export the schema and data from your sample database, display its context-sensitive menu in the Object Explorer window of SQL Server Management Studio. In the menu, you will find the Tasks item, which expands into a submenu containing the Export Data-tier Application item. Selecting it will activate a simple wizard. On the Settings tab of its Export Settings page, you will need to point to a location where the resulting BACPAC file should be stored, choosing either Save to local disk or Save to Windows Azure. In the case of the latter, you need to provide an existing Azure storage account, its container, and a name that the file will be given. You can also change the default location of the temporary file that is used during BACPAC generation. The Advanced tab allows you to designate the tables that will be included in the export.

For the sake of simplicity, we’ll choose the second of the two locations and create the BACPAC file directly in the Azure storage account. In case you do not have one readily available, launch your favorite Internet browser and sign in to the Windows Azure Management Portal. Click on the New entry in the lower left corner and navigate through Data Services and Storage to the Quick Create menu item. In the URL textbox, type in the name of a subdomain (note that you can use only lowercase letters and numbers) that will represent the namespace hosting your storage services. For example, if you choose mystorageaccount (this is purely hypothetical, since, as you can easily verify by looking at the visual indicator on the right side of the textbox, this name is already taken, so you will need to come up with something unique), then your Blobs, Tables, and Queues services would, respectively, have their endpoints set to http://mystorageaccount.blob.core.windows.net, http://mystorageaccount.table.core.windows.net, and http://mystorageaccount.queue.core.windows.net. In the same interface, you also need to assign a Location/Affinity Group. The former designates the region where the data center that will host your storage is located. The latter is a custom representation of a partition within a data center and can be created by clicking on the Add entry on the Affinity Groups tab of the Settings view in the Management Portal. Either of these two options allows you to ensure colocation of your data with other Windows Azure services, which is beneficial for both performance (due to minimized network latency) and cost (since data center egress traffic incurs additional charges) reasons.
While affinity groups are in general preferable (since they offer a more favorable degree of proximity when compared with the location-based approach), their benefits are not relevant in this particular context, since their scope is limited to Windows Azure compute and storage (so they are not applicable to SQL Databases). The third setting that you need to decide on when creating a storage account is Replication type. In general, you have the following choices:

  • Locally Redundant – a discounted option which offers data replicas within the same data center.
  • Geo-Redundant – the default that ensures that a copy of existing data resides in a different data center within the same region.
  • Read Access Geo-Redundant – in preview at the time of writing, this option allows for read-only access to a replica hosted in a separate data center in the same region.
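The account naming constraint and the endpoint pattern described above can be sketched in a few lines of code (a minimal illustration in Python; the 3-to-24 character length limit is an assumption based on the service's documented rules, and the account name is the same hypothetical one used earlier):

```python
import re

# Storage account names: lowercase letters and digits only; the 3-24
# character length limit is an assumption beyond the rules quoted above.
ACCOUNT_NAME_PATTERN = re.compile(r"^[a-z0-9]{3,24}$")

def service_endpoints(account_name: str) -> dict:
    """Return the Blob, Table, and Queue service endpoints for an account."""
    if not ACCOUNT_NAME_PATTERN.match(account_name):
        raise ValueError("invalid storage account name: %r" % account_name)
    return {
        service: "http://%s.%s.core.windows.net" % (account_name, service)
        for service in ("blob", "table", "queue")
    }

endpoints = service_endpoints("mystorageaccount")  # hypothetical name
print(endpoints["blob"])  # http://mystorageaccount.blob.core.windows.net
```

Note that a name such as My-Account would be rejected, since uppercase letters and hyphens are not permitted at the account level (unlike container names, which do allow hyphens).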

After you follow the Create Storage Account link in the lower right corner of the Azure Management Portal, you will need to wait a short while (typically less than a minute) for the new account to become available, as indicated by its Online status (keep in mind that you are limited to five storage accounts per subscription). At that point, you can further explore its properties by clicking on the new entry appearing in the Name column of the Storage view. On the Dashboard pane, you will find the list of endpoints for each of the three services (Blobs, Tables, and Queues). The Monitor tab allows you to track utilization statistics (collecting them will likely affect your subscription costs). The Configure tab allows you to switch between Replication options as well as turn monitoring and logging on or off. Take note of the Manage Access Keys icon displayed at the bottom of the window. Click on it and, in the resulting window, copy the Primary Access Key to the clipboard. We will need it to provide access to the storage account from within SQL Server Management Studio when exporting our sample BACPAC file to the newly created storage account. Next, switch to the last tab, labeled Containers, in order to create a container where blob data will be stored (and accessible via the http://storage_account_name.blob.core.windows.net/container_name/blob_name URL). Click the Create a Container link, which will launch the New container form, where you need to specify a name (between 3 and 63 lowercase characters, with numbers and hyphens allowed) and the desired Access control mechanism, which can take one of the following values:

  • Private – accessible to the account owner only.
  • Public Blob – available via anonymous requests, but limited to blobs within the container (without the ability to enumerate them).
  • Public Container – available via anonymous requests, including container and blob data (with the ability to enumerate blobs).
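The container naming rules and the blob URL pattern quoted above can likewise be expressed programmatically (a simplified sketch; the account, container, and blob names are hypothetical, and the service imposes a few additional rules, such as no leading or consecutive hyphens, that this check omits):

```python
import re

# Container names: 3-63 characters consisting of lowercase letters,
# numbers, and hyphens (a simplified check, per the rules quoted above).
CONTAINER_NAME_PATTERN = re.compile(r"^[a-z0-9-]{3,63}$")

def blob_url(account: str, container: str, blob: str) -> str:
    """Build a blob URL following the pattern
    http://storage_account_name.blob.core.windows.net/container_name/blob_name."""
    if not CONTAINER_NAME_PATTERN.match(container):
        raise ValueError("invalid container name: %r" % container)
    return "http://%s.blob.core.windows.net/%s/%s" % (account, container, blob)

# Hypothetical names used for illustration only.
print(blob_url("mystorageaccount", "bacpacs", "AdventureWorks2012.bacpac"))
```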

The first of these options is well suited for our sample scenario. Once you select it and click on the OK button, the new container will be provisioned within a few seconds. Now we are ready to return to the Export Settings page of the Export Data-tier Application wizard within SQL Server Management Studio. Select the Save to Windows Azure option and click on the Connect command button. In the resulting Connect to Windows Azure Storage dialog box, type the name you assigned to your account into the Storage account textbox (this should be just a single label, without the http:// prefix or the core.windows.net suffix) and paste the access key you copied earlier in the Windows Azure Management Portal into the Account Key textbox. Leave the two checkboxes, labeled Use secure endpoint (HTTPS) and Save account key, with their default settings (the first one enabled and the second disabled). Once you return to the Export Settings page of the wizard, you should be able to select the name of the newly created container in the Container listbox. Click on the Next command button to review the summary of your settings and complete the wizard by clicking on the Finish command button. Assuming that the export is successful, you should see the .bacpac file in the container when browsing its content in the Windows Azure Management Portal. Copy its URL, since you will need it shortly during the import operation.
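Incidentally, if you prefer scripting exports over clicking through the wizard, the Data-Tier Application Framework also includes the SqlPackage.exe command-line utility, whose Export action produces the same kind of .bacpac file locally. A minimal sketch (Python is used here merely to assemble the command; the installation path, server name, database name, and target file are all hypothetical values to adjust for your environment):

```python
import subprocess  # only needed if you uncomment the call at the bottom

# Hypothetical paths and names - substitute your own; the SqlPackage.exe
# location varies with the SQL Server / DAC Framework version installed.
sqlpackage = r"C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe"
command = [
    sqlpackage,
    "/Action:Export",                           # export schema and data to .bacpac
    "/SourceServerName:localhost",
    "/SourceDatabaseName:AdventureWorks2012",
    r"/TargetFile:C:\Temp\AdventureWorks2012.bacpac",
]
print(" ".join(command))
# subprocess.call(command)  # uncomment to actually run the export
```

The resulting local file can then be uploaded to blob storage, as described later in this article.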

Switch back to the Management Portal and click on the New icon in the ribbon. Navigate from Data Services through SQL Database to the Import entry. Selecting it will trigger the Import Database window, where you will need to paste the recently copied BACPAC URL, type in a Database Name (it will default to match the one you exported), and specify the target Azure-resident instance of SQL Server (including admin credentials). You can also opt to Configure Advanced Database Settings, which include the database edition (Web or Business) and the corresponding maximum size. The new database should appear with an Online status in the portal interface once the import is completed.

If you decide to store the .bacpac file locally when running the Export Data-tier Application wizard, you will need to upload it to the blob container in Windows Azure afterwards. This can be accomplished directly from the Server Explorer window of Visual Studio 2012 or Visual Studio 2013 by leveraging the functionality incorporated into Windows Azure SDK 2.2 (available from the Downloads section of the Windows Azure web site). Alternatively, if you are looking for a command-line option, you can take advantage of the AzCopy utility, downloadable from the MSDN Web site.
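If neither of those tools suits you, the upload itself amounts to a single Put Blob operation against the Blob service REST API. The sketch below prepares such a request, assuming you have already generated a Shared Access Signature for the container; the account, container, file name, and SAS token are all hypothetical placeholders, and the payload here is a stand-in for the bytes of your exported .bacpac file:

```python
import urllib.request

def build_put_blob_request(blob_url_with_sas: str, data: bytes):
    """Prepare a Put Blob request; the x-ms-blob-type header is required
    by the Blob service REST API when creating a blob."""
    req = urllib.request.Request(blob_url_with_sas, data=data, method="PUT")
    req.add_header("x-ms-blob-type", "BlockBlob")
    return req

# Hypothetical values - substitute your own account, container, blob name,
# and SAS token, and read the payload from your locally exported .bacpac.
url = ("http://mystorageaccount.blob.core.windows.net/bacpacs/"
       "AdventureWorks2012.bacpac?<sas_token>")
request = build_put_blob_request(url, b"...bacpac bytes...")
# urllib.request.urlopen(request)  # uncomment to perform the upload
```

For files larger than the single-request limit, you would need to switch to block-based uploads (Put Block / Put Block List), which is one reason the ready-made tools mentioned above are usually more convenient.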

This concludes our overview of the BACPAC files in the context of migrating SQL Server databases to Windows Azure. We will continue our review of methods that provide the ability to transfer data to and from SQL Databases in our upcoming articles.

See all articles by Marcin Policht
