Backing Up SQL Server Databases Hosted on Azure VMs

In the past we’ve discussed the automated backup capabilities of SQL Server instances deployed to Azure virtual machines. These capabilities rely on either a dedicated Azure VM Agent extension or on system stored procedures introduced in SQL Server 2014, in order to transfer backup files directly into Azure Storage blobs. In this article, we will provide a more comprehensive overview of the different backup options applicable to Azure IaaS SQL Server workloads.

In general, you have the following options for backing up databases residing on SQL Server instances that are hosted by Azure virtual machines:

  • Native SQL Server backup directly to an Azure Storage blob. This functionality, commonly referred to as backup to URL, was introduced in SQL Server 2014 and further enhanced in SQL Server 2016. SQL Server 2014 supported only page blobs; SQL Server 2016 allows you to choose between page and block blobs. The choice of blob type has implications for the maximum backup size and throughput. More specifically, when using page blobs, the resulting backup cannot be larger than 1 TB, since this is the maximum page blob size. In addition, Standard storage throttles per-blob throughput at about 60 MB/s (backup to Premium storage page blobs is currently not supported). You can eliminate these restrictions by using block blobs instead. With this approach, it becomes possible to stripe an individual backup across up to 64 block blobs. While the maximum supported size of a block blob is currently 4.77 TB, SQL Server interacts with Azure Storage via an older API that restricts the size of a block blob to about 195 GB, yielding a maximum backup size of 12.8 TB. Because striping spreads writes across multiple blobs, you also benefit from increased throughput.
  • Native SQL Server backup to a local file system volume. Such a volume is created by using one or more virtual disks attached to the Azure VM hosting the SQL Server instance. Starting with Windows Server 2012, the recommended way to create multi-disk volumes in Azure VMs is to use Storage Spaces with a simple layout, which corresponds to traditional disk striping without parity. To maximize performance, you should ensure that the number of columns in the stripe matches the number of underlying disks. For up to 8 disks, this is configured automatically when you create a Storage Spaces-based volume via Server Manager; when dealing with a larger number of disks, you will need to use Windows PowerShell instead and explicitly specify the number of columns. The capacity of the volume is determined by the number of disks and their individual sizes. The maximum size of a data disk matches the maximum size of a page blob (1 TB), since virtual disk files are stored as page blobs. The largest Azure VMs (G5 and GS5) support up to 64 data disks.

    When attaching data disks to the Azure VM, you have the choice of using either managed or unmanaged disks. The former eliminates the need to consider storage account constraints. For example, with unmanaged Standard storage disks, you need to keep in mind that a single storage account has a limit of 20,000 IOPS, which restricts the number of disks that should be stored in the same storage account to 40 (since each Standard storage disk offers roughly 500 IOPS). With managed disks, the platform automatically places disks across multiple storage accounts in an optimal manner.

    From the disk I/O standpoint, multi-disk volume characteristics depend on the storage type. With Standard storage, the overall throughput of a single volume is the aggregate of the throughput of the individual disks, so to calculate the maximum throughput you simply multiply the number of disks by 60 MB/s. With Premium storage, determining the supported throughput requires taking into account not only the number of disks and their individual I/O characteristics, but also the Azure VM size (throughput and request rate are throttled at the virtual machine level; for specifics, refer to the Microsoft Azure documentation). Note that, in the context of a local SQL Server backup, reads from the source disks as well as writes to the target disks count towards the Azure VM bandwidth limit.

    To protect the local backups, you should copy them to long-term storage replicated across two Azure regions. One way to accomplish this is to use Azure Backup with a geo-redundant Recovery Services vault. While you can potentially place the virtual machine disk files in a geo-replicated Standard storage account (Premium storage supports only locally redundant replication), this applies only to single-disk volumes (multi-disk Storage Spaces-based volumes with disks stored in geo-replicated Azure Storage accounts are not supported), which limits the backup size to 1 TB.

    An interesting variant of this approach involves backing up to the local SSD-based temporary disk, which is available with the majority of Azure VM sizes. The temporary disk (the D: drive on Windows) should not, as the name indicates, be used for any content that must persist throughout the Azure VM lifetime (including such events as a failure of the underlying Hyper-V host). However, when using Premium storage data disks, there is a significant performance benefit to relying on it as an intermediate store for local backups, which you can subsequently copy to persistent, long-term storage. The benefit stems from the fact that usage of the local SSD disk does not count towards the Azure VM's Premium storage-related I/O limits mentioned above. This means that you might be able to fully utilize the throughput of the local SSD storage without affecting the throughput of the Premium storage-based disks, as long as you do not rely on the local SSD storage for Premium storage caching.

  • SQL Server file-level backup leverages the ability to work independently with individual SQL Server data files and filegroups. As a result, you gain more flexibility in scheduling backup operations and placing backup files. On the other hand, this approach also increases management overhead, so you should carefully weigh whether its advantages justify that cost. For more information, refer to Microsoft Docs.

  • Azure Storage-based snapshot backup involves taking a snapshot of the individual blobs representing the Azure VM disks. While convenient and fast (a snapshot is practically instantaneous), this approach has a number of limitations, and its viability for protecting SQL Server workloads is relatively limited. More specifically, snapshots are only disk-consistent, which means that in order to preserve the transactional integrity of your databases, you should always stop SQL Server prior to carrying out this operation. In addition, there is no support for transaction log backups and restores, so this approach is effectively suitable only for databases configured with the Simple recovery model. This, in turn, increases the Recovery Point Objective (RPO) along with the associated potential data loss. Finally, note that multi-disk Azure VMs add complexity, since each disk must be snapshotted separately. For more information regarding creating a snapshot of a blob, refer to Microsoft Docs.
  • Azure VM-level backup is an option included as part of the Azure Backup offering, which eliminates some of the shortcomings of the Azure Storage-based snapshot backups described above. It performs an application-consistent backup of all disks attached to an Azure VM by leveraging the Volume Shadow Copy Service (VSS) running within the operating system being backed up. However, at the time of writing, it supports only virtual machine-level restores and is limited to a single scheduled backup per day. You might, however, find it handy in scenarios that require creating a replica of your existing deployment (for example, for testing or development purposes).
  • Microsoft Azure Backup Server (MABS) is another approach to backing up application workloads that leverages Azure Backup capabilities. In the context of the topic covered in this article, it involves deploying an Azure VM on which you install the MABS component. MABS is functionally equivalent to System Center Data Protection Manager (with the exception of tape support, since in this case an Azure Recovery Services vault serves as the long-term storage). It provides support for application-consistent backups in the disk-to-disk-to-cloud configuration. For more information, refer to the Microsoft Azure documentation.
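
To make the backup to URL option above more concrete, the following T-SQL sketch stripes a backup across two block blobs, which is the mechanism that raises the size and throughput ceilings described earlier. The storage account, container, database name, and SAS token are placeholders; the WITH options shown are commonly used with block blob backups, not required values.

```sql
-- Create a credential holding a Shared Access Signature (SQL Server 2016+).
-- Using a SAS credential causes SQL Server to write block blobs; a credential
-- based on the storage account key would produce a page blob instead.
-- The account, container, and token below are hypothetical placeholders.
CREATE CREDENTIAL [https://mystorageacct.blob.core.windows.net/backups]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
    SECRET = 'sv=2015-12-11&sr=c&sig=...';  -- SAS token, without the leading '?'

-- Stripe the backup across two block blobs; up to 64 URLs may be listed.
BACKUP DATABASE [SalesDb]
TO  URL = 'https://mystorageacct.blob.core.windows.net/backups/SalesDb_1.bak',
    URL = 'https://mystorageacct.blob.core.windows.net/backups/SalesDb_2.bak'
WITH COMPRESSION, MAXTRANSFERSIZE = 4194304, BLOCKSIZE = 65536, FORMAT, STATS = 10;
```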
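
The file-level backup option can be sketched in T-SQL as well; here a single filegroup is backed up to a local volume. The database, filegroup, and path names are hypothetical.

```sql
-- Back up only one filegroup rather than the entire database. Under the
-- full recovery model, transaction log backups are still required to keep
-- the database as a whole restorable.
BACKUP DATABASE [SalesDb]
    FILEGROUP = 'Archive'
TO DISK = 'F:\Backups\SalesDb_Archive.bak'
WITH COMPRESSION, STATS = 10;
```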

In our upcoming articles, we will review these options in more detail.

Marcin Policht