Introduction to Azure SQL Database Scalability

One of the primary advantages of the Platform-as-a-Service (PaaS) offerings in Microsoft Azure is the ease with which scaling can be implemented. This typically applies to both vertical and horizontal scaling, allowing you to change (respectively) the current pricing tier (along with the corresponding performance targets and associated supplemental features) and the number of simultaneously running instances. While SQL Database facilitates both approaches, scaling it out is considerably more challenging, especially when compared with stateless services such as Azure Websites or Cloud Services. In this article, we provide a high-level overview of both vertical and horizontal scaling methods available with Azure SQL Database.

Let’s start by reviewing the scaling-up capabilities. Since this involves switching between pricing tiers and performance levels, we first need to identify the criteria used to distinguish between them. To simplify this differentiation, Microsoft introduced a new unit of measurement, referred to as the Database Throughput Unit (or simply DTU), which represents overall Azure SQL Database performance as a combination of processing power, amount of available memory, and I/O throughput. Each pricing tier (as well as each individual performance level within it) is allocated a specific number of DTUs. Each is also subject to a maximum database size limit, ranging from 2 GB to 500 GB.

In addition, it is worth noting that higher tiers offer extra resiliency by supporting asynchronous replication of committed transactions to other Azure regions (up to four), including the option to keep secondary replicas online for read access (which might be useful for reporting and data analysis). Another distinguishing factor is recoverability. While all SQL Databases are automatically backed up at regular intervals (with full backups taking place weekly, differential backups daily, and incremental backups every five minutes), the corresponding Point In Time Restore window during which backups can be restored varies depending on the pricing tier.

Effectively, Azure offers SQL Database in the following three pricing tiers (which replace the now-retired Web and Business tiers), each associated with different performance characteristics and resiliency-related features:

  • Basic – designated as B in the Preview portal – is geared towards small, single-user databases, typically ideal for development and testing. It is limited to 5 DTUs, its size cannot exceed 2 GB, and its Point In Time Restore window is limited to 7 days.
  • Standard – constitutes the most common choice for production databases supporting workgroup and web applications. It scales up to 250 GB in size, with support for Standard Geo-Replication and a Point In Time Restore window of 14 days. It also offers three performance levels, labeled S0, S1, and S2 in the Preview portal, which deliver 10 DTUs, 20 DTUs, and 50 DTUs, respectively.
  • Premium – is intended for enterprise-grade applications with high concurrency levels. It scales up to 500 GB in size, with support for Active Geo-Replication and a Point In Time Restore window of 35 days. Its performance levels – P1, P2, and P3, as displayed in the Preview portal – yield 100 DTUs, 200 DTUs, and 800 DTUs, respectively.

The pricing tier and the performance level can be changed practically instantaneously from either the Azure management portal or the Preview portal (although users might experience dropped connections during the switch, so you should code your applications to handle such drops gracefully). The process can be automated by leveraging the Set-AzureSqlDatabase PowerShell cmdlet with the -ServiceObjective and -Edition parameters (representing the performance level and pricing tier, respectively). When scaling down, ensure that the database complies with the restrictions of the target pricing tier (in particular, in regard to maximum size and replication scope).
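As an illustration, the following PowerShell sketch scales a database up to the Standard tier at the S2 performance level. The server and database names are hypothetical, and the snippet assumes the Azure PowerShell module of this era with an active subscription context already established:

```powershell
# Retrieve the service objective object corresponding to the S2 performance level
# ("myserver" and "mydb" below are placeholder names)
$s2 = Get-AzureSqlDatabaseServiceObjective -ServerName "myserver" `
          -ServiceObjectiveName "S2"

# Switch the database to the Standard tier at the S2 performance level
Set-AzureSqlDatabase -ServerName "myserver" -DatabaseName "mydb" `
    -Edition "Standard" -ServiceObjective $s2 -Force
```

The same pattern applies when scaling down, provided the database already satisfies the target tier's size and replication restrictions mentioned above.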

In addition to vertical scaling (which, understandably, is subject to the limitations imposed by pricing tiers and performance levels), you also have the option to scale horizontally by splitting designated tables across multiple database instances (referred to as partitions). This approach, also referred to as sharding, is significantly more challenging, since it is not directly facilitated by the Azure platform, but instead requires custom design and development, frequently necessitating modifications to the existing database structure and to database-bound applications. In general, there are three different methods of implementing sharding:

  • Federations – involve defining a federation scheme that includes a federation distribution key, which contains values of the data type used to differentiate between partitions (such as a customer ID or a product ID). The key determines the partition to which a specific data record is allocated. The federation key is part of the primary key in the federated tables split across the individual databases that the federation consists of. Federation metadata is stored in the federation root database, which serves as the logical endpoint for applications that access federated data (incidentally, a single root database can contain metadata about multiple federations). Effectively, the actual data location is irrelevant from the application perspective. Note that the individual databases that are federation members typically also contain non-federated (also known as reference) tables, which are not part of the federation. Such tables usually store additional data that supplements the federated content.

    It is important to realize that Azure-based federations have been deprecated along with the legacy pricing tiers (Web and Business). This leaves you with the other two horizontal scaling solutions for any new deployments.

  • Custom Sharding – similar to federations, requires a considerable amount of design and development work in order to implement custom methodology for data partitioning, distribution, and connection routing. You can find more information regarding this subject on the Microsoft Azure Blog.
  • Elastic Scale – currently in preview, leverages templates incorporated into .NET libraries to implement a database management infrastructure that simplifies the process of data partitioning (both horizontal and vertical) as well as request routing. The implementation can be automated via PowerShell scripts and the Azure Automation service. Data-bound applications can take advantage of the Elastic Scale APIs when accessing sharded databases. The solution also includes a migration utility, allowing you to transition from existing federation deployments.
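To make the custom sharding concept more concrete, the following PowerShell sketch illustrates one possible routing scheme, in which a sharding key (here, a customer ID) is mapped to a database connection string via a simple modulo calculation. All server and database names are hypothetical, and a production implementation would typically keep the shard map in a dedicated catalog database rather than hard-coded, so that shards can be added or rebalanced without redeploying the application:

```powershell
# Hypothetical shard map: each entry points to one partition of the customer data
$shards = @(
    "Server=shard0.database.windows.net;Database=Customers0",
    "Server=shard1.database.windows.net;Database=Customers1",
    "Server=shard2.database.windows.net;Database=Customers2"
)

function Get-ShardConnectionString {
    param([int]$CustomerId)
    # Modulo-based distribution: a given customer ID always routes to the same shard
    $index = $CustomerId % $shards.Count
    return $shards[$index]
}

# Customer 1042 routes to shard 1 (1042 % 3 = 1)
Get-ShardConnectionString -CustomerId 1042
```

The Elastic Scale libraries encapsulate this kind of shard-map management and connection routing for you, which is precisely what makes them less laborious than the custom approach.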

This concludes our introduction to the scalability features of Azure SQL Database. In upcoming articles published on this forum, we will focus on some of the more challenging aspects of horizontal data partitioning solutions that leverage Elastic Scale.


Marcin Policht
Being a long time reader, I'm proud to join the crowd of technology gurus gathered here. I have a fair share of Microsoft exams behind me, a couple of acronyms after my name - MCSE, MCSD, and MCT, decent familiarity with majority of MS BackOffice products (SMS, SQL, Exchange, IIS), programming and scripting languages (VB, C++, VBScript with wsh) and several years of practical experience with Windows environment administration and engineering. My focus these days is on SQL Server and Windows 2000, and I'll attempt to share the most interesting experiences with these products.
