Implementing Azure SQL Database Elastic Database Jobs

In an article recently published on this forum, we described the basic characteristics of Azure SQL Database elastic database jobs. As we pointed out, their name is somewhat misleading, since it implies that they are intended specifically for Elastic Database pools. However, their scope can include any custom-defined group of Azure SQL Databases, although the implementation in such cases is not yet available via the Azure portal and instead requires the use of Azure PowerShell (or the equivalent REST API). In this article, we will take you through the sequence of steps illustrating this scenario.

Elastic Database jobs compensate for the absence in Azure SQL Database of SQL Server Agent, which in on-premises deployments is commonly used to carry out ad-hoc and scheduled automated management tasks (such as rebuilding indexes, updating reference data, or collecting customer telemetry) across multiple databases. Obviously, this functionality is critical to minimizing administrative overhead, which incidentally is one of the main selling points of Azure Platform-as-a-Service (PaaS) offerings (of which Azure SQL Database is the prime example). In its most basic form, incorporated into the Azure portal, Elastic Database jobs allow you to execute Transact-SQL scripts against a collection of databases grouped together into an Elastic Database pool. With the Azure PowerShell-based implementation, you gain the additional advantages of being able to deploy data-tier applications (packaged as DACPAC files) and to target any Azure SQL database.

In our presentation, we will assume that you have already implemented the Azure services that provide compute and storage resources for Elastic Database jobs. More specifically, jobs run on a customer-provisioned Azure PaaS Cloud Service with one or more worker roles hosted on virtual machines, sized according to your performance and availability needs. The worker roles rely on a dedicated instance of an Azure SQL Database (referred to as the control database) to store their configuration settings and job metadata. They also take advantage of a dedicated Azure Service Bus namespace that provides inter-component messaging capabilities.

With these prerequisites in place, implementing Elastic Database jobs consists of the following high-level steps, each of which is illustrated with a sample PowerShell sketch after the list:

  • Signing in to the target Azure subscription (the one hosting the Elastic Database jobs components and the target databases). This is accomplished by running the Add-AzureAccount Azure PowerShell cmdlet. You will be prompted for the credentials of either a Microsoft Account or an organizational account (i.e. an account defined in the Azure Active Directory instance associated with your subscription) with Service Administrator or co-administrator privileges (although it is possible to delegate the relevant permissions to a non-privileged account by leveraging Role-Based Access Control). If the account you provide has access to multiple subscriptions, you will need to identify the name or ID of the target subscription (by relying on the output of the Get-AzureSubscription cmdlet) and then select it by running the Select-AzureSubscription cmdlet.
  • Establishing a connection to the control database. This requires invoking the Use-AzureSqlJobConnection cmdlet with the -CurrentAzureSubscription parameter. You will be prompted for the credentials to the control database, which were assigned during its installation (this step and the previous one appear in the first sketch following this list).
  • Defining the target for job execution. This involves running the New-AzureSqlJobTarget cmdlet with either the -ShardMapManagerServerName parameter, the -CustomCollectionName parameter, or the -ServerName and -DatabaseName parameters, corresponding (respectively) to the three different types of targets, namely Elastic Scale shard maps, custom collections, and individual Azure SQL databases. The first of these targets requires an existing shard map (which implies that you have scaled a group of Azure SQL databases horizontally through sharding). The second offers full flexibility in selecting the databases to be targeted by your jobs, but necessitates populating the collection by running the Add-AzureSqlJobChildTarget cmdlet. The third gives you the ability to run jobs directly against individual databases (see the second sketch below).
  • Storing job credentials. This step is required in order to establish the database-level security context in which the job executes. The credentials are stored in the control database in encrypted form. Start by running the Get-Credential cmdlet (at which point you will be prompted for a user name and the corresponding password). The resulting Windows PowerShell PSCredential object is subsequently referenced as one of the parameters of the New-AzureSqlJobCredential cmdlet (third sketch below).
  • Defining an execution policy. This step is optional, since it is possible to leverage the default job execution policy. Creating a custom one allows you to modify the execution timeout, initial retry interval, maximum retry interval, and maximum retry attempts (set by default to 1 week, 100 milliseconds, 30 minutes, and 2,147,483,647, respectively). These settings are referenced by parameters of the New-AzureSqlJobExecutionPolicy cmdlet (fourth sketch below).
  • Creating a job execution schedule. This allows you to specify either a recurring interval or a specific point in time; the resulting schedule can subsequently be assigned to one or more jobs. This assignment is not direct, but is instead handled by using a job trigger (fifth sketch below).
  • Creating a job. The specifics of implementing this step depend on whether your job runs a Transact-SQL script or a DACPAC file, although in either case you need to run the New-AzureSqlJobContent cmdlet to generate the job content, which is then referenced by the New-AzureSqlJob cmdlet (sixth sketch below).
  • Creating a job trigger. This step creates a mapping between a schedule and a job and is implemented in the form of the New-AzureSqlJobTrigger cmdlet, which takes two parameters: -JobName and -ScheduleName.
  • Executing the job. This can be accomplished either by assigning a schedule (leveraging a job trigger as an intermediary) or by invoking the Start-AzureSqlJobExecution cmdlet. You also have the option of designating a custom execution policy at this point (the final sketch below covers both the trigger-based and on-demand approaches).
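
To make these steps more concrete, the sketches below walk through them in order using the cmdlets named above. All subscription, server, database, credential, schedule, and job names are hypothetical placeholders chosen for illustration; treat these as minimal sketches rather than production-ready scripts. The first sketch covers signing in and connecting to the control database:

    # Sign in to the target Azure subscription (prompts for Microsoft Account
    # or organizational account credentials)
    Add-AzureAccount

    # If the account has access to multiple subscriptions, identify the target
    # one and select it ("Production" is a placeholder name)
    Get-AzureSubscription
    Select-AzureSubscription -SubscriptionName "Production"

    # Connect to the control database (prompts for the credentials assigned
    # during its installation)
    Use-AzureSqlJobConnection -CurrentAzureSubscription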
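
The second sketch defines job execution targets. It creates a custom collection, populates it with two hypothetical databases, and also shows targeting a single database directly:

    # Create a custom collection target
    New-AzureSqlJobTarget -CustomCollectionName "MaintenanceDBs"

    # Populate the collection with individual Azure SQL databases
    Add-AzureSqlJobChildTarget -CustomCollectionName "MaintenanceDBs" `
        -ServerName "myserver.database.windows.net" -DatabaseName "SalesDB"
    Add-AzureSqlJobChildTarget -CustomCollectionName "MaintenanceDBs" `
        -ServerName "myserver.database.windows.net" -DatabaseName "InventoryDB"

    # Alternatively, target a single database directly (the returned object
    # exposes a TargetId, referenced later when creating the job)
    $target = New-AzureSqlJobTarget -ServerName "myserver.database.windows.net" `
        -DatabaseName "SalesDB"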
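
The third sketch stores the credentials under which jobs will connect to the target databases ("MaintenanceCredential" is a placeholder name):

    # Prompt for the user name and password to be used against target databases
    $credential = Get-Credential

    # Store them (encrypted) in the control database under a friendly name
    New-AzureSqlJobCredential -Credential $credential -CredentialName "MaintenanceCredential"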
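
The fourth sketch creates a custom execution policy overriding the defaults listed above; the timeout and retry values shown are arbitrary examples:

    # Define a custom execution policy with a shorter timeout and bounded retries
    New-AzureSqlJobExecutionPolicy -ExecutionPolicyName "ShortRunningPolicy" `
        -ExecutionTimeout ([TimeSpan]::FromHours(1)) `
        -InitialRetryInterval ([TimeSpan]::FromSeconds(10)) `
        -MaximumRetryInterval ([TimeSpan]::FromMinutes(5)) `
        -RetryAttempts 5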
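
The fifth sketch creates both flavors of schedule, a recurring interval and a one-time run:

    # Recurring schedule: fire every 15 minutes
    New-AzureSqlJobSchedule -ScheduleName "Every15Minutes" -MinuteInterval 15

    # One-time schedule: fire once, at the current UTC time
    New-AzureSqlJobSchedule -ScheduleName "OneTimeRun" -OneTime ((Get-Date).ToUniversalTime())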
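
The sixth sketch generates job content from a Transact-SQL script (with a DACPAC variant shown as well) and creates the job itself, tying together the content, credentials, and target defined earlier:

    # Wrap a Transact-SQL script (a hypothetical statistics update) into job content
    New-AzureSqlJobContent -ContentName "UpdateOrderStats" `
        -Script "UPDATE STATISTICS dbo.Orders"

    # Alternatively, wrap a data-tier application packaged as a DACPAC file
    New-AzureSqlJobContent -ContentName "AppSchemaV2" `
        -DacpacFileName "C:\dacpacs\AppSchemaV2.dacpac"

    # Create the job, referencing the content, credential, and target
    New-AzureSqlJob -JobName "UpdateOrderStatsJob" -ContentName "UpdateOrderStats" `
        -CredentialName "MaintenanceCredential" -TargetId $target.TargetId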
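
Finally, the job can be tied to the schedule via a trigger, or started on demand; the -ExecutionPolicyName parameter reflects the optional custom policy mentioned above:

    # Map the schedule to the job so it runs automatically every 15 minutes
    New-AzureSqlJobTrigger -JobName "UpdateOrderStatsJob" -ScheduleName "Every15Minutes"

    # Or start a one-off execution on demand with the custom execution policy
    $execution = Start-AzureSqlJobExecution -JobName "UpdateOrderStatsJob" `
        -ExecutionPolicyName "ShortRunningPolicy"

    # Check on the execution status
    Get-AzureSqlJobExecution -JobExecutionId $execution.JobExecutionId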

Obviously, this process is considerably more involved than the one available directly from the Azure portal. However, as we mentioned earlier, the portal-based approach is applicable (at the time of writing this article) only if your intention is to deploy Transact-SQL jobs to an existing Elastic Database pool. In our upcoming articles, we will provide an example of the PowerShell-based implementation.
