
Posted May 22, 2017

Getting Started with Azure SQL Data Warehouse - Part 5 - Page 2

By Arshad Ali

Workload Management

If you have experience working with SQL Server, you may have used Resource Governor to manage its workload, making resources available to all requests and utilizing them better. Along similar lines, SQL Data Warehouse has workload management capability built in. It lets you map each incoming request to pre-configured resource limits (memory and concurrency slots) so that no request is starved indefinitely, and so that SQL Data Warehouse makes better use of system resources to achieve the best performance for concurrent requests.

In SQL Data Warehouse, a resource class is a built-in server role with pre-assigned limits for memory and concurrency; those limits vary across DWU tiers. Irrespective of the DWU tier, there are four resource classes (smallrc, mediumrc, largerc, and xlargerc), and the resources assigned to each class at each tier are listed in “Concurrency and workload management in SQL Data Warehouse”. SQL Data Warehouse allocates resources to a request according to the resource class role membership of the login that submits it. The following query lists the database users that have been assigned to a non-default resource class:

-- List the database users that are members of a non-default resource class.
SELECT  m.name AS [Member Name],
        r.name AS [Resource Class]
FROM    sys.database_role_members rm
JOIN    sys.database_principals AS r ON rm.role_principal_id   = r.principal_id
JOIN    sys.database_principals AS m ON rm.member_principal_id = m.principal_id
WHERE   r.name IN ('mediumrc', 'largerc', 'xlargerc');

The default, small resource class (smallrc) is used if the login executing the query has not been assigned to any other resource class. Assign a login to a higher resource class when the queries it will execute are expensive. For example, consider mediumrc (medium resource class) for CTAS operations with large hash joins, for SELECT operations that need more memory to avoid spilling to disk, or for building, rebuilding, and reorganizing clustered columnstore indexes on smaller tables with 10-15 columns. Likewise, consider largerc (large resource class) for queries that are more expensive still: very heavy CTAS operations with huge hash joins or large aggregations and sorts (GROUP BY or ORDER BY clauses), or SELECT operations that require very large amounts of memory for such hash joins and aggregations. Be careful with xlargerc (extra-large resource class); use it only when a request genuinely requires extra-large resource consumption at run time. A login is moved into a resource class through role membership, as shown below.
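
A minimal sketch of assigning a resource class, assuming a database user named LoadUser already exists:

-- Assign the user to the large resource class; the new limits apply
-- to requests submitted after the membership change.
EXEC sp_addrolemember 'largerc', 'LoadUser';

-- Move the user back to the default (small) resource class:
EXEC sp_droprolemember 'largerc', 'LoadUser';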

Please note: the higher the resource class you use to execute queries, the more resources each query consumes, which limits the maximum number of concurrent queries. Hence, I would suggest going through “Concurrency and workload management in SQL Data Warehouse” in detail and deciding on a resource class based on your DWU tier and the concurrency needs of your workload.

Performance Optimization Guidelines

The Massively Parallel Processing (MPP) engine of SQL Data Warehouse does a fabulous job of executing your query in parallel across multiple nodes and provides fast response times out of the box; still, this section covers some of the controls you have at hand to optimize performance to meet your expectations.

  • Design
    • Make sure you have the right collation setting for your SQL Data Warehouse database when you create it; changing the collation after database creation is not supported as of now.
    • SQL Data Warehouse does not support cross-database queries. Hence, during migration you must consolidate tables from all your databases into a single SQL Data Warehouse database. We used this naming pattern for objects in SQL Data Warehouse and it worked well in migration: <PreviousDatabaseName_PreviousSchemaName>.<TableName>.
    • Notice that the previous database and schema names become a single concatenated schema name in SQL Data Warehouse. This also requires you to change all references to these objects wherever applicable (in views, stored procedures, etc.).
    • Choose the right hash distribution key column for a hash-distributed table to avoid data skew and unnecessary data movement. See “Getting Started with Azure SQL Data Warehouse - Part 3” for guidelines on deciding the right distribution key for a table.
    • Even if multiple tables are distributed on the same column, data movement can still occur if the column's data type is not exactly the same in each table. SQL Data Warehouse must perform an implicit type conversion before the join (e.g., when joining an INT column to a BIGINT column), so make sure distribution columns used in joins (or aggregations) have identical data types.
    • Consider using smallrc for all SELECT queries, and mediumrc or largerc for bigger data loads (especially when loading directly into a CCI table or rebuilding a CCI).
    • During data loads or maintenance you can scale your database to a higher DWU tier and scale it back down when done; this way you can balance cost and performance (see the sketch after this list).
    • As of this writing, SQL Data Warehouse supports a maximum of 32 concurrent queries. If you expect a larger number of concurrent queries, consider using SQL Server Analysis Services (SSAS) or Azure Analysis Services as a caching layer to support the higher traffic.
    • SQL Data Warehouse allows you to specify a label for your query; use it extensively, as it will help you greatly in troubleshooting (see the sketch after this list). Make sure, though, that you define a naming convention for these labels so they are not confusing to you or to others who work on troubleshooting.
    • Though this applies to any platform, in SQL Data Warehouse it is especially important to use the smallest possible column size in your table definitions. The Data Movement Service (DMS) uses fixed-length buffers sized to the maximum column width (even when the column's data type is variable-length) when moving data across nodes, much like data buffers in SQL Server Integration Services (SSIS).
    • Likewise, wherever Unicode is not needed, use VARCHAR or CHAR instead of NVARCHAR or NCHAR.
    • SQL Data Warehouse has several DMVs that can be used to monitor query execution. See “Monitor your workload using DMVs” for more detail on these DMVs.
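
      A minimal sketch of the scaling and labeling techniques above (database, table, and label names are assumed for illustration):

      -- Scale the database to a higher DWU tier before a heavy load:
      ALTER DATABASE MyDW MODIFY (SERVICE_OBJECTIVE = 'DW1000');

      -- Run a query with a label that follows a naming convention:
      SELECT COUNT(*) FROM dbo.FactSales
      OPTION (LABEL = 'nightly_load : FactSales : row_count_check');

      -- Find that request later when troubleshooting:
      SELECT request_id, status, total_elapsed_time, command
      FROM   sys.dm_pdw_exec_requests
      WHERE  [label] = 'nightly_load : FactSales : row_count_check';
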
  • Statistics
    • Whenever you face a performance issue, the first and foremost thing to check is whether statistics exist, and are up to date, on all the tables involved. In several of my cases, simply ensuring up-to-date statistics on the involved tables improved query performance several times over.
    • When there is a large data change in a table, consider re-creating or updating the statistics on that table (see the sketch after this list).
    • DMVs are available to find the difference between the row counts the control node holds in its statistics and the actual row counts on the compute nodes; update statistics only if the difference exceeds a threshold, say 5% or 10% (test to arrive at the number applicable in your specific case). This way you can optimize your load process by offloading statistics updates from the regular data load to a maintenance process, which often runs in off-peak hours anyway.
    • You can start by creating statistics on all columns of a table, then gradually identify the strategic columns used in joins, aggregations, filters, and sorts, and maintain statistics for those columns only. This minimizes the time it takes to re-create or update the statistics on that table.
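
      A minimal sketch of statistics maintenance (table and column names are assumed for illustration):

      -- Create single-column statistics on a strategic join/filter column:
      CREATE STATISTICS stat_FactSales_DateKey ON dbo.FactSales (DateKey);

      -- After a large data change, refresh one statistic or all of them:
      UPDATE STATISTICS dbo.FactSales (stat_FactSales_DateKey);
      UPDATE STATISTICS dbo.FactSales;
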
  • Clustered Columnstore index (CCI)
    • By default, every table in SQL Data Warehouse is created with a CCI; don't change this unless the table has considerably fewer rows.
    • When a table with a CCI receives many updates or deletes, over time it accumulates ghost rows in the delete bitmap (a row is not removed immediately from a CCI table; it is marked as logically deleted, and its physical storage is not reclaimed until the index is rebuilt). This is sometimes referred to as index fragmentation, and it can cause performance issues for your queries and data load processes. To get rid of these unwanted rows and remove the fragmentation, you can either re-create the table with CTAS or rebuild the CCI.

      To identify when an index rebuild is needed, there are a couple of Dynamic Management Views (DMVs) and catalog views in SQL Data Warehouse. You can use them to find the percentage of deleted rows, and if that percentage is beyond some threshold (like 10% or 15% - decide based on your workload and performance needs) you can consider rebuilding the index (see the sketch below).

    • When rebuilding a CCI, do it with a medium or large resource class, depending on the size of the table and your DWU tier.
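
      A sketch of both steps (the table name is assumed; mapping the per-node object_id back to a user table name requires additional joins through the sys.pdw_* catalog views, omitted here for brevity):

      -- Approximate percentage of logically deleted rows per object:
      SELECT object_id,
             SUM(deleted_rows) * 100.0 / NULLIF(SUM(total_rows), 0) AS deleted_pct
      FROM   sys.pdw_nodes_column_store_row_groups
      GROUP BY object_id
      ORDER BY deleted_pct DESC;

      -- Rebuild the CCI once the threshold is crossed:
      ALTER INDEX ALL ON dbo.FactSales REBUILD;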

  • Data Load Process
    • During a data refresh (ETL or ELT), we normally first land the data in a staging table and then transform it and move it into the final table (dimension or fact) on a defined interval. In this case, create the staging table as a HEAP (as opposed to the default CCI) and evaluate the performance; it should improve, because an interval refresh normally does not bring in enough data to benefit from a CCI, and you will often read all the data (all columns) from the staging table into the final table anyway (see the sketch below).
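
      A minimal sketch of a heap staging table (table and column definitions are assumed for illustration):

      -- Round-robin heap staging table: fast to load, cheap to scan in full.
      CREATE TABLE dbo.Stage_FactSales
      (
          SaleId   INT            NOT NULL,
          DateKey  INT            NOT NULL,
          Amount   DECIMAL(18, 2) NOT NULL
      )
      WITH (HEAP, DISTRIBUTION = ROUND_ROBIN);
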
  • Partitioning
    • When using partitioning, remember that SQL Data Warehouse already distributes data across 60 distributions; partition at a coarser granularity than you would in SQL Server, so that each partition gets at least about 1 million rows per distribution for better compression in the CCI - leading to better performance.
    • In SQL Server, you can switch a non-partitioned staging table into a partition of a partitioned table if the non-partitioned table has a CHECK constraint defined. However, because SQL Data Warehouse does not support CHECK constraints, you can only switch from one partitioned table into another (one partition at a time), so make sure your staging table has the same partitioning strategy as your final table. The distribution type and distribution key must match as well (see the sketch after this list).
    • Consider narrower partitions for recent data (for example, daily partitions in the current month) and wider partitions for older data (for example, monthly partitions for the previous months of the current year and yearly partitions for all earlier years), based on your query patterns.
    • For a large fact table, you can use partition switch-in to load data and partition switch-out to delete or archive it; this minimizes both the time taken and the impact on other queries referencing the table.
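
      A minimal sketch of a partition switch (table names and the partition number are assumed for illustration; both tables must have identical schemas, distribution, and partitioning):

      -- Switch the loaded staging partition into the fact table:
      ALTER TABLE dbo.Stage_FactSales SWITCH PARTITION 2
          TO dbo.FactSales PARTITION 2;
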
  • Migration
    • During historical data load, when you have a very large table to migrate, I would suggest moving the data in chunks; this makes your migration more manageable and gives you better restartability from the point of failure. If you have a partitioned table, you can migrate the data for each partition as one chunk (you may run multiple processes in parallel for a quicker migration). Once you have the data for all the partitions imported into internal staging tables within SQL Data Warehouse, you can use partition switch-in to combine them in your final table. When you use this technique, just make sure your internal staging tables (which hold the data for each partition) have exactly the same distribution type, distribution key, and partitioning strategy as your final table.

      If your source table is not partitioned, or if you don't want to create your final table as a partitioned table, you can use CTAS to create the final table by combining the data from all the internal staging tables (with UNION ALL). This process will be slower, though, and might run into out-of-memory errors; in that case, consider scaling your SQL Data Warehouse to a higher DWU tier and using largerc or xlargerc when running the CTAS (see the sketch below).
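
      A sketch of the CTAS approach (table names and the distribution key are assumed for illustration):

      -- Combine the per-chunk staging tables into the final table:
      CREATE TABLE dbo.FactSales
      WITH (CLUSTERED COLUMNSTORE INDEX, DISTRIBUTION = HASH(SaleId))
      AS
      SELECT * FROM dbo.Stage_FactSales_2016
      UNION ALL
      SELECT * FROM dbo.Stage_FactSales_2017;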

  • Polybase
    • Currently, Polybase supports a row width of up to 1 MB, which is sufficient in most cases. If you have a very wide row, consider vertically partitioning your data into multiple tables, using the same distribution key for each so that joining them back together causes no data movement.
    • If you have Unicode data, you can use Encoding = 'UTF16' in the external file format used to import the data (see the sketch after this list).
    • If your data has embedded carriage returns or line feeds, you currently have two choices: replace those values before exporting the data from your source, or use another file format like RCFILE.
    • As discussed in my earlier article, Polybase is designed to move data in parallel, and hence it's recommended to split the data across multiple smaller files rather than one large file. When you export data from your source, export it into 60 or more files rather than a single file or a small number of files.
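
      A minimal sketch of declaring UTF-16 encoding in an external file format (the format name and field terminator are assumed for illustration):

      -- Delimited-text format for Unicode (UTF-16 encoded) source files:
      CREATE EXTERNAL FILE FORMAT utf16_pipe_format
      WITH (
          FORMAT_TYPE = DELIMITEDTEXT,
          FORMAT_OPTIONS (FIELD_TERMINATOR = '|', ENCODING = 'UTF16')
      );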

Conclusion

In this article of the series, we looked at how workload management works, how to go about performance optimization in SQL Data Warehouse, and the different aspects that impact query performance, including the importance of statistics.

Resources

Getting Started with Azure SQL Data Warehouse - Part 1

Getting Started with Azure SQL Data Warehouse - Part 2

Getting Started with Azure SQL Data Warehouse - Part 3

Getting Started with Azure SQL Data Warehouse - Part 4

Design for Big Data with Microsoft Azure SQL Data Warehouse

What is Azure SQL Data Warehouse

SQL Data Warehouse free trial

See all articles by Arshad Ali


