Database Journal
Featured Database Articles

MS SQL

Posted January 4, 2018


Azure Cosmos DB Consistency Levels

By Marcin Policht

The majority of traditional database management systems are responsible for making sure that any changes to their data are consistently reflected in the results of subsequent queries. This behavior exemplifies strong consistency, which implies that multiple, concurrent processes will always have the same view of database content. While such assurances are frequently desirable, they involve a number of tradeoffs that negatively affect performance, scalability, and availability. Tolerance for these tradeoffs has been decreasing as distributed systems have become more prevalent, driven primarily by the expansion of cloud-based technologies. This trend has underscored the need for weak consistency, in which it is acceptable for concurrent processes to have widely different views of database content, as long as those views converge over time (which is the reason this model is described by the term eventual consistency). Azure Cosmos DB further extends the range of consistency options by providing support for bounded-staleness, session, and consistent prefix models. The purpose of this article is to provide an overview of these options.

Strong consistency has been a long-established approach to providing shared access to data hosted by relational databases. Its implementation leverages serialization and locking, along with the transactional model, to block concurrent, potentially conflicting data modifications. Pessimistic locking assumes a worst-case scenario and prevents read access to the content being modified for the duration of the corresponding transaction. While these techniques are relatively efficient (despite the possibility of deadlocks) and quite effective when applied to single database instances, they are rarely suitable when dealing with partitioned, distributed data stores, which are common in cloud environments. In such cases, the network latency associated with distributed transactions renders strong consistency impractical. Sharding and replication provide workarounds that help improve performance and availability; however, it sometimes becomes necessary to relax consistency requirements.
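To make the pessimistic approach concrete, the following sketch takes and holds an update lock on a single row in SQL Server for the duration of a transaction, so that conflicting modifications block until the change commits. It is a minimal illustration only; the Accounts table, its columns, and the connection string are hypothetical, and it assumes the pyodbc driver.

import pyodbc

# Hypothetical connection string, table, and columns -- for illustration only.
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=myserver;DATABASE=mydb;UID=myuser;PWD=mypassword")

def debit_account(account_id, amount):
    # autocommit=False groups the statements below into one transaction.
    conn = pyodbc.connect(CONN_STR, autocommit=False)
    try:
        cursor = conn.cursor()
        # UPDLOCK/HOLDLOCK take and keep an update lock on the selected row,
        # blocking other transactions that try to modify (or similarly lock)
        # it until this transaction commits or rolls back.
        cursor.execute(
            "SELECT Balance FROM Accounts WITH (UPDLOCK, HOLDLOCK) "
            "WHERE AccountId = ?", account_id)
        balance = cursor.fetchone()[0]
        if balance < amount:
            raise ValueError("insufficient funds")
        cursor.execute(
            "UPDATE Accounts SET Balance = Balance - ? WHERE AccountId = ?",
            amount, account_id)
        conn.commit()   # locks are released here
    except Exception:
        conn.rollback()
        raise
    finally:
        conn.close()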

The most common way of addressing this need is by implementing eventual consistency. In the case of NoSQL databases, such implementations rely on versioning as well as read and write quorums to eliminate the need for distributed transaction locks. Versioning mitigates the potential for lost updates that might surface in the absence of pessimistic locking: concurrent processes that attempt to change the same content rely on version information to verify that the data they read prior to the change is still current. It is possible to enclose these subsequent reads and writes in a single atomic operation in order to eliminate the possibility of lost updates, although this might result in decreased concurrency. The quorum mechanism minimizes inconsistencies while at the same time improving performance and allowing for increased scalability and availability. Its basic premise is that, in many cases, it might be sufficient to determine the results of a query or finalize a data change based on the state of a subset of replicas of a distributed database, rather than requiring that all of them be taken into account. While this obviously does not guarantee strong consistency, it offers a reasonable tradeoff between performance and linearizability. The degree of this tradeoff depends on the number of replicas in the read and write quorums.
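The following toy sketch (plain Python, not Cosmos DB internals) illustrates the quorum idea described above: with N replicas, a write quorum of W and a read quorum of R chosen so that R + W > N, every read set overlaps every write set, so the highest-versioned copy seen by a reader is the most recently acknowledged write.

import random
from dataclasses import dataclass

@dataclass
class Replica:
    value: str = ""
    version: int = 0

N, W, R = 5, 3, 3              # R + W > N guarantees quorum overlap
replicas = [Replica() for _ in range(N)]

def quorum_write(new_value):
    # Learn the latest version from a read quorum, then write a higher
    # version to any W replicas; the rest converge later (eventually).
    new_version = max(r.version for r in random.sample(replicas, R)) + 1
    for r in random.sample(replicas, W):
        r.value, r.version = new_value, new_version

def quorum_read():
    # The read set intersects the last write set, so the newest version
    # sampled here is the latest acknowledged write.
    return max(random.sample(replicas, R), key=lambda r: r.version).value

quorum_write("v1")
print(quorum_read())           # -> "v1"

Smaller read or write quorums improve latency and availability but widen the window of inconsistency, which is exactly the tradeoff described above.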

Cosmos DB leverages versioning and quorum-based reads and writes in order to provide support for five different consistency levels (a brief client-side sketch follows the list):

  • strong - this level is equivalent to the traditional approach to database consistency, ensuring that reads performed by multiple, concurrent processes return the same, most recent version of the data.
  • bounded-staleness - in this case, reads might be inconsistent; however, the database engine guarantees that the inconsistency will not exceed a threshold that you specify (expressed as a number of versions or a time interval).
  • session - this option is geared towards scenarios where it is important to guarantee that individual processes or applications will be able to read their own writes. Outside of the scope of the current session, reads and writes are subject to eventual consistency.
  • consistent prefix - by selecting this option, you ensure that the sequence of writes is always reflected, in the same order, during subsequent reads.
  • eventual - with eventual consistency, all writes are at some point propagated to all replicas; however, there are no guarantees regarding the degree of staleness of data returned by reads.
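The sketch referenced above shows how a client might request one of these levels using the azure-cosmos Python SDK. The endpoint, key, and database/container names are placeholders, and the consistency_level argument is assumed to be supported by the SDK version in use; a client cannot request a level stronger than the account default.

from azure.cosmos import CosmosClient, PartitionKey

ENDPOINT = "https://<your-account>.documents.azure.com:443/"   # placeholder
KEY = "<your-account-key>"                                     # placeholder

# Request session consistency for operations issued through this client.
client = CosmosClient(ENDPOINT, credential=KEY, consistency_level="Session")

database = client.create_database_if_not_exists(id="appdb")
container = database.create_container_if_not_exists(
    id="orders", partition_key=PartitionKey(path="/customerId"))

# With session consistency, this client is guaranteed to read its own write.
container.create_item({"id": "1", "customerId": "c42", "total": 19.99})
item = container.read_item(item="1", partition_key="c42")
print(item["total"])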

In the case of Cosmos DB, in order to implement strong consistency, the account must be associated with a single Azure region. The remaining consistency levels do not impose any restrictions regarding the location or number of Azure regions associated with the Cosmos DB account. You can assign the desired consistency level directly to the Cosmos DB account, which effectively applies it to all of its databases and collections. You also have the option of explicitly requesting a specific consistency level for individual read requests. The consistency level for queries against user-defined resources (documents and attachments) matches by default the consistency level assigned at the account level. However, you have the option of modifying the indexing mode on a per-collection basis in order to further customize the performance of read and write operations. For example, by choosing the lazy indexing mode, you increase the speed of bulk writes. Note that this results in an eventual consistency level for queries against user-defined resources in the corresponding collection.
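As an illustration of the per-collection tuning mentioned above, this sketch creates a collection with the lazy indexing mode by supplying an indexing policy at creation time. The account placeholders are the same as in the previous sketch, and the policy shape reflects the underlying resource model; per-request consistency overrides (the x-ms-consistency-level header at the REST level) are omitted here, since the SDK surface for them varies by version.

from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<your-account>.documents.azure.com:443/",
                      credential="<your-account-key>")
database = client.create_database_if_not_exists(id="appdb")

# Lazy indexing speeds up bulk writes, but queries against this collection
# are then subject to eventual consistency regardless of the account setting.
container = database.create_container_if_not_exists(
    id="telemetry",
    partition_key=PartitionKey(path="/deviceId"),
    indexing_policy={"indexingMode": "lazy"})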

In addition to its impact on performance, scalability, and availability, the choice of consistency level also has pricing implications. In particular, read operations with strong or bounded-staleness consistency are more expensive than those with session, consistent prefix, or eventual consistency. This correlation reflects the pricing model of Cosmos DB, which is based on request units (RUs). A single request unit represents the throughput of a single GET operation on a document of 1 KB in size.
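A back-of-the-envelope calculation helps make the pricing point concrete. The sketch below assumes that a 1 KB point read costs 1 RU at the relaxed levels and roughly twice that under strong or bounded staleness; the exact multiplier is an assumption for illustration only, since actual charges are reported per request by the service.

BASE_READ_RU = 1.0             # 1 KB point read at session/consistent prefix/eventual
QUORUM_READ_MULTIPLIER = 2.0   # assumed premium for strong / bounded-staleness reads

def required_throughput(reads_per_second, doc_size_kb, quorum_reads):
    # Read cost is treated as roughly proportional to document size here,
    # which is an approximation sufficient for sizing estimates.
    per_read = BASE_READ_RU * doc_size_kb
    if quorum_reads:
        per_read *= QUORUM_READ_MULTIPLIER
    return reads_per_second * per_read

print(required_throughput(500, 1, quorum_reads=False))   # ~500 RU/s
print(required_throughput(500, 1, quorum_reads=True))    # ~1000 RU/s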

Lastly, the choice of consistency model has an indirect impact on availability. In particular, with any consistency level other than strong, you can configure a Cosmos DB account to span two or more Azure regions, which gives you a 99.999% availability SLA. With an account associated with a single Azure region, which is required when using the strong consistency level, the corresponding SLA is 99.99%.

See all articles by Marcin Policht
