Featured Database Articles
Query Store is a new feature in SQL Server 2016 that, once enabled, automatically captures a history of queries, execution plans, and runtime statistics, retaining them so you can troubleshoot performance problems caused by query plan changes. This new feature greatly simplifies performance troubleshooting by helping you quickly find performance differences, even after a server restart or upgrade. Read on to learn more.
One of the challenges associated with running your workloads in Azure SQL Database is the limited level of management oversight. While you can easily scale a database vertically by changing its service tier and performance level, you do not have the option of running SQL Server Profiler or the Index Tuning Wizard, tools commonly used to evaluate, troubleshoot, and optimize database performance. Fortunately, there is an alternative approach that leverages the functionality incorporated into the recently introduced Azure SQL Database Advisor component of Azure SQL Database. In this article, we will present its basic characteristics.
If you are looking for a way to track the history of all the data changes to a table, then SQL Server 2016 has you covered with its new temporal table support. With temporal tables, SQL Server stores all the older data records in a history table while keeping the current records in the original table. Greg Larsen explores using the temporal table feature of SQL Server 2016 to create a history table for an existing SQL Server table.
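SQL Server 2016 creates and maintains the history table for you automatically. Purely as an illustration of the underlying pattern, the sketch below uses Python's built-in sqlite3 module (with hypothetical table and column names) to copy the outgoing version of a row into a history table on every update:

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")

# A current table plus a history table -- SQL Server 2016 maintains the
# second one automatically for system-versioned temporal tables.
conn.executescript("""
    CREATE TABLE Employee (
        EmpId     INTEGER PRIMARY KEY,
        Salary    INTEGER,
        ValidFrom TEXT
    );
    CREATE TABLE EmployeeHistory (
        EmpId     INTEGER,
        Salary    INTEGER,
        ValidFrom TEXT,
        ValidTo   TEXT
    );
""")

def update_salary(emp_id, new_salary):
    """Move the outgoing row version into history, then update in place."""
    now = datetime.now(timezone.utc).isoformat()
    conn.execute(
        """INSERT INTO EmployeeHistory (EmpId, Salary, ValidFrom, ValidTo)
           SELECT EmpId, Salary, ValidFrom, ? FROM Employee WHERE EmpId = ?""",
        (now, emp_id),
    )
    conn.execute(
        "UPDATE Employee SET Salary = ?, ValidFrom = ? WHERE EmpId = ?",
        (new_salary, now, emp_id),
    )

conn.execute("INSERT INTO Employee VALUES (1, 50000, ?)",
             (datetime.now(timezone.utc).isoformat(),))
update_salary(1, 55000)
update_salary(1, 60000)

# The current table holds only the latest row; history holds older versions.
print(conn.execute("SELECT Salary FROM Employee").fetchall())         # [(60000,)]
print(conn.execute("SELECT Salary FROM EmployeeHistory").fetchall())  # [(50000,), (55000,)]
```

With real temporal tables none of this bookkeeping is written by hand: declaring the table `WITH (SYSTEM_VERSIONING = ON)` makes the engine populate the period columns and history table itself.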
Oracle’s REMAINDER function can return negative results; read on to see why, and what can be done to work around the issue.
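Oracle's REMAINDER(n2, n1) is computed as n2 - n1*ROUND(n2/n1), with ties rounding to the even quotient; whenever the quotient rounds upward, the result goes negative. Python's math.remainder implements the same IEEE 754 remainder, so the behavior can be sketched outside Oracle:

```python
import math

# Oracle's REMAINDER(n2, n1) is n2 - n1 * round(n2 / n1), where ties
# round to the even quotient; math.remainder follows the same definition.

# Quotient 11/3 (about 3.67) rounds up to 4, so the result is negative:
print(math.remainder(11, 3))   # 11 - 3*4 = -1.0

# Quotient 10/3 (about 3.33) rounds down to 3, so the result stays positive:
print(math.remainder(10, 3))   # 10 - 3*3 =  1.0

# A tie (6/4 = 1.5) rounds to the even quotient 2, again going negative:
print(math.remainder(6, 4))    # 6 - 4*2  = -2.0

# MOD-style arithmetic, by contrast, keeps the sign of the dividend:
print(math.fmod(11, 3))        # 2.0
```

The common workaround in Oracle is to use MOD instead of REMAINDER when a result with the sign of the dividend is expected.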
Oracle's calibrate_io procedure populates the data dictionary with disk 'performance' data to give the optimizer a fighting chance at a decent plan. Read on to see why the data it generates may not be the most accurate with respect to performance.
There are a lot of 'explanations' offered for the ORA-04091 error; read on to find out what is really going on and how to address the situation.
Despite the sophistication of the latest DB2 software versions and the power of current IBM z/server technology, it is still possible for performance and data availability to deteriorate due to a variety of factors, including increased dataset extents, loss of clustering, and index page splits. This article presents simple SQL statements that the database administrator (DBA) can execute against the DB2 catalog to determine whether one or more application databases suffer from common maladies, and what the DBA can do to fix or mitigate potential problems.
What is next for big data? Some experts claim that data "volumes, velocity, variety and veracity" will only increase over time, requiring more data storage, faster machines, and more sophisticated analysis tools. However, this view is short-sighted and does not take into account how data degrades over time. Analysis of historical data will always be with us, but the most useful analyses will be generated from data we already have. To adapt, most organizations must grow and mature their analytical environments. Here are the steps they must take to prepare for the transition.
Business Intelligence (BI) has matured over the past two decades. The next few years will be critical for the information technology staff, as they attempt to integrate and manage multiple, diverse hardware and software platforms. This article addresses how to meet this need, as users demand greater ability to analyze ever-growing mountains of data, and IT attempts to keep costs down.