As to DBCC, the message was "you don't need to run it". However, it
is very fast, scanning at near disk speed. It makes a single pass over the
database, checking all objects in parallel (the current DBCC checks each table
one after another, then checks each secondary index). The 7.0 DBCC will generate its
reports as a result set and will optionally repair some problems.
Performance Throughput vs. 6.5
The speaker showed a table of throughput figures for these scenarios (the
figures themselves did not survive transcription):
Single table with non-clustered index
Single table, 1 clustered index
Single table, 1 clustered, multiple non-clustered
Full database (complex customer, 47GB)
Why you don't have to run DBCC: There is a fast fail philosophy; problems are
caught immediately. SQL Server 7 has simpler data structures. They recommend you
run it until you gain confidence in 7.0 and that eventually you will find that
it is not part of regular maintenance.
The proposed things that DBCC will repair (not definite for the 7.0 release)
are allocation structures and minor BTREE inconsistencies. It may rebuild
damaged indexes and remove damaged BLOBs and records. Its repairs can be rolled
back.
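As a sketch of how the check-and-repair flow surfaced in later builds (the exact syntax was not final at the time of these notes, and the database name is just an example):

```sql
-- Repairs require the database to be in single-user mode
EXEC sp_dboption 'pubs', 'single user', 'true'

-- Run the consistency check; results come back as a result set.
-- REPAIR_REBUILD fixes minor problems (allocation structures,
-- damaged indexes) without risking data loss.
DBCC CHECKDB ('pubs', REPAIR_REBUILD)

EXEC sp_dboption 'pubs', 'single user', 'false'
```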
Bulk Data Load (BCP)
There is a new technique for bulk loading that uses OLEDB internally. You can
use constraints and indexes and still have fast loading. A single client loads
twice as fast as 6.5. Parallel loads into a single table scale linearly with the
number of CPUs. Special techniques for BLOBs give faster load rates: 5.1 MB/sec
from a single client.
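The new load path is exposed through the BULK INSERT statement as well as through bcp; a minimal sketch (table and file names are hypothetical):

```sql
-- Load a character-mode data file through the new bulk load path.
-- CHECK_CONSTRAINTS asks the load to enforce table constraints,
-- which the new mechanism supports without giving up speed.
-- TABLOCK is what lets several clients load the same table in
-- parallel.
BULK INSERT Sales.dbo.Orders
   FROM 'c:\load\orders.dat'
   WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n',
         CHECK_CONSTRAINTS, TABLOCK)
```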
Alerts and Jobs ("The artist formerly known as Tasks")
These remain under the control of the SQL Agent (the equivalent of the SQL
Executive in 6.5).
Very little appears to have changed in the alerting mechanism. The presenter
said that there were threshold alerts, but the Books Online (Beta 2) only refer
to Performance Monitor threshold alerts, which are just the same as they are now.
Perhaps this feature will be added in Beta 3.
The job "engine" is much enhanced. It permits multi-step jobs
with dependencies. You can script jobs with VBScript. Jobs can be scheduled on
a regular (calendar) basis as they can be in 6.5. They can also be scheduled to
run when the CPU is idle, or at, above, or below a threshold. Jobs can be
triggered by alerts just as they can in 6.5.
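A sketch of what this looks like through the msdb stored procedures behind SQL Agent (job, step, and database names are hypothetical):

```sql
-- Create a two-step job under SQL Agent control.
EXEC msdb..sp_add_job @job_name = 'Nightly maintenance'
EXEC msdb..sp_add_jobstep @job_name = 'Nightly maintenance',
     @step_name = 'Backup', @subsystem = 'TSQL',
     @command = 'BACKUP DATABASE pubs TO DISK = ''c:\backup\pubs.bak'''
EXEC msdb..sp_add_jobstep @job_name = 'Nightly maintenance',
     @step_name = 'Check', @subsystem = 'TSQL',
     @command = 'DBCC CHECKDB (pubs)'
-- Schedule it to run whenever the CPU becomes idle
-- (freq_type 128 = start when the computer is idle).
EXEC msdb..sp_add_jobschedule @job_name = 'Nightly maintenance',
     @name = 'On idle', @freq_type = 128
-- Target the job at the local server.
EXEC msdb..sp_add_jobserver @job_name = 'Nightly maintenance'
```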
Multi-server operation is built into the SQL Agent. Jobs can be defined
centrally and support remote execution on multiple servers. Status of jobs can
be rolled up to this central server. These jobs are "pulled" to remote
servers. This increases scalability.
For example, one might define jobs on a server called "Queen" that
back up databases on servers A, B, and C. Servers A, B, and C connect to the
Queen server and pull the jobs down. They execute them on their own schedule and
report the results to the Queen. They do not need to be connected to the Queen
server while running their jobs.
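On a central (master) server like "Queen", targeting works roughly as below, assuming the remote servers have already enlisted with the master (server and job names follow the example above and are hypothetical):

```sql
-- On the central server, point an existing job at target servers;
-- SQL Agent on each target pulls the job down and later reports
-- its status back up to the master.
EXEC msdb..sp_add_jobserver @job_name = 'Backup all databases',
     @server_name = 'ServerA'
EXEC msdb..sp_add_jobserver @job_name = 'Backup all databases',
     @server_name = 'ServerB'
```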
The demo for this section was the new version of SQLMaint. It now includes
all user databases in the maintenance plan: if you add a database, it is
automatically included in the plan. It will be possible to log the success or
failure of SQLMaint runs to a table, which can be either local or remote.
Data Transformation Services
The presentation on this topic in this session was short. I will describe DTS
in a separate article.
SQL Server Profiler
This is a replacement for the SQL Trace program in 6.5. It serves as a
monitoring tool and can also gather a "workload" for use by the Index
Tuning Wizard.
It captures Open Data Services, Query Processor, Transaction Log, and Lock
"events". It will be possible to see "inside" stored
procedures and triggers rather than just seeing the "EXEC MyProc" statement. It uses a general-purpose event model built from event producers and consumers.
It is clear that this model is extensible, though I have not yet learned how
to define my own event producer or custom consumer.