Relational Databases 101

Relational Databases 101 is excerpted from the Addison-Wesley title, Hitchhiker’s Guide to Visual Studio and SQL Server.

Hitchhiker’s Guide to Visual Studio and SQL Server : Best Practice Architectures and Examples
By William R. Vaughn with Peter Blackburn
Published by Addison-Wesley Professional
Series: Microsoft Windows Server System Series
ISBN: 0321243625; Published: 11/8/2006; Copyright 2007; Pages: T; Edition: 7


Many of my readers come from backgrounds that don’t include formal training on the best ways to design and create efficient, business-class relational databases. If you arrive here with Microsoft Access or FoxPro experience, you’re at an advantage—you know that, for the most part, the process of creating a database is hidden from you by the application’s IDE—you just use drag-and-drop or use wizards to build the databases and tables you want. That’s not at all bad, but without an in-depth understanding of how to best create, tune, and protect a relational database, I suspect that the relational “normalcy,” relational integrity constraints, performance, and scalability of the result might not be particularly stellar. And, more importantly, the data might not be particularly secure. By “normalcy,” I mean how well the database conforms to the recognized standards of relational design where database designers attempt to “normalize” databases to at least the third level. If you’re not sure how to do this, or even what this means, have no fear—I’ll explain this later. The SQL gurus with whom I work (like Peter Blackburn, Kimberly Tripp, and a litany of others) are convinced that more problems can be solved by efficient database design than by the cleverest, best-written application front-end.

IMHO – It’s not how fast you ask the question—it’s how long it takes to find the answer that gates performance.

Getting Started with Solid Database Design

The Microsoft Books Online (BOL) documentation seems to fall a bit short in this important subject, so this chapter might be helpful for those who need a more complete understanding of how to create a best-practice relational database. The problem faced by any database designer is knowing what’s going to be stored ahead of time—before the first table is created. That’s always been (and always will be) a problem. As I’ve said before, a customer rarely knows what they want until they don’t get it.

To get started on the right foot, I recommend a good course in relational theory like Extended Relational Analysis. This can do a world of good—but its depth is well beyond the scope of this book. In courses like this, you learn how to ask the right questions for each “entity” you expect to store in the database.

I also think that using a (big) whiteboard to lay out the database with your team (or customer) can help visualize the data. Getting input from everyone who is going to consume the data is essential. How I design a database for a single-focused project is very different from the way it’s designed for projects where a small multitude of groups expect to consume the data. Admittedly, database development by committee is tough, and one should try to avoid those situations, but leaving town might not be an option.

Before we get into the academics of normalization, let’s spend a few moments in quiet contemplation and focus on a few guiding principles. As you design your database, you should keep these basic tenets in mind:

  • Each table needs a unique identifier. That is, you need to choose one or more columns to permit SQL Server to find specific rows to return or update without including other irrelevant rows. In SQL Server, this typically means each table should have an “ID” column (typically cast as an Identity integer) that gives the row a unique (SQL Server–generated) value. You should define this column using the Primary Key (PK) constraint (as discussed in Chapter 2, “How Does SQL Server Work?”). For example, Au_ID is the unique identifier for the Authors table in the sample Biblio database.
  • Each table should store only one kind of information and not repeat information in multiple columns. As I discuss next, this is where normalization comes in. SQL Server is tuned to work with small, tight rows that contain relatively few columns. If you find your table has more than a dozen columns, you’re treading off the boards and into the swamp. Remember, the largest row you can define is only 8K (not counting BLOB columns).
  • Microsoft feels that you should avoid columns that can be set to NULL—I’m not so sure. That is, they feel that you should avoid columns where you might not have access to the data at all and cannot (logically) assign an arbitrary default. Each time you define a column as permitting NULL (making it “nullable”), SQL Server incurs extra overhead. It makes sense to move these columns to another table and cross-reference them to the table’s PK as long as the database complexity does not get out of hand.
  • Each column needs to be defined both with the content in mind and with the constraints needed to keep it pure. It’s not enough to type a column as integer and hope that the data entered therein is going to be pure. All columns, especially numeric columns, need to have (at least) CHECK constraints defined, if not T-SQL rules. Columns should be defined to hold what’s expected to be saved—and no more. Needlessly bloating data type capacity simply chews up memory to no good purpose. If you have columns that contain text, but that text is never expressed in Unicode, don’t use a Unicode type. If you have a date column but don’t need to store time with 3.33-millisecond accuracy, use smalldatetime instead of datetime. You get the idea. In this case, less is more—more space saved, more memory available to store other pertinent stuff. I’ll show you how much each column costs in memory near the end of this chapter.
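The guidelines above can be sketched in T-SQL. This is a hypothetical Authors table (the column names besides Au_ID are illustrative, not taken from the book’s Biblio database) showing an Identity primary key, NOT NULL columns, right-sized data types, and a CHECK constraint:

```sql
-- Hypothetical table illustrating the guidelines above: an IDENTITY
-- primary key, NOT NULL wherever possible, right-sized types, and a
-- CHECK constraint to keep the data "pure".
CREATE TABLE Authors (
    Au_ID     int           IDENTITY(1,1) NOT NULL,
    Author    varchar(50)   NOT NULL,   -- never Unicode here, so varchar, not nvarchar
    YearBorn  smallint      NULL,       -- nullable only because the data may be unknown
    DateAdded smalldatetime NOT NULL
        DEFAULT (GETDATE()),            -- minute accuracy is enough; no datetime needed
    CONSTRAINT PK_Authors PRIMARY KEY (Au_ID),
    CONSTRAINT CK_Authors_YearBorn
        CHECK (YearBorn IS NULL OR YearBorn BETWEEN 1000 AND 2100)
);
```

Note how each type is the smallest that safely holds the expected data, and how the CHECK constraint rejects impossible values at the server rather than trusting the application.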

IMHO – Understand that all data is evil until proven innocent—it’s not in the U.S. Constitution, but given the state of the current Congress, it just might get there.

  • Start thinking about a concurrency strategy from the beginning. Determine how data is to be shared (if at all). Consider that many “single-user” databases are doomed to failure once they are “converted” to multi-user. Think about the mechanism you’re planning to use to determine if a row has changed once it’s fetched. For example, you might (perhaps ought to) use a “Rowversion” or “TimeStamp” column to help track access—it can make Visual Studio’s job a lot easier (and yours, too) when it comes time to write action commands to change the database.
  • Avoid BLOBs in the database. I have been suggesting this for over a decade, and those who have listened have been able to build a smaller, faster, and simpler database. If you have BLOBs, store them in files (or on RO media) and use the database to store the path.
  • Finally, and perhaps most importantly, for less-experienced developers, I think that you should strive to keep your database simple—simple to understand, support, and maintain. Excessive complexity is the bane of many a mature and amateur database designer.
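The concurrency point above can be sketched with SQL Server’s rowversion type. The table, column, and parameter names here are invented for illustration; the pattern is to re-check the fetched rowversion value in the UPDATE’s WHERE clause so a row changed by another user is not silently overwritten:

```sql
-- Add a rowversion column; SQL Server updates its value
-- automatically every time the row changes.
ALTER TABLE Customers ADD RV rowversion;

-- Optimistic update: @RV holds the rowversion value fetched along
-- with the row. If another user changed the row in the meantime,
-- no rows match and @@ROWCOUNT comes back 0.
UPDATE Customers
   SET Name = @Name
 WHERE CustID = @CustID
   AND RV = @RV;

IF @@ROWCOUNT = 0
    RAISERROR ('Row was changed by another user--refetch and retry.', 16, 1);
```

This is the sort of action command Visual Studio can generate for you when a rowversion column is present.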

Understanding Relational Database Normalization

In a nutshell, building a “good” optimized, relational database is mostly about normalization. Once you understand the basic principles of normalization, SQL Server should be able to manage your data more efficiently, the applications you write should be able to return data more efficiently, and you’ll find it a lot easier to protect your database’s data and relational integrity.

So, what is “normalization” and how does it help performance and all that other good stuff? Well, normalization is simply the set of relational database techniques developed to efficiently organize the information you want to manage in a relational database. The academics talk about (at least) five “normal forms,” but most database designers stick to the first three forms and seldom go further. The benefit of implementing further levels is usually not that great when compared to the costs—especially in smaller databases.

Here are the basic tenets of the first three normal forms.

  1. First normal: Don’t define duplicate columns in the same table. Each column in a table should contain “different” information. This does not mean avoiding use of identical column names (that’s prohibited by the SQL engine), but it does mean that any two columns should not store basically the same information. For example, don’t create a table with two or more addresses for a customer (such as a home and business address), as shown in Figure 3.1. The solution to this problem is best implemented by the second normal form.

Figure 3.1
Unnormalized Customers table.

  2. Second normal: All attributes (columns) in a table that are not dependent on the primary key must be eliminated. This means you need to create a separate table for each logical group of data and identify each row with a unique set of columns (its own primary key). In this case, create a separate Addresses table and connect the two tables together with their primary keys, as shown in Figure 3.2.

Figure 3.2
Normalized Customers and Addresses tables.

  3. Third normal: Tables cannot include duplicate information. For example, if two tables require a common field, a separate table should be created to manage that column. Our basic design already conforms to the third normal form. However, as an example, take a look at the Biblio database. In this case, I created a TitleAuthor table that contains fields common to the Titles and Authors table.
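The three forms above can be sketched as DDL. These tables are a simplified guess at the Figure 3.2 layout (the column names are illustrative): Addresses gets its own primary key and refers back to Customers, so any number of addresses can be stored without duplicate columns, and a TitleAuthor junction table holds the columns common to Titles and Authors:

```sql
CREATE TABLE Customers (
    CustID int IDENTITY(1,1) PRIMARY KEY,
    Name   varchar(50) NOT NULL
);

-- Second normal form: addresses move to their own table, keyed by
-- their own PK, so a customer can have home, business, and more
-- without repeating address columns in Customers.
CREATE TABLE Addresses (
    AddrID   int IDENTITY(1,1) PRIMARY KEY,
    CustID   int NOT NULL REFERENCES Customers (CustID),
    AddrType varchar(10) NOT NULL,   -- 'Home', 'Business', ...
    Street   varchar(50) NOT NULL,
    City     varchar(30) NOT NULL
);

-- Third normal form: the fields common to Titles and Authors live
-- in a junction table, as with TitleAuthor in the Biblio database.
CREATE TABLE TitleAuthor (
    ISBN  varchar(20) NOT NULL,
    Au_ID int         NOT NULL,
    CONSTRAINT PK_TitleAuthor PRIMARY KEY (ISBN, Au_ID)
);
```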

No, this is not an in-depth discussion of normalization or relational theory, but it’s enough to get you started. It’s also important to know that many database developers bend these rules from time to time to get more efficiency out of their databases. Sometimes, they add a bit more detail for an entity in a parent table, so it’s not always necessary to JOIN to another table just to get one or two bits of information. Yes, these changes mean that the data must be kept current in two different tables, and if someone else comes along and does not realize what’s going on….

Understand as well that stored procedures or object-based approaches can (and do) help resolve these issues. By blocking direct access to base tables, developers can write server-side code to dereference the data in the base tables and get away with some tactics that would cause quite a bit of trouble if direct table access were permitted.
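A minimal sketch of the stored-procedure approach described above (the procedure and column names are invented for illustration): direct access to the base tables is denied, and callers reach the data only through a procedure that hides how the tables are joined:

```sql
-- Block direct access to the base tables...
DENY SELECT ON Customers TO PUBLIC;
DENY SELECT ON Addresses TO PUBLIC;
GO

-- ...and expose the data only through server-side code, so the
-- dereferencing logic can change without breaking applications.
CREATE PROCEDURE GetCustomerAddresses
    @CustID int
AS
BEGIN
    SELECT c.Name, a.AddrType, a.Street, a.City
      FROM Customers AS c
      JOIN Addresses AS a ON a.CustID = c.CustID
     WHERE c.CustID = @CustID;
END;
GO

GRANT EXECUTE ON GetCustomerAddresses TO PUBLIC;
```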

Once you have decided what tables you need, you need to use one of the SQL Server or Visual Studio tools to create them (as I illustrate in Chapter 4, “Getting Started with Visual Studio”). But before doing that, I often draw these tables on a whiteboard, which makes it easier to “see” how the data is to be stored and how the tables are related. In Visual Studio, you can use the database diagramming tool to help at this phase, and the ink does not stain your fingers as much.
