In this second article about
Oracle on Windows, we will cover several more areas of interest shared by
Oracle and Windows. Using Oracle on Windows is, in many ways, much simpler than
using a UNIX variant as the underlying operating system. That point alone
provides enough justification to learn more about Oracle on Windows if you are:
- New to Oracle and want to learn more about it in an easy-to-configure-and-use type of environment (your home PC and a free copy of the Oracle RDBMS software downloaded from Oracle Technology Network),
- Experienced with Oracle and want to experiment with more advanced features, commands, and scenarios without putting live or actual data (and your job) at risk,
- Anywhere in between new and experienced, and wanting to get more familiar with concepts tested on certification exams, or
- Using SQL Server and want to see what a real, grown-up database looks like (for the SQL Server fanatic who emailed me after the last Oracle on Windows article and said “anyone who uses Oracle on Windows is incompetent and irresponsible,” yes, this one is for you).
As in the previous article,
this one will also highlight several concepts tested on the Microsoft Certified
Professional exam for Windows 2000 Professional. These are what I call the
deadly “D” concepts – because not understanding them is potentially deadly to
your database. Well, maybe not exactly deadly, but close to it. The “D”
concepts are disk (de)fragmentation, disk compression, and disk partitioning.
Practically all disks experience
fragmentation as data is transferred on and off a disk. It is tempting – and
easy – to use the Disk Defragmenter Windows utility or Norton SystemWorks Speed
Disk utility (which is commonly found in bundled software packages on new
computers) to speed up what appears to be a slow disk. Disk optimization,
whether it is through placement of key files or defragmentation, is a good
thing. However, do that before you create a database or while the database is
shut down, and AFTER you have taken a backup. An enormous amount of disk I/O
takes place during a defrag procedure, so why take the chance that one of your
datafiles is involved in a little defrag hiccup? The instructions that come
with Speed Disk, for example, tell you to close all programs and make a backup
of your data before running the utility. These utilities are very reliable, but
it is prudent to take precautions ahead of time, especially when they are easy to take.
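If you want to script that precaution, here is a minimal sketch in Python of a cold backup pass to run before a defrag. The directory names are made up for illustration; your real datafile directory will be something like D:\oracle\oradata\ORCL, and the database must be shut down first.

```python
import os
import shutil

# Illustrative layout only -- substitute your real datafile directory
# (e.g. D:\oracle\oradata\ORCL) and run this with the database shut down.
ORACLE_FILES = "oradata"
BACKUP_DIR = "backup_before_defrag"

# Set up a dummy datafile so the sketch runs end to end.
os.makedirs(ORACLE_FILES, exist_ok=True)
with open(os.path.join(ORACLE_FILES, "system01.dbf"), "wb") as f:
    f.write(b"\0" * 1024)

# The actual backup: copy every file (datafiles, control files,
# redo logs) to the backup area before touching the defrag utility.
os.makedirs(BACKUP_DIR, exist_ok=True)
for name in os.listdir(ORACLE_FILES):
    src = os.path.join(ORACLE_FILES, name)
    if os.path.isfile(src):
        shutil.copy2(src, os.path.join(BACKUP_DIR, name))
```

Nothing fancy, but if the defrag utility hiccups on one of your datafiles, you have a clean copy sitting out of harm's way.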
You can compress data and
files on an NTFS drive or partition to save space. However, given how
inexpensive 80GB disk drives are ($170 at a big-name office supply store,
having spent all of ten seconds on a Google search to see what today’s price is,
and surely cheaper if you look elsewhere or wait for a sale), why put your
datafiles through the extra labor of compressing and uncompressing during reads
and writes? There is no real or practical need to use disk compression, and
besides, Oracle does not support datafiles on compressed drives to begin with. You can
compress script files and other non-database types of files if you need to, but
again, it is really not necessary.
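If you do want to squeeze a few script files down, file-level compression is a safer route than compressing the whole volume. Here is a quick sketch using Python's gzip module; the file name is made up for the example.

```python
import gzip
import os
import shutil

def gzip_file(path):
    """Compress a single file with gzip, leaving the original in place."""
    gz_path = path + ".gz"
    with open(path, "rb") as src, gzip.open(gz_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
    return gz_path

# Example: compress a throwaway script file (name is hypothetical).
with open("cleanup.sql", "w") as f:
    f.write("-- nightly cleanup script\n" + "DELETE FROM audit_log;\n" * 200)

gz = gzip_file("cleanup.sql")
print(os.path.getsize("cleanup.sql"), "bytes down to", os.path.getsize(gz))
```

The point being: compress the odd script if you must, but leave the datafiles alone.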
Disk compression and disk
encryption are mutually exclusive on an NTFS partition – a file can be
compressed or encrypted, but not both (you can count on
questions about this concept on the Microsoft exams). NTFS permissions provide
a good enough layer of security for your datafiles, just as “rwx”
user-group-other permissions do in UNIX. So do not fall into the trap of
thinking that since you cannot use disk compression, you will jump on disk
encryption just because you can. It is pointless to use either feature.
Without getting into RAID, BAARF,
and types of disks, the point to be understood here has to do with what happens
during copying and moving on and across NTFS partitions. Why is this important?
There are two quick reasons. First, if you are planning to take the MCP exam, you
should understand this concept forwards and backwards. You are certain to see
questions about what happens when you move this file here and that file there.
Second, if you are using Explorer windows to drag database files from one
location to another (like during a cold backup or recovery scenario), you do not
want to be wasting time moving a file when you meant to be copying it because
you dragged it to a directory on the same partition.
If you are trying to recover
a lost control file, for example, and you drag a known good one from location A
in D:\directory1 to location B in D:\directory2, you may have successfully
restored the control file for location B while creating the same
problem all over again, because the file in location A is now gone (it was moved,
not copied, because you stayed on the same partition).
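The safe habit is to copy explicitly rather than rely on drag semantics. Here is a small sketch; the directory and file names are stand-ins for D:\directory1, D:\directory2, and a real control file.

```python
import os
import shutil

# Hypothetical layout: both directories on the same partition,
# mimicking D:\directory1 and D:\directory2 from the example above.
os.makedirs("directory1", exist_ok=True)
os.makedirs("directory2", exist_ok=True)

with open(os.path.join("directory1", "control01.ctl"), "wb") as f:
    f.write(b"known good control file")

src = os.path.join("directory1", "control01.ctl")
dst = os.path.join("directory2", "control01.ctl")

# An Explorer drag on the same partition is a MOVE: the source vanishes.
# An explicit copy leaves the known-good original where it was.
shutil.copy2(src, dst)

print(os.path.exists(src))  # the original survives
print(os.path.exists(dst))  # and the restore succeeded
```

An explicit copy costs you nothing, and it cannot silently turn into a move the way a same-partition drag does.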
One rule of thumb is to use
more disks instead of bigger files (and there are arguments the other way, as
well). The problem you can run into here is having too many copies of a file if
you do not understand what happens to a file when you drag it across partitions
(it gets copied and the original is left in place). In either case, the potential
is there to misplace or orphan a file.
It is bad enough on UNIX to
lose a file (that’s “lost” – as in you don’t know where it is on the file
system). If you are sure of the name of the file, you can do a “find . -name file_name”
and hope you were on the correct partition (or high up enough in the directory
structure) to begin with. Another trick you can use, which is pretty useful for
documenting installed program files, is to run “ls -laR > files” to
recursively list all files from your current directory on down, redirecting the
output to a file named “files.” Then you can view the “files” file and
search for your missing file. However, this is supposed to be about Windows.
What can you do on Windows to find a missing file?
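One option is to script the search yourself. Here is a rough Python equivalent of the UNIX “find . -name file_name” trick, using os.walk; the starting directory and the “*.ctl” pattern are just examples.

```python
import fnmatch
import os

def find_files(root, pattern):
    """Recursively walk root and return every path whose file name
    matches pattern -- a simple stand-in for 'find root -name pattern'."""
    matches = []
    for dirpath, _, filenames in os.walk(root):
        for name in fnmatch.filter(filenames, pattern):
            matches.append(os.path.join(dirpath, name))
    return matches

# Hypothetical example: hunt for stray control files from the current
# directory on down (adjust the root and pattern to your own layout).
for hit in find_files(".", "*.ctl"):
    print(hit)
```

Point it at the root of a drive and it will dig through the whole partition, just like the recursive "ls -laR" trick on UNIX.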
You can run a search for the
file using its name and whatever wildcards you need to identify it sufficiently.
If you use an advanced search (using Windows XP as an example), and you know
the size of the file is larger than 1MB, you can search based on size
(date-based searches are also available). You can view directories and have
them sorted by name, date modified, type of file and size. So, all is not lost,
so to speak, when you lose a file. You can save yourself the time and headache
by understanding how moving and copying works on NTFS (which stands for new
technology file system, supposedly) partitions.
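Size- and date-based filters are easy to script as well. Here is a sketch in Python built on os.walk; the 1MB and seven-day thresholds are arbitrary examples, not anything mandated by Windows or Oracle.

```python
import os
import time

def find_large_recent(root, min_bytes, max_age_days):
    """List files under root that are larger than min_bytes and were
    modified within the last max_age_days -- roughly what the Windows XP
    advanced search dialog does with its size and date criteria."""
    cutoff = time.time() - max_age_days * 86400
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if st.st_size > min_bytes and st.st_mtime >= cutoff:
                hits.append((path, st.st_size))
    return hits

# Example: datafiles are rarely under 1MB, so filter on size
# to cut the noise from small files.
for path, size in find_large_recent(".", 1 * 1024 * 1024, 7):
    print(path, size)
```

Knowing the rough size and age of a missing datafile narrows the candidates quickly, whether you use the search dialog or a script like this.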
Oracle on Windows is easy to
use, and even easier to use if you have a good understanding of how the Windows
operating system works. With Oracle’s new pricing scheme aimed at enticing
small businesses to make the switch from the
almost-good-enough-most-of-the-time “what do you want to be when you grow up”
SQL Server arena, you almost cannot afford not to know more about the Oracle
and Windows combination.