Recently I posted an article here on Swynk about how to
move all of your data
from one server to another. In that article I outlined a "trick" you
can use if the new server will have the same logical drive configuration as the
old one. If the logical drive configuration is different or you only want to
move a portion of your files, usually the easiest way is to detach them from the
original server and then attach to the new one, which gives you the chance to
change the physical locations of the files for each database. The downside is
that if you have a lot of databases to move, it gets very tedious.
If you haven’t used them before, SQL gives you three stored procedures for
this process: sp_detach_db, sp_attach_db, and sp_attach_single_file_db.
Running sp_detach_db will remove the database from SQL, but leaves the files
intact. You can then move them to a different location on the server or even to
a different server, then use sp_attach_db to "reconnect" the database
and you’re back in business. Detaching the database is also useful when you want
to archive a database but may need to use it again at some time in the future.
When you detach the database, you have the option to update the statistics
before doing so. You can always update the stats after you have reattached it.
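The basic detach/move/attach sequence looks like this sketch; the database name and file paths here are hypothetical:

```sql
-- Detach the database; @skipchecks = 'false' runs UPDATE STATISTICS
-- before detaching ('true' would skip it).
EXEC sp_detach_db @dbname = 'Sales', @skipchecks = 'false'

-- ...copy or move Sales_Data.mdf and Sales_Log.ldf to their new homes...

-- Reattach, pointing at the new physical locations.
EXEC sp_attach_db @dbname = 'Sales',
    @filename1 = 'E:\SQLData\Sales_Data.mdf',
    @filename2 = 'F:\SQLLogs\Sales_Log.ldf'
```

Note that the paths you pass to sp_attach_db are wherever the files live now, which is exactly what lets you rearrange drives during the move.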
You use sp_attach_single_file_db if you only have the data file; it will create
the log file for you. If you have the log file, then you use sp_attach_db.
sp_attach_db supports up to 16 files (a combination of data files and log
files). If you have more than 16 files, you have to use CREATE DATABASE with
the FOR ATTACH option.
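As a quick sketch (database name and paths are made up), here is the single-file case and the more-than-16-files case:

```sql
-- Data file only: SQL builds a new log file in the default location.
EXEC sp_attach_single_file_db @dbname = 'Sales',
    @physname = 'E:\SQLData\Sales_Data.mdf'

-- More than 16 files: sp_attach_db can't handle it, so write the
-- CREATE DATABASE yourself. The primary file must be listed first.
CREATE DATABASE Sales
ON (FILENAME = 'E:\SQLData\Sales_Data.mdf'),
   (FILENAME = 'E:\SQLData\Sales_Data2.ndf')
   -- ...list each remaining data file here...
LOG ON (FILENAME = 'F:\SQLLogs\Sales_Log.ldf')
FOR ATTACH
```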
When I read about the 16-file limit in BOL, I was definitely curious – why
would there be a limit on the number of files? It turns out that sp_attach_db
has been coded to accept only 16 file names, and it basically just builds
dynamic SQL to execute a CREATE DATABASE with the FOR ATTACH option! I also
looked at sp_attach_single_file_db; it just does a simple CREATE DATABASE with
FOR ATTACH, letting SQL create the log file in the default location.
sp_detach_db isn’t as helpful; it’s just a wrapper around DBCC DetachDB, but
you can guess that it probably just removes the row from sysdatabases along
with other related entries in master.
Detaching and attaching databases in SQL 7 requires you to run the stored
procedures, but in SQL 2000 you can right-click to perform those operations –
very, very handy, since you have to provide the complete path and filename for
each file. Unfortunately, neither the stored procedures nor Enterprise Manager
allows you to specify filegroups when you attach. If you have filegroups,
you’re stuck with manually creating the SQL to do the CREATE DATABASE.
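That hand-written statement would look something like this sketch; the filegroup name, file names, and paths are all hypothetical, and the filegroup names have to match what is recorded in the primary data file:

```sql
-- Attach a database that has a secondary filegroup, with the files
-- spread across whatever drives the new server actually has.
CREATE DATABASE Sales
ON PRIMARY
   (FILENAME = 'E:\SQLData\Sales_Data.mdf'),
FILEGROUP Archive
   (FILENAME = 'G:\SQLData\Sales_Archive.ndf')
LOG ON
   (FILENAME = 'F:\SQLLogs\Sales_Log.ldf')
FOR ATTACH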
One potential problem with moving a database to another server is that your
SQL logins will not work correctly. Neil Boyle has addressed that issue; see
his article Fixing Broken Logins if you run into it.
So, in the worst case scenario, you’re using filegroups and you’re moving the
database to a server with a different logical drive configuration (or you want
to change how you’re using the drives you have) – what’s an easy way to do it?
One way that wouldn’t be too hard would be to script out the create database
statement, then modify the file locations and add the ‘for attach’ to it. In SQL
2000 you could use the Copy Database Wizard (which also avoids the SQL login
problem). Other than that – code? As I’ve demonstrated in other
articles, DMO gives you
the ability to generate scripts programmatically with the same options you get
when you script using Enterprise Manager. This would be faster than manually
scripting each database, but would still require you to customize it afterwards
(adding ‘for attach’ as a minimum, possibly changing file locations).
Short of writing my own version of the Copy Database Wizard, I couldn’t think
of a way to handle the file locations, so maybe I could at least get the ‘for
attach’ added.
So, there are two options. The first would be to use DMO to generate the
create database script, then open it using the FileSystemObject and append the
‘for attach’. The other would be to control the script process in code, walking
through the collections to get the information needed. I opted for plan B
because it gives me a chance to show you how the FileGroups, DBFiles, and
TransactionLog collections work, but it is a LOT of code compared to plan A!