Here's the what, without the why: we need to migrate several terabytes of data from existing MBR storage to GPT. We have a variety of environments, some in availability groups and some not, and we want to minimize, if not eliminate, downtime. To that end we have looked at a variety of "conventional" methods, from brute-force OS copy and reattach, to backup and restore to new media, to log shipping to a new database and renaming.

One suggestion did arise that I hadn't seen before: using the DBCC SHRINKFILE command. Has anyone ever tried this? In essence, you add a new file (on the GPT storage) to the existing filegroup(s) and then shrink the old one using the EMPTYFILE argument. As long as performance isn't brought to its knees, we aren't concerned with how long the process takes, provided the migration is successful.

Another consideration would be how to effect this same change on the AG databases, or whether we simply reseed them (some of these databases are over 1 TB in size). I have no issue with the file sizes; I would size the new file appropriately, given that our regular maintenance would correct any index fragmentation that occurs in the move.
We are open to suggestions. Anyone have a better solution? Any non-SQL solutions? Disk partitioning tricks?
This would be the proposed process for the [MoveMe] database, whose data currently resides in a single file named MoveMe_data at Y:\Data\MoveMe_data.mdf:
-- Add a new file on the GPT volume to the same filegroup
ALTER DATABASE [MoveMe]
ADD FILE ( NAME = N'MoveMe_data2', FILENAME = N'Z:\Data\MoveMe_data2.ndf', SIZE = 307200000KB, FILEGROWTH = 655360KB )
TO FILEGROUP [PRIMARY];

-- Migrate all data out of the old file into the remaining files of the filegroup
DBCC SHRINKFILE (N'MoveMe_data', EMPTYFILE);
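Presumably the final steps would be to confirm the old file is empty and then drop it. A sketch of those steps, assuming the shrink completes (one caveat worth testing in a lab first: MoveMe_data appears to be the database's primary .mdf file, and SQL Server does not allow the primary file to be removed, so this approach may only work cleanly for secondary files):

```sql
-- Check remaining space used in the old file (sizes are reported in 8 KB pages)
SELECT name,
       size / 128 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM sys.database_files
WHERE name = N'MoveMe_data';

-- Once emptied, remove the old file from the database
ALTER DATABASE [MoveMe] REMOVE FILE [MoveMe_data];
```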