I am aware of all the recommendations NOT to shrink data files, but this is for a test environment.
We have a weekly flow that takes a copy-only backup from production (mainly 3 big databases) and moves the backup files to our management environment. We have approx 3 million customers in our production database and about 50 different test environments. For most testing and development we only need a database with approx 200 persons, so I have created a "shrink masterdata" script, and it works as we want. However, the data file is still 200 GB, with 195 GB of free space. If I shrink it, I save almost 200 GB in each of our 50 test environments. That is a disk saving of roughly 10 TB, not to mention how much quicker restores and development will be. Of course we still need a full database for performance tests and so on, but not in all environments, and definitely not on local developer machines.
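For reference, we check the allocated size versus free space per file with a query against sys.database_files, something like this (values converted from 8 KB pages to MB):

-- Allocated size vs. free space per file in the current database (MB).
SELECT name,
       size / 128 AS size_mb,
       size / 128 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128 AS free_mb
FROM sys.database_files;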
So that is the reason for wanting to shrink the data files; hopefully you understand and can help us shrink the right way.
First we just tried to run DBCC SHRINKFILE, and it worked as we wanted, but it took about 15 hours.
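What we ran was essentially this (the logical file name is an example, and the target size is in MB):

USE [DBName];
-- Shrink the data file towards ~5 GB. SHRINKFILE relocates pages one at a
-- time from the end of the file, which is why it takes so long on a file
-- with 195 GB of scattered free space.
DBCC SHRINKFILE (N'DBName', 5120);

From what we have read, shrinking in smaller increments (e.g. 10 GB at a time in a loop) mainly makes the operation restartable; it does not make the page moves themselves faster.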
Then we were told it's better to create a new filegroup with its own data file and recreate all the indexes there. So we made a script that scripted out all the indexes, dropped them and recreated them on the new filegroup, but we filled the disk: a database that used to be 80 GB had grown to 950 GB!?
ALTER DATABASE [DBName] ADD FILEGROUP [Temporary];

ALTER DATABASE [DBName] ADD FILE (
    NAME = N'DBName2',
    FILENAME = N'D:\SQLDATA\DBName2.ndf',
    SIZE = 2097152KB,
    FILEGROWTH = 524288KB
) TO FILEGROUP [Temporary];
CREATE NONCLUSTERED INDEX [INDEXNAME]
ON [DBNAME].[dbo].[TABLENAME] (COLUMNNAME ASC)
INCLUDE (ID)
WITH (PAD_INDEX = OFF, ALLOW_PAGE_LOCKS = ON, ALLOW_ROW_LOCKS = ON,
      STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
      SORT_IN_TEMPDB = OFF, FILLFACTOR = 80)
ON [Temporary];
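One idea we are considering: for the clustered indexes, would CREATE INDEX with DROP_EXISTING = ON move the table and index to the new filegroup in one step, instead of the drop + recreate we scripted (which leaves each table as a heap in between)? The names below are placeholders from our script:

-- Rebuilds the existing clustered index and moves it (and the table data)
-- to the [Temporary] filegroup in one operation; the key columns and
-- uniqueness must match the original definition. SORT_IN_TEMPDB keeps the
-- sort work off the data drive.
CREATE UNIQUE CLUSTERED INDEX [PK_TABLENAME]
ON [dbo].[TABLENAME] (ID ASC)
WITH (DROP_EXISTING = ON, SORT_IN_TEMPDB = ON)
ON [Temporary];

Would that avoid the huge intermediate growth we saw?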
Any ideas on how best to proceed?
//Pelle