PerWidell asked

Best way to shrink datafile

I am aware of all the recommendations NOT to shrink data files, but this is for a test environment.

We have a weekly flow that takes a copy-only backup from production (mainly 3 big databases) and moves the backup files to our management environment. We have approx. 3 million customers in our production database and about 50 different test environments. For most testing and development we only need a database with approx. 200 persons. I have created a "shrink masterdata" script and it works as we want. However, the data file is still 200 GB but has 195 GB of free space. If I shrink it, I save almost 200 GB times our 50 test environments; that is a disk saving of 10 TB, not to mention how much quicker restores and development will be. Of course we still need a full database for performance tests and so on, but not in all environments, and definitely not on local developer machines.
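For context, the weekly copy step described above is typically done like this (database name and backup path are placeholders; COPY_ONLY keeps the production backup chain intact):

```sql
-- Copy-only full backup: does not affect the differential base
-- or the regular backup chain in production.
BACKUP DATABASE [ProdDB]
TO DISK = N'\\MgmtServer\Backups\ProdDB_copyonly.bak'
WITH COPY_ONLY, COMPRESSION, STATS = 10;
```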

So, that is the reason for wanting to shrink the data files; hopefully you understand and can help us shrink the right way.

First we just ran DBCC SHRINKFILE and it worked as we wanted, but it took about 15 hours.
Then we were told it is better to create a new filegroup with its own data file and recreate all indexes there. So we made a script that scripted out all the indexes, dropped them, and recreated them on the new filegroup, but we filled the disk: a database that used to be 80 GB had grown to 950 GB!
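A minimal example of such a shrink, assuming a logical data file name of DBName_Data (check sys.database_files for the real name) and a 5 GB target:

```sql
USE [DBName];
-- Shrink the data file down to ~5 GB (target size is in MB).
-- Single-threaded page movement is what makes this slow.
DBCC SHRINKFILE (N'DBName_Data', 5120);
```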

ALTER DATABASE [DBName] ADD FILEGROUP [Temporary];

ALTER DATABASE [DBName]
ADD FILE (
    NAME = N'DBName2',
    FILENAME = N'D:\SQLDATA\DBName2.ndf',
    SIZE = 2097152KB,
    FILEGROWTH = 524288KB
) TO FILEGROUP [Temporary];

CREATE NONCLUSTERED INDEX [INDEXNAME]
ON [DBNAME].[dbo].[TABLENAME] (COLUMNNAME ASC)
INCLUDE (ID)
WITH (PAD_INDEX = OFF, ALLOW_PAGE_LOCKS = ON, ALLOW_ROW_LOCKS = ON,
      STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
      SORT_IN_TEMPDB = OFF, FILLFACTOR = 80)
ON [Temporary];
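One alternative worth noting: instead of dropping and recreating each index, it can be moved to the new filegroup in a single operation with DROP_EXISTING = ON, and SORT_IN_TEMPDB = ON pushes the rebuild's sort work into tempdb instead of the user database, which helps avoid inflating its data file. A sketch, with the same placeholder names as above:

```sql
-- Rebuild the existing index directly onto the new filegroup,
-- sorting in tempdb so the user database's file does not grow.
CREATE NONCLUSTERED INDEX [INDEXNAME]
ON [dbo].[TABLENAME] (COLUMNNAME ASC)
INCLUDE (ID)
WITH (DROP_EXISTING = ON, SORT_IN_TEMPDB = ON, FILLFACTOR = 80)
ON [Temporary];
```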

Any ideas on how best to proceed?

//Pelle

shrink-database

1 Answer

Kev Riley answered

Shrinking a data file requires pages from the end of the file to be moved earlier in the file, so that the unused pages at the end can be removed. This is a single-threaded operation, and it is what makes most shrink operations 'slow'. Rebuilding indexes (on Enterprise Edition) can be parallelized and will try to make use of free space earlier in the file, so if you can work out which indexes sit at the 'end' of the file, it may be faster to rebuild those manually and then run the shrink.
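The approach above can be sketched as: rebuild the indexes that sit at the end of the file so their pages land in free space earlier in the file, then release the now-empty tail. TRUNCATEONLY only drops unallocated space at the end of the file without moving pages, so it is fast (index and file names are placeholders):

```sql
-- Rebuild an index near the end of the file; on Enterprise
-- Edition this can run in parallel, unlike a shrink.
ALTER INDEX [INDEXNAME] ON [dbo].[TABLENAME] REBUILD;

-- Release unused space at the end of the file without
-- moving any pages.
DBCC SHRINKFILE (N'DBName_Data', TRUNCATEONLY);
```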

This blog post by Paul Randal features a script, sp_AllocationMetadata, which may help you find those indexes. Note it is only an estimate: it finds each object's first page, which is no guarantee of where the rest of its pages are!
