DBCC ShrinkFile - better to do lots of little shrinks, or one big shrink?
Is it better to do one big shrink, or lots of little shrinks?

The back story: a database was created a while ago and is currently sized at around 200GB, of which only 120GB is used. The suppliers of the software it serves have advised us that we need to defragment the hard disk (98% fragmented, apparently). But in order to do that, I need to shrink the DB file by around 20GB (it's virtually the only thing on that drive...).

So: better to do it all at once, or in 20 1GB slices? Doing it in smaller chunks allows a bit more control over server load; however, the server is rarely stressed. My concerns are more around what it's going to do to the data within the database (if you see what I mean). The data within the database is almost entirely unfragmented, which is nice. However, an initial attempt to shrink the file (by just reclaiming unused space, as in the sketch below) resulted in no change in size, so I assume the data is currently spread throughout the DB file.

I realise there are no hard & fast answers here (so no bonus points for saying "It depends"), but what is the general feeling on this? What would *you* do?
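For reference, the attempt that reclaimed nothing was presumably a truncate-only shrink, something like this (database and file names are placeholders):

```sql
USE MyDB;  -- hypothetical database name
-- TRUNCATEONLY releases only the free space at the tail of the file and
-- moves no pages, which is why it reports "no change" when data pages
-- are spread right through the file.
DBCC SHRINKFILE (N'MyDB_Data', TRUNCATEONLY);
```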
To arrive at that much fragmentation, the growth must have been set to 1 MB or something similarly small (might be worth checking; a query for that is sketched below). But if you are really going to do a shrink operation, I'd do one, not a series. A series is more likely to add to your fragmentation problems.
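To see the growth setting, sys.database_files has it (run inside the database in question; sizes are stored in 8 KB pages, hence the arithmetic):

```sql
-- Show each file's size and autogrowth setting for the current database.
SELECT name,
       size * 8 / 1024 AS size_mb,                 -- size is in 8 KB pages
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' %'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END             AS growth_setting
FROM sys.database_files;
```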
Is it the LDF or the MDF that's too big? Are you in simple recovery mode? Is it all backed up so that the truncate can succeed? Personally, I'd try a 1GB step; if that worked well, I'd go for 2GB and then 4GB, just to take it easy on the system. Something along those lines is sketched below.
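A sketch of that stepped approach, assuming a data file called MyDB_Data currently at around 200GB (DBCC SHRINKFILE takes its target size in MB):

```sql
USE MyDB;  -- hypothetical names and sizes throughout
DBCC SHRINKFILE (N'MyDB_Data', 203776);  -- ~200GB minus 1GB
-- if that behaves itself, take bigger bites:
DBCC SHRINKFILE (N'MyDB_Data', 201728);  -- minus a further 2GB
DBCC SHRINKFILE (N'MyDB_Data', 197632);  -- minus a further 4GB
```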
When the need arises for me to shrink a data file (after a huge delete, or after some user has reloaded the same 10 gig dozens of times), I use a loop: I put in my current size, destination size, the size to shrink by (500 MB), the name of the file, etc., and let it loop through. That way I can see the progress as it shrinks. I can provide you a copy of the script when I get back to my office if you would like; in the meantime, a rough sketch of the idea is below.
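A minimal sketch of that kind of loop (file name and sizes are placeholders, not the actual script):

```sql
DECLARE @current_mb int = 204800,   -- current file size in MB
        @target_mb  int = 184320,   -- destination size in MB
        @step_mb    int = 500;      -- how much to shrink per pass

WHILE @current_mb > @target_mb
BEGIN
    -- Don't overshoot the destination on the final pass.
    SET @current_mb = CASE WHEN @current_mb - @step_mb < @target_mb
                           THEN @target_mb
                           ELSE @current_mb - @step_mb
                      END;

    DBCC SHRINKFILE (N'MyDB_Data', @current_mb);

    -- Emit progress immediately rather than waiting for the batch to finish.
    RAISERROR('Shrunk to %d MB', 0, 1, @current_mb) WITH NOWAIT;
END;
```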
*... (98% fragged, apparently).....* Do you only see the DB side of things, with no way to confirm their fragmentation claims yourself? I couldn't believe a claim like that without proof. If the MDF is 200GB, was it presized to that or has it grown? If you can afford the space and time, create a new filegroup and file (presized to what you will need plus breathing room) and move all the objects from FG1 to FG2. You can then empty FG1 and shrink it into oblivion; a rough outline is sketched below. Otherwise I would do the same thing as @Fatherjack. Either way, it's something I would leave to off hours.
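The filegroup shuffle would look roughly like this (all names and sizes invented; each table moves when its clustered index is rebuilt onto the new filegroup):

```sql
-- Add the new, presized filegroup and file.
ALTER DATABASE MyDB ADD FILEGROUP FG2;
ALTER DATABASE MyDB
    ADD FILE (NAME = N'MyDB_Data2',
              FILENAME = N'E:\Data\MyDB_Data2.ndf',
              SIZE = 130GB)                 -- used space plus breathing room
    TO FILEGROUP FG2;

-- Repeat per table: rebuilding the clustered index on FG2 moves the table.
CREATE UNIQUE CLUSTERED INDEX PK_SomeTable
    ON dbo.SomeTable (SomeKey)
    WITH (DROP_EXISTING = ON)
    ON FG2;

-- If FG1 is the primary filegroup, its file keeps the system pages and can't
-- be removed outright, but once the user objects are gone it can be shrunk
-- right down.
DBCC SHRINKFILE (N'MyDB_Data', 1024);       -- e.g. down to 1GB
```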
How much time do you have to accomplish this, can the DB be taken down, and how much do you trust your backups? A radical approach would be to back up the DB, drop it, delete the MDF, defragment the drive, and restore from the backup (roughly as sketched below). This might be overkill for your scenario, but you can be assured of maximum defragmentation. Secondly, can SQL Server be stopped during the disk defragmentation? You're likely to achieve better results, and in less time.
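If you go that route, the moving parts are standard backup/restore commands, roughly like this (paths and logical file names are examples):

```sql
-- Take and verify a full backup before touching anything.
BACKUP DATABASE MyDB
    TO DISK = N'F:\Backups\MyDB_Full.bak'
    WITH CHECKSUM, INIT;
RESTORE VERIFYONLY
    FROM DISK = N'F:\Backups\MyDB_Full.bak' WITH CHECKSUM;

DROP DATABASE MyDB;
-- ...remove any leftover MDF/LDF files and defragment the drive here...

-- Restoring writes the files back out, now contiguously on the clean drive.
RESTORE DATABASE MyDB
    FROM DISK = N'F:\Backups\MyDB_Full.bak'
    WITH MOVE N'MyDB_Data' TO N'D:\Data\MyDB.mdf',
         MOVE N'MyDB_Log'  TO N'D:\Data\MyDB_log.ldf';
```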