DBCC ShrinkFile - better to do lots of little shrinks, or one big shrink?

Is it better to do one big shrink, or lots of little shrinks?

The back story: a database was created a while ago and is currently sized at around 200GB; only 120GB is used. We have been advised by the suppliers of the software this database serves that we need to defragment the hard disks (98% fragmented, apparently). But in order to do that, I need to shrink the DB file by around 20GB (it's virtually the only thing on that drive...)

So. Better to do it all at once, or in 20 1GB slices?

Doing it in smaller chunks allows for a bit more control over server load; however, the server is rarely stressed. My concerns are more around what it's going to do to the data within the database (if you see what I mean).

The data within the database is almost entirely unfragmented, which is nice. However, an initial attempt to shrink the file (by just reclaiming unused space) resulted in no change in size, so I assume that the data is currently spread throughout the DB file.
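For reference, the "just reclaiming unused space" attempt was essentially the following; the file name is a placeholder for whatever `sys.database_files` reports. TRUNCATEONLY only releases unused space at the end of the file without moving pages, which would explain the zero change if data sits near the end:

```sql
-- How much free space is actually inside the file?
-- (size and SpaceUsed are in 8KB pages, so divide by 128 for MB)
SELECT name,
       size / 128 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
FROM sys.database_files;

-- Release only unused space from the end of the file:
-- no pages are moved, so no effect if used pages sit at the end
DBCC SHRINKFILE (N'MyDatabase_Data', TRUNCATEONLY);
```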

I realise there are no hard & fast answers here (so no bonus points for saying "It depends"), but what is the general feeling on this? What would you do?


asked Feb 22, 2011 at 03:55 AM in Default

ThomasRushton ♦♦

"Need to shrink the DB file..." - because we have a short list of software we're allowed to use to do this stuff, and it's refusing to attempt the job without 10% free disk space.

Feb 22, 2011 at 04:05 AM ThomasRushton ♦♦

Sorry. It's the MDF that's oversized.

Feb 22, 2011 at 04:06 AM ThomasRushton ♦♦

Whichever methodology you choose... re-size the file at the end to prevent this possibility next time :)

Feb 22, 2011 at 06:19 AM Blackhawk-17

@Blackhawk-17 assuming, that is, that re-sizing the file back up to its original size doesn't introduce more fragmentation... ;-)

Feb 22, 2011 at 08:43 AM ThomasRushton ♦♦

5 answers:

To arrive at that much fragmentation the growth must have been set to 1mb or something (might be worth checking), but if you are really going to do a shrink operation, I'd do one, not a series. A series is more likely to add to your fragmentation problems.
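Checking the growth setting is quick; a sketch, run in the database in question (column arithmetic converts 8KB pages to MB):

```sql
-- Current size and autogrowth setting for every file in the database
SELECT name,
       size / 128 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + '%'
            ELSE CAST(growth / 128 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.database_files;
```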


answered Feb 22, 2011 at 04:30 AM

Grant Fritchey ♦♦

That is good to know that a series could add to fragmentation.

Feb 22, 2011 at 08:34 AM Tim

@Grant - I assume you're talking about the fragmentation of the data within the database, rather than the files... As @Trad says, good to know.

Feb 22, 2011 at 08:40 AM ThomasRushton ♦♦

When the need arises for me to shrink a data file (after a huge delete, or after some user has reloaded the same 10GB dozens of times), I use a loop: I supply the current size, destination size, shrink increment (500MB), file name, etc., and let it loop through. That way I can see the progress as it shrinks. I can provide you a copy of the script when I get back to my office, if you would like.
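The loop is roughly this shape; the sizes and file name here are placeholders, not the actual script:

```sql
-- Shrink a data file in fixed-size steps so progress is visible.
-- File name and sizes are placeholders; adjust for your database.
DECLARE @current_mb int = 204800,  -- current file size (200GB)
        @target_mb  int = 184320,  -- destination size (180GB)
        @step_mb    int = 500;     -- shrink increment

WHILE @current_mb > @target_mb
BEGIN
    SET @current_mb = CASE WHEN @current_mb - @step_mb < @target_mb
                           THEN @target_mb
                           ELSE @current_mb - @step_mb END;

    DBCC SHRINKFILE (N'MyDatabase_Data', @current_mb);

    -- WITH NOWAIT flushes the message immediately, so you see progress
    RAISERROR('Shrunk to %d MB', 0, 1, @current_mb) WITH NOWAIT;
END
```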


answered Feb 22, 2011 at 04:20 AM



"... (98% fragged, apparently) ..."

Do you only see the DB side of things and cannot confirm their claims on fragmentation?

I couldn't believe a claim like that without proof. If the MDF is 200GB, was it presized to that or has it grown?

If you could afford the space and time to do it, create a new filegroup (presized to what you will need + breathing room) and file and move all the objects from FG1 to FG2. You can then empty FG1 and shrink that into oblivion.

Otherwise I would do the same thing as @Fatherjack.

Something I would leave to off hours though.
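A sketch of the filegroup route, with hypothetical names and paths; note that system objects can't leave PRIMARY, so that file can never be emptied completely:

```sql
-- Create a new, pre-sized filegroup on the defragmented drive
ALTER DATABASE MyDatabase ADD FILEGROUP FG2;
ALTER DATABASE MyDatabase
    ADD FILE (NAME = N'MyDatabase_FG2',
              FILENAME = N'E:\Data\MyDatabase_FG2.ndf',
              SIZE = 130GB)
    TO FILEGROUP FG2;

-- Moving a clustered index moves the table with it; rebuild each
-- one onto FG2 (repeat per table/index on the original filegroup)
CREATE UNIQUE CLUSTERED INDEX PK_MyTable
    ON dbo.MyTable (id)
    WITH (DROP_EXISTING = ON)
    ON FG2;

-- Once the old filegroup is (as near as possible) empty,
-- shrink its file right down
DBCC SHRINKFILE (N'MyDatabase_Data', 1024);
```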


answered Feb 22, 2011 at 04:26 AM


I confirmed their claims by looking over the Infrastructure guy's shoulder...

The MDF has grown over the years. Not that I've got any proof of that, but the size & growth factor are a bit too weird for it not to have been "by design".

Space? What's that then? ;-)

Feb 22, 2011 at 05:23 AM ThomasRushton ♦♦

Have you looked at offloading some data while you're at it? Old log tables or some such that are filling up but no longer needed? Think of the backup space savings...

Feb 22, 2011 at 05:31 AM WilliamD

That's a discussion for another day. I think next Tuesday, to be specific.

Feb 22, 2011 at 05:33 AM ThomasRushton ♦♦

@WilliamD - Good idea with the FG solution.
@ThomasRushton - Can you set up a secondary FG and move a subset of the data to ease your space? Perhaps an active table or two?
I would go with a single shrink personally.

Feb 22, 2011 at 06:10 AM Blackhawk-17

Is it the LDF or the MDF that's too big? Are you in simple recovery mode? Is it all backed up, so that the truncate can succeed?

Personally I'd try a 1GB step; if that worked well, then go for 2GB, then 4GB, just to take it easy on the system.
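In DBCC SHRINKFILE terms each step is just a new target size in MB; e.g. for a 204800MB (200GB) file (file name hypothetical):

```sql
DBCC SHRINKFILE (N'MyDatabase_Data', 203776);  -- down 1GB
DBCC SHRINKFILE (N'MyDatabase_Data', 201728);  -- down another 2GB
DBCC SHRINKFILE (N'MyDatabase_Data', 197632);  -- down another 4GB
```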


answered Feb 22, 2011 at 04:04 AM

Fatherjack ♦♦

Also, just to make sure Paul Randall never sleeps again you could write a cursor to do it all :D

Feb 22, 2011 at 04:05 AM Fatherjack ♦♦

Is Paul likely to read this? Can we summon him, just to make sure? ;-)

Feb 22, 2011 at 04:06 AM ThomasRushton ♦♦

How much time do you have to accomplish this, and can the DB be taken down... and how much do you trust your backups?

A radical approach may be to back up the DB, drop it, delete the MDF, defragment the drive, and restore from the backup.

This might be overkill for your scenario but you can be assured of maximum de-fragmenting.

Secondly, can SQL Server be stopped during the defragmentation? You're more likely to achieve better results, and in less time.
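If downtime were available, the radical route might look like this; paths are hypothetical, and the backup is verified before anything is dropped:

```sql
-- Back up, verify, drop, defragment the drive, then restore
BACKUP DATABASE MyDatabase
    TO DISK = N'F:\Backups\MyDatabase_predefrag.bak'
    WITH CHECKSUM, INIT;

-- Confirm the backup is readable BEFORE dropping anything
RESTORE VERIFYONLY
    FROM DISK = N'F:\Backups\MyDatabase_predefrag.bak';

DROP DATABASE MyDatabase;

-- ... defragment the drive at the OS level, then:
RESTORE DATABASE MyDatabase
    FROM DISK = N'F:\Backups\MyDatabase_predefrag.bak'
    WITH RECOVERY;
```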


answered Feb 22, 2011 at 06:16 AM


24/7 environment - downtime? Hah! Chance would be a fine thing... Nice idea, though.

Feb 22, 2011 at 08:41 AM ThomasRushton ♦♦

Seen: 3148 times

Last Updated: Feb 22, 2011 at 03:55 AM
