Hi, I have a database that resided on a Windows Server 2003 box running SQL Server 2005 and sat at 15 GB for the longest time (about 12 years). I did a few things, including an upgrade to the application that uses the DB, changing the SQL compatibility level from 80 to 90, and changing the recovery model from full to simple. Now my database is growing uncontrollably: my mdf file is growing at a rate of 15 GB a week and is now up to 233 GB. I have no idea what to do.
I have run DBCC CHECKDB on it and it comes back with no errors: "CHECKDB found 0 allocation errors and 0 consistency errors in database." Is there another way to tell if something is corrupt? I have checked the logs and find everything clean.
To make things worse, I'm now on a new Windows Server 2008 R2 box running SQL Server 2008 R2 x64. I do have a backup from my old SQL 2005 server, but it was taken when the database had already reached 65 GB, and I do not have the original from when it was small. I'm worried I'm screwed.
Any suggestions on what to check would be appreciated. I did check with the vendor; they say my configuration is supported but are not willing to help. Thank you
asked Nov 08 '11 at 12:00 AM in Default
The changes you made shouldn't cause any sort of uncontrollable growth, but each needs to be considered.
Additionally, I suggest you take a look at the autogrowth settings of your DB data file(s) and log file(s). If they are on their default settings, you will have tiny 1 MB increments on the data files and 10% growth on the log, and percentage growth produces ever-larger increments that can feel exponential. Change these to sensible fixed growth values.
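For example, assuming a database named MyDB (substitute your own database name and logical file names, which sys.database_files will show you), switching to fixed increments looks like this; the 512 MB and 256 MB values are illustrative, not recommendations:

```sql
-- Check the current settings first (run in the context of the database):
SELECT name, type_desc,
       size * 8 / 1024 AS size_mb,
       growth, is_percent_growth
FROM sys.database_files;

-- Then set fixed growth increments (names and sizes are examples):
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_Data, FILEGROWTH = 512MB);

ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_Log, FILEGROWTH = 256MB);
```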
You mention you are on 2008 R2 but not which edition. Is it Standard or Enterprise Edition?
answered Nov 08 '11 at 12:10 AM
Autogrowth won't actually trigger unless the database genuinely needs the space, though.
It sounds to me as though it's the application itself that's the most likely culprit. 15GB a week? I've seen similar behaviour when some logging functionality was enabled in an application - this caused a massive increase in data size.
What is the growth pattern? Is it exponential, or is it linear but fast? Does it match with the caseload / transaction load / business load being put through the server?
Something that might help with troubleshooting is to take periodic (say, weekly) snapshots of table and index sizes and row counts.
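A sketch of how such a snapshot could be captured, using sys.dm_db_partition_stats (the snapshot table name here is made up; adjust to taste):

```sql
-- One-time setup: a table to hold the weekly snapshots (name is illustrative)
CREATE TABLE dbo.TableSizeSnapshot (
    capture_date  datetime      NOT NULL DEFAULT GETDATE(),
    table_name    nvarchar(520) NOT NULL,
    row_count     bigint        NOT NULL,
    reserved_mb   decimal(18,2) NOT NULL,
    used_mb       decimal(18,2) NOT NULL
);

-- Run weekly (e.g. from a SQL Agent job):
INSERT dbo.TableSizeSnapshot (table_name, row_count, reserved_mb, used_mb)
SELECT
    QUOTENAME(s.name) + '.' + QUOTENAME(t.name),
    -- row_count only from the heap/clustered index, to avoid double-counting
    SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END),
    SUM(ps.reserved_page_count) * 8 / 1024.0,   -- pages are 8 KB
    SUM(ps.used_page_count)     * 8 / 1024.0
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables  AS t ON t.object_id = ps.object_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
GROUP BY s.name, t.name;
```

Comparing week-over-week deltas from this table should make the fast growers obvious.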
Doing some analysis on that would almost certainly help identify the culprit(s). If you've got massive growth in specific tables, then you might find it useful to run a trace to see what code is hitting those tables.
If you find a table has lots of free space and isn't used, then check to see if there's a clustered index. If there isn't, can you make one?
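Finding heaps (tables with no clustered index) is straightforward; a heap's row in sys.indexes has index_id 0:

```sql
-- List all tables that have no clustered index
SELECT s.name AS schema_name,
       t.name AS table_name
FROM sys.tables  AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.indexes AS i ON i.object_id = t.object_id
WHERE i.index_id = 0          -- index_id 0 means the table is a heap
ORDER BY s.name, t.name;
```

Heaps don't reuse space from deletes as readily as clustered tables, which can leave large amounts of unreclaimed free space.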
answered Nov 08 '11 at 02:30 AM
The most likely issue is a change in what is stored.
Determine your largest tables and see what sort of information they contain.
Here is a quick query for that:
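A sketch using sys.dm_db_partition_stats, which is available from SQL Server 2005 onward, ranking tables by reserved space:

```sql
-- Top 20 tables by reserved space
SELECT TOP (20)
    QUOTENAME(s.name) + '.' + QUOTENAME(t.name) AS table_name,
    SUM(ps.reserved_page_count) * 8 / 1024.0    AS reserved_mb,
    SUM(CASE WHEN ps.index_id IN (0, 1)
             THEN ps.row_count ELSE 0 END)      AS row_count
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables  AS t ON t.object_id = ps.object_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
GROUP BY s.name, t.name
ORDER BY reserved_mb DESC;
```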
A quick guess, from unfortunate experience: XML documents that were previously stored on the filesystem may now be being stored in the DB.
answered Nov 08 '11 at 05:08 AM
You can run the query below to find which tables have grown huge -
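One way to do this is through sys.partitions and sys.allocation_units, which together show space per object; this is a sketch, and the numbers are approximate sums across in-row, row-overflow, and LOB allocations:

```sql
-- Top 20 user tables by total allocated space
SELECT TOP (20)
    OBJECT_SCHEMA_NAME(p.object_id) + '.'
        + OBJECT_NAME(p.object_id)      AS table_name,
    SUM(a.total_pages) * 8 / 1024.0     AS total_mb,   -- 8 KB pages
    SUM(a.used_pages)  * 8 / 1024.0     AS used_mb
FROM sys.partitions       AS p
JOIN sys.allocation_units AS a ON a.container_id = p.partition_id
WHERE OBJECTPROPERTY(p.object_id, 'IsUserTable') = 1
GROUP BY p.object_id
ORDER BY total_mb DESC;
```

A large gap between total_mb and used_mb points at reserved-but-unused space rather than actual data.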
You can check one of the built-in reports by right-clicking the database in SSMS and selecting the Disk Usage report. There is a section titled Data/Log Files Autogrow/Autoshrink Events; expand it and you will get dates, times, and size deltas.
This won't tell you why but at least you can see if there is a pattern of when the growths appear.
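That report reads the default trace under the covers, so you can also query the events directly, assuming the default trace is still enabled (it is by default):

```sql
-- Autogrow events from the default trace
-- EventClass 92 = Data File Auto Grow, 93 = Log File Auto Grow
DECLARE @path nvarchar(260);

SELECT @path = path
FROM sys.traces
WHERE is_default = 1;

SELECT
    t.StartTime,
    t.DatabaseName,
    t.FileName,
    te.name                    AS event_name,
    t.IntegerData * 8 / 1024.0 AS growth_mb,    -- IntegerData = pages grown
    t.Duration / 1000.0        AS duration_ms   -- Duration is in microseconds
FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
JOIN sys.trace_events AS te ON te.trace_event_id = t.EventClass
WHERE t.EventClass IN (92, 93)
ORDER BY t.StartTime DESC;
```

Bear in mind the default trace rolls over, so only recent history will be there.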
It is also conceivable that the old system had a process that purged stale data and it was not migrated. Again, check with the application owner or vendor about that.