First, let me say that this is not a debate about whether to have autogrowth turned on or off. For this post, we are assuming it is turned on. So for those of you who use the autogrowth option as a fail-safe, what size (in megabytes) do you set your databases to autogrow by? I have recently started getting control of my SQL environment and monitoring the amount of free space in my data and log files so that they do not have to autogrow. In the event that one does have to grow, what have you found to be the recommended growth increment that produces the least impact on your systems? I know that while the file is expanding I am taking a performance hit. I have read in a few places that 100 MB is good; I have seen recommendations in other spots for 250 MB. We all know that 10% is bad, but what is a good rule-of-thumb number? I figured that instead of reading different blogs and searching across Bing and Google, I would just ask my SSC community what you are doing. If anyone reading this will be at the Summit next week, hit me up on Twitter: @tradney
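For anyone wanting a baseline before tuning anything, here is one quick way to see what every file is currently set to (a sketch against `sys.master_files`; note that `size` and fixed `growth` are stored in 8 KB pages):

```sql
-- Current size and autogrowth setting for every file on the instance.
-- growth is in 8 KB pages for fixed growth, a percentage when is_percent_growth = 1.
SELECT  DB_NAME(database_id)            AS database_name,
        name                            AS logical_file_name,
        type_desc,
        size * 8 / 1024                 AS size_mb,
        CASE WHEN is_percent_growth = 1
             THEN CAST(growth AS varchar(10)) + ' %'
             ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
        END                             AS growth_setting
FROM sys.master_files
ORDER BY database_name, type_desc;
```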
To add to Magnus' answer, it also depends on whether the file is a log file or a data file, and on whether you are taking advantage of instant file initialization. A data file can be instantly initialized, so you can really get away with insanely large autogrow increments when taking advantage of this feature. All transaction log files, and data files without instant file initialization, must be zeroed out when grown, so care must be taken with their autogrow settings, per Magnus' guidance. Between data and log files, you can still typically get away with larger growth increments on data files, because pages can be allocated and data written to pages directly in memory while the files are expanded, so transactions can continue to complete. With the transaction log, all transactions must wait while the log is grown, because a transaction cannot commit until it has been written to disk.
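As a sketch of the kind of split described above (the database and logical file names are made up; pick increments that fit your storage):

```sql
-- Larger fixed increment on the data file, which grows quickly
-- when instant file initialization is enabled...
ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_Data, FILEGROWTH = 1GB);

-- ...and a more conservative increment on the log file,
-- which must always be zero-initialized when it grows.
ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_Log, FILEGROWTH = 256MB);
```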
Surprisingly, it depends :) It depends on the size of the current data and the expected size of tomorrow's data. It depends on your storage capacity in terms of speed and your storage capacity in terms of size. Only you know your data. I've had databases which are 100GB but with the autogrow parameter set to 100MB, because I don't really expect them to grow. I have also had databases sized at 1GB or less, with 1GB growth, because I DO expect them to grow. I have databases placed on really slow disks (because they are used for archiving only) and other databases placed on really fast disks. The quicker the disk, the faster the auto-growth, and therefore the bigger the autogrow parameter can be set. As far as I can say, there's really not ONE rule of thumb when it comes to autogrow parameters. I too would leave it turned on, but I'd keep an eye on the databases and grow them manually when needed. Autogrow should of course be kept at a level that the online transactions can handle, but not too small. Growth initiated by maintenance jobs or manual operations can be much bigger, since it won't be triggered by a transaction that has to wait for the auto-growth to finish.
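For the "grow them manually when needed" approach, something like this during a quiet window does the trick (names and target size are hypothetical; note that SIZE is the new total size, not an increment):

```sql
-- Manually grow a data file ahead of need, during a maintenance window,
-- instead of waiting for an autogrow event to fire mid-transaction.
ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_Data, SIZE = 120GB);
```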
Can you explain why 10% is bad? Surely the relevance of the % increase is that it takes into account the growth of data accumulating in a database? If my database starts at 10GB of data and I expect 1GB per year to be added to it, then I could decide that the server will last for 5 years and therefore size the database at 15GB (if I want to be close to the limit) or 20GB (if I want some breathing room to allow for an increase in the data-growth rate). If I am short of space, or for whatever reason I can't make the database 20GB on day one, then I could set the auto-growth to 10%, and this would add 1GB (essentially room for a year's worth of data). Years 2, 3, 4 and 5 will get 1.1GB, 1.21GB, 1.33GB, 1.46GB and so on; this delivers the original 1GB per year and also makes some allowance for any increase in annual growth. Setting it to a fixed 1GB value would have a similar effect, but ends up a little tighter on space as the years go by. I know a lot of people get uptight about auto settings, but I can see their uses/benefits, and as long as the values match what is required for reasonable growth (annual values), they don't hurt in the majority of cases.
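The compounding described above can be checked with a quick query (10 GB starting size, one 10% grow per year; the `1.1e0` literal forces float arithmetic so POWER doesn't truncate):

```sql
-- Size before each annual grow event, and the 10% increment it produces.
SELECT  n                                               AS year_number,
        CAST(10 * POWER(1.1e0, n - 1) AS decimal(6,2))  AS size_before_grow_gb,
        CAST(10 * POWER(1.1e0, n - 1) * 0.1 AS decimal(6,3)) AS grow_increment_gb
FROM (VALUES (1), (2), (3), (4), (5)) AS years(n);
-- Increments come out as 1.0, 1.1, 1.21, 1.331, 1.4641 GB.
```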
T-logs first. The general advice seems to be to work out what you're likely to need in dev, and then put that into production. For VLDBs, start off with an 8GB t-log and set autogrowth to 8GB. And try to pre-empt autogrowth, because of the performance impact as the newly extended t-log has to be zero-filled. DB files - "it depends". Where possible, enable instant file initialization (IFI). Again, try to manage the growth so that it can happen outside production hours (less of an issue with IFI). Some DBAs recommend initially sizing your database big enough to take two years' worth of growth. Nice idea if you can! Oh, and whatever you do - don't forget to keep an eye on TempDB. If it can't grow when it needs to, you're in a whole heap of trouble. TempDB is recreated at its configured size on every start-up, so set that size so it starts out at the right size.
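Pre-sizing TempDB is just another MODIFY FILE; a sketch using the default logical file names (`tempdev`/`templog`), with sizes that are illustrative only:

```sql
-- TempDB is recreated at its configured SIZE on every instance restart,
-- so set it once and it comes up at the right size from then on.
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 8GB, FILEGROWTH = 1GB);
ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, SIZE = 2GB, FILEGROWTH = 512MB);
```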