question

gotqn asked

Fix transaction log file after database update

I have databases that are updated regularly (monthly). An update includes a lot of new or changed T-SQL procedures, functions, index creation, etc. A `database X` has a transaction log file with the following settings:

- unlimited growth
- initial size 128 MB
- growth in 128 MB increments

Space used is at most 5-10% of the file (a backup of the transaction log is made every 15 minutes). The exact settings vary per database, but in general the log file size is sufficient and the file is not growing.

After an `UPDATE` is made, the transaction log file grows by 1, 2, or 3 GB. Since I do not need such a big file for regular use of the database, I shrink it back like this:

```sql
DBCC SHRINKFILE(2, TRUNCATEONLY)
ALTER DATABASE X MODIFY FILE (NAME = N'x_Log', SIZE = 128MB)
```

I want to execute the code above after each update, but sometimes I need to execute it many times in order to reduce the tlog file (each time I have to wait 15 minutes for the tlog backup to complete). The databases are in `FULL recovery` mode with `Always On`, and the tlog backups are made on the secondary databases.

Could anyone tell me whether the idea of a regular shrink is bad, and why I sometimes need to execute the shrink several times?
sql-server-2012 · transaction-log · shrink-log-file · database-growth


1 Answer

Wilfred van Dijk answered
The `TRUNCATEONLY` option only removes the free space at the end of the log file, so you're not actually shrinking the active part of the log. If you've made a log backup, run a `CHECKPOINT` and try the statement again.

So much for the explanation. However, if you have this update each month, why not leave the log file at that size? Shrinking and re-growing the log file is a very expensive operation, and with these growth increments you'll probably end up with a lot of virtual log file (VLF) fragmentation. I suggest:

1. After a log backup, shrink your log file to the smallest size you can.
2. Set an initial size for your log file that is large enough for your update actions (probably 3 GB).

This results in a log file with minimal VLF fragmentation.
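The two steps above could be sketched as follows. This is only a sketch, assuming (as in the question) that the log file has file ID 2 and the logical name `x_Log`, that the target size is 3 GB, and that a log backup has just completed:

```sql
USE X;

-- Flush dirty pages so the active portion of the log can wrap around.
CHECKPOINT;

-- Step 1: shrink the log file (file ID 2 here) as small as possible.
DBCC SHRINKFILE (2, 0);

-- Step 2: re-grow it to the working size in one step, so the log is
-- rebuilt with a small number of large VLFs instead of accumulating
-- many small ones through repeated 128 MB autogrowth events.
ALTER DATABASE X MODIFY FILE (NAME = N'x_Log', SIZE = 3072MB);
```

Growing to the target size in a single `ALTER DATABASE` statement is what keeps the VLF count low. On SQL Server 2012 you can inspect the resulting VLF count with `DBCC LOGINFO;`, which returns one row per VLF.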
3 comments


I'm with Wilfred on this. If your logs are constantly growing, then you need that much log space, at least some of the time. Are they empty other parts of the time? Yes. But you're negatively impacting the entire system by requiring the logs to grow over and over, every month. Plus, as Wilfred correctly pointed out, large numbers of VLFs will seriously impact performance. Disks are cheap. Pay that cost up front, set the logs to the right size, and leave them there.
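To pick the "right size" rather than guess, it helps to measure how large each log actually gets during the monthly update. One way to do that on SQL Server 2012 is:

```sql
-- Reports, for every database on the instance, the current log size
-- in MB and the percentage of it in use. Running this during and
-- after the monthly update shows the peak log size each database
-- actually needs, which is the size to set up front.
DBCC SQLPERF (LOGSPACE);
```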
Because having more than 100 databases, each with a tlog of (let's say) 3 GB, means 300 GB of space. And when 99% of the time only a tiny fraction of that is used, it does not look optimal to me. By `shrink your logfile to the smallest size you can`, do you mean `SHRINKFILE(2,0)`?
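For what it's worth, `DBCC SHRINKFILE(2, 0)` does behave differently from the `TRUNCATEONLY` variant used in the question. A sketch, again assuming file ID 2 is the log file:

```sql
-- For a log file, shrinking toward a target size of 0 removes
-- inactive VLFs from the end of the file, down toward the start of
-- the active portion of the log. It cannot release VLFs that are
-- still active, which is why the shrink sometimes has to be repeated
-- after further log backups have freed more VLFs.
DBCC SHRINKFILE (2, 0);
```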
Thank you both for the answers.


