I have databases that are updated regularly (monthly). An update includes many new or changed T-SQL procedures, functions, index creations, etc. A `database X` has a transaction log file with the following settings:

- unlimited growth
- initial size 128 MB
- growth in 128 MB increments

with space used at most 5-10% of the file (a backup of the transaction log is made every 15 minutes). The settings above are specific to each database, but in general the log file size is sufficient and does not grow. After an `UPDATE` is made, the transaction log file grows by 1, 2 or 3 GB. Since I do not need such a big file for regular use of the database, I shrink it back like this:

```sql
DBCC SHRINKFILE(2, TRUNCATEONLY);
ALTER DATABASE X MODIFY FILE (NAME = N'x_Log', SIZE = 128MB);
```

I want to execute the code above after each update, but sometimes I need to run it many times in order to reduce the tlog file (each time I have to wait 15 minutes for the next tlog backup to complete). The databases are in `FULL recovery` mode with `Always On`, and the tlog backups are made on the secondary replicas.

Could anyone tell me whether the idea of a regular shrink is bad, and why I sometimes need to execute the shrink several times?
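For what it's worth, between shrink attempts I check why the log cannot be truncated yet (a diagnostic query against a standard catalog view, not part of the update itself):

```sql
-- Shows what is currently preventing log truncation for database X:
-- 'LOG_BACKUP' means the log is waiting for the next tlog backup;
-- 'AVAILABILITY_REPLICA' means a secondary replica has not yet
-- hardened/redone the log records.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'X';
```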
The `TRUNCATEONLY` option only removes free space at the *end* of the log file, so you're not actually squeezing the log file down. If you've made a log backup, run a `CHECKPOINT` and try the statement again.

So much for the explanation, however... If you have this update every month, why not leave the log file at that size? Shrinking / expanding the log file is a very expensive operation, and with this growth increment you'll probably end up with a lot of virtual log file (VLF) fragmentation.

I suggest:

1. after a log backup, shrink your log file to the smallest size you can
2. set an initial size for your log file that is large enough for your update actions (probably 3 GB)

This results in a log file with minimal VLF fragmentation.
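The two steps above could be sketched like this, reusing the logical log file name `x_Log` from the question (a sketch, assuming a log backup has just completed; adjust names and sizes to your environment):

```sql
USE X;

-- Flush dirty pages so the active portion of the log can wrap around,
-- making more of the file shrinkable.
CHECKPOINT;

-- Shrink the log file as far as possible (target size 0 means
-- "as small as it can go", not literally zero).
DBCC SHRINKFILE (N'x_Log', 0);

-- Regrow in a single step to a size large enough for the monthly
-- update, so the log is rebuilt with few, evenly sized VLFs instead
-- of hundreds of small 128 MB increments.
ALTER DATABASE X MODIFY FILE (NAME = N'x_Log', SIZE = 3GB);
```

Growing once to the final size, rather than in many small autogrowth steps, is what keeps the VLF count low.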