I developed a mechanism which executes a log backup if the log size is above a threshold. I use DBCC SQLPERF(LOGSPACE) to calculate how much of the log is in use. After a log backup I assume the percentage used is reduced (at least for the committed part). However, this is not the case for some (SharePoint) databases, even though there are NO active transactions. As a result, I get a lot of very small log backups.
1) Does somebody have an explanation for this? 2) Is there a way to calculate the committed size of a log backup?
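For reference, the check I run looks roughly like this (database name, threshold, and backup path are illustrative; DBCC SQLPERF(LOGSPACE) returns one row per database with the log size in MB and the percentage in use):

```sql
-- Capture DBCC SQLPERF(LOGSPACE) output into a temp table and
-- take a log backup only when usage crosses a threshold.
CREATE TABLE #logspace
(
    database_name sysname,
    log_size_mb   float,
    log_used_pct  float,
    status        int
);

INSERT INTO #logspace
EXEC ('DBCC SQLPERF(LOGSPACE)');

IF EXISTS (SELECT 1
           FROM #logspace
           WHERE database_name = 'MyDatabase'   -- illustrative name
             AND log_used_pct  > 50.0)          -- illustrative threshold
BEGIN
    BACKUP LOG MyDatabase
        TO DISK = 'D:\Backups\MyDatabase.trn';  -- illustrative path
END;

DROP TABLE #logspace;
```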
In general, you should be doing log backups on a regular basis aimed at recovery of your data, not at maintaining the size of the log file itself. If you don't need point-in-time recovery, set the database to the SIMPLE recovery model and then you don't have to worry about log backups any more.
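Switching the recovery model is a one-line change (database name is illustrative); note that under SIMPLE you lose the ability to restore to a point in time:

```sql
-- Under SIMPLE, the log is truncated automatically at checkpoint,
-- so no log backups are needed (or possible).
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;
```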
As to the issue: because of the way the log is written, the most recent entries can sit at the very end of the file. So even after a backup, you've only cleared the earlier entries, which are earlier in the file; the backup won't move the active portion away from the end. If you were to issue another checkpoint and then do another log backup, you'd likely be able to shrink the file.
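The checkpoint-then-backup sequence might look like this (database, log file name, and path are illustrative; the DBCC SHRINKFILE target is in MB and only succeeds once the active portion of the log has wrapped back toward the start of the file):

```sql
USE MyDatabase;

CHECKPOINT;  -- flush dirty pages and advance the checkpoint LSN

BACKUP LOG MyDatabase
    TO DISK = 'D:\Backups\MyDatabase.trn';      -- illustrative path

-- Only if a one-off shrink is genuinely needed (see the caveat below
-- about repeated shrinks): target size of 512 MB is illustrative.
DBCC SHRINKFILE (MyDatabase_log, 512);
```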
By the way, shrinking the file over and over again creates massive fragmentation. Also, assuming you've got auto-grow turned on for this log, you're probably looking at an ever-increasing number of VLFs. I'd check that. Michelle Ufford has a script for looking at it.
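A quick way to count VLFs is DBCC LOGINFO, which returns one row per VLF for the current database (the column layout below matches SQL Server 2008/2008 R2; later versions add a leading RecoveryUnitId column, so adjust the temp table accordingly):

```sql
-- Each row of DBCC LOGINFO is one virtual log file (VLF);
-- a count in the thousands usually indicates autogrow churn.
CREATE TABLE #loginfo
(
    FileId      int,
    FileSize    bigint,
    StartOffset bigint,
    FSeqNo      int,
    Status      int,     -- 2 = active VLF, 0 = reusable
    Parity      int,
    CreateLSN   numeric(38, 0)
);

INSERT INTO #loginfo
EXEC ('DBCC LOGINFO');

SELECT COUNT(*) AS vlf_count FROM #loginfo;

DROP TABLE #loginfo;
```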
answered Feb 06, 2012 at 11:09 AM
Grant Fritchey ♦♦