csaha2 asked

Shrink log file

I used the following command to shrink the log file:

```sql
USE [xxx xxx]
GO
DBCC SHRINKFILE (N'xxx xxx_log', 0, TRUNCATEONLY)
GO
```

The log file shrinks with no error. However, I wanted to build this into a maintenance plan as a scheduled task, and I get error messages when running the following query:

```sql
USE [xxx]
GO
DBCC SHRINKFILE (N'xxx_log', 0, TRUNCATEONLY)
GO
USE [xxxx]
GO
DBCC SHRINKFILE (N'xxxx_log', 0, TRUNCATEONLY)
GO
```

Error:

> Executing the query `DBCC SHRINKFILE (N'xxx_log', 0, TRUNCATEONLY)` failed with the following error: "Could not locate file 'xxx_log' for database 'master' in sys.database_files. The file either does not exist, or was dropped." Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

How do I correct this error? The log file names that I have used are correct.
Tags: sql-server-2005, transaction-log, dbcc

Tim answered
You should really reconsider your plan to shrink your transaction log as a maintenance task. Transaction logs grow for very valid reasons. Instead, consider setting up a job to back up your transaction log on a regular interval. The right interval depends on your environment, but somewhere around 15 minutes is a good place to start. Full backups do not truncate the transaction log, so unless you are backing it up on a regular basis it will continue to grow until you are out of space, forcing you to issue the above command to truncate and then shrink.
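A scheduled log backup along these lines is what the answer above is describing - a minimal sketch, where the database name and backup path are placeholders to adapt (in practice you would run this from a SQL Agent job and append a timestamp to the file name so each backup is kept):

```sql
-- Back up the transaction log on a regular interval (e.g. every 15 minutes
-- via a SQL Agent job). Regular log backups let SQL Server reuse log space,
-- which removes the need to shrink. Names and path are placeholders.
BACKUP LOG [YourDatabase]
TO DISK = N'E:\Backup\Log\YourDatabase_log.trn'
WITH CHECKSUM;
```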
5 comments

Are you really trying to shrink the log of your master database? That is what your error is telling you: "Could not locate file 'xxx_log' for database 'master' in sys.database_files." If you are truly trying to shrink the log file for the master database, then you need to use the proper logical name for its log file: 'mastlog'.
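A quick way to double-check is to list the logical log file names the instance actually has - a sketch, assuming sufficient permissions ('mastlog' is the default logical name of master's log on SQL Server 2005; all other names are placeholders):

```sql
-- List the logical log file name for every database on the instance;
-- the name passed to DBCC SHRINKFILE must match the [name] column exactly.
SELECT DB_NAME(database_id) AS database_name,
       name AS logical_file_name
FROM sys.master_files
WHERE type_desc = 'LOG';

-- If shrinking master's log really is intended:
USE [master];
DBCC SHRINKFILE (N'mastlog', 0, TRUNCATEONLY);
```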
I have a separate job created called shrinklogfile, and using the above command it fails with the above error. I am also backing up log files every hour. What am I missing? The log files are deleted once a day from the server.

To answer your question: you are getting the error either because you are in the wrong database when issuing the shrink command, or because the log file name you have listed is not correct. What is the root cause of your need to shrink the log file? Is it just due to not having enough drive space?

All the SHRINKFILE statements run fine without errors when I use the query window. However, when I build them as a T-SQL statement task, I get the error message above.

I believe this hit the nail on the head. I'd also like to note that logs can be an enemy when not managed. Drive space is a valuable asset in most cases where databases are concerned.
Grant Fritchey answered
I would take a different approach entirely. Why are you shrinking your logs all over the place? Assuming you have the databases in simple recovery or you have log backups in place, logs generally grow to the size that they need to be in order to support the transactions within them. Yes, there are exceptions where the logs grow out of proportion due to some odd process or rare error, but most of the time your logs are growing to a size because they need to be that big. By shrinking them over & over, you're forcing them to regrow, which will cause blocking while that operation occurs. I would rethink the whole process of automating shrinks.

Matt Whitfield answered
I just wanted to add weight to what the other guys said, but also to add that not only should you not shrink log files, you should size them appropriately to start with - for the simple reason that each time a log file grows, VLFs get added into the log, and having large numbers of VLFs in a log file isn't actually a healthy thing. Can I point you to further reading at SQL Skills:

- [Transaction Log VLFs - too many or too few?][1]
- [Inside the Storage Engine: More on the circular nature of the log][2]
- [Bug: log file growth broken for multiples of 4GB][3]

[1]: http://www.sqlskills.com/BLOGS/KIMBERLY/post/Transaction-Log-VLFs-too-many-or-too-few.aspx
[2]: http://www.sqlskills.com/BLOGS/PAUL/post/Inside-the-Storage-Engine-More-on-the-circular-nature-of-the-log.aspx
[3]: http://www.sqlskills.com/BLOGS/PAUL/post/Bug-log-file-growth-broken-for-multiples-of-4GB.aspx
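To see how many VLFs a log currently has, one commonly used check is `DBCC LOGINFO` (undocumented, but available on SQL Server 2005; the database name is a placeholder):

```sql
USE [YourDatabase];
GO
-- Returns one row per virtual log file in the transaction log.
-- A very high row count (hundreds or thousands) suggests the log
-- grew in many small increments and may be worth resizing once, properly.
DBCC LOGINFO;
```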

Blackhawk-17 answered
Your issue may be that the account the job runs under does not have permissions at the O/S level to the files. That being said... **don't unnecessarily shrink log files.**
6 comments

csaha2: Just because a vendor tells you to do something harmful to your database does not mean you have to do it. If it is a production database, leave it in FULL recovery and take t-log backups every 15 minutes. Your problems will be solved and you can sleep at night knowing you can recover your database in case of emergency. I assume you will be the one handling the emergency, and responding to "WHY DID WE LOSE ALL OUR WORK!" from angry users, not the vendor.

Is management in agreement that 4 hours of data loss for the entire database is acceptable? They might not think about it now, but during a real problem they might change their tune. How much time will all the users need to identify the last things they did, then re-do all their work?

Full recovery is useful for more than just big disasters. Suppose one table gets updated incorrectly. You can restore the database with a new name, apply transaction logs to the point in time, then copy the table back to the prod database. With simple, you can only go back to the full or diff backup, not point in time.
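The point-in-time restore described above might look like this - a sketch only, where every database name, logical file name, path, and the STOPAT time are placeholders:

```sql
-- Restore a copy of the database under a new name, leaving it able to
-- accept log restores (NORECOVERY), then roll the log forward to just
-- before the bad update.
RESTORE DATABASE [Prod_Copy]
FROM DISK = N'E:\Backup\Full\Prod.bak'
WITH MOVE N'Prod'     TO N'E:\Data\Prod_Copy.mdf',
     MOVE N'Prod_log' TO N'E:\Data\Prod_Copy.ldf',
     NORECOVERY;

RESTORE LOG [Prod_Copy]
FROM DISK = N'E:\Backup\Log\Prod_log.trn'
WITH STOPAT = '2011-05-11 11:00:00', RECOVERY;
```

After recovery, the affected table can be copied from `Prod_Copy` back into the production database.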
Thank you for the expert comments, but I have to shrink the log files after the full backup according to the vendor. When I added the T-SQL statement after the backup I get the following error:

```
Executed as user: NT AUTHORITY\NETWORK SERVICE. ...00 for 32-bit Copyright (C) Microsoft Corp 1984-2005. All rights reserved. Started: 11:40:15 AM
Progress: 2011-05-11 11:40:16.87 Source: {7B133364-E3FF-4602-ACB2-BA1F3BDBA50D} Executing query "DECLARE @Guid UNIQUEIDENTIFIER EXECUTE msdb..sp".: 100% complete End Progress
Progress: 2011-05-11 11:40:17.29 Source: Back Up Database (Full) Executing query "EXECUTE master.dbo.xp_create_subdir N'E:\Backup\Fu".: 100% complete End Progress
Progress: 2011-05-11 11:40:17.30 Source: Back Up Database (Full) Executing query "EXECUTE master.dbo.xp_create_subdir N'E:\Backup\Fu".: 100% complete End Progress
Progress: 2011-05-11 11:40:58.18 Source: Back Up
```
I would love to hear the reason the vendor gives for this requirement, as its only effect is to slow the future performance of the system while the log file has to grow again.
Ahh... 'the vendor' - committer of many crimes...
It would be safe for me to switch to simple mode from full recovery mode. I would be taking a full backup once a day, a differential backup every 8 hours, and system snapshots every 4 hours. This should give us only a 4-hour loss. But my concern is: can I switch the recovery mode from full to simple anytime during business hours?
KenJ answered
If you really *must* shrink databases, or even just the transaction logs, after every full backup - almost always a terrible idea, as attested to in every answer here - use a maintenance plan shrink database task for it, instead of tacking a custom shrink script onto the process. There is a screen in the maintenance plan wizard to shrink databases, and it doesn't get the syntax wrong. You can even tell it to shrink only user databases (or just the vendor database):

![alt text][1]

Unless the vendor is responsible for all the SQL Server performance, availability and support, including backups and shrinking, they should have no involvement with your master database or its log file.

[1]: /upfiles/shrink-database-task.png

**Long edit in response to comment...**

Are you currently using the full recovery model with no transaction log backups? That leads to log file size problems, which may be where the question came from.

To decide between full recovery with transaction log backups and simple recovery, you need to know your tolerance for data loss. If you need point-in-time recovery, or even 15-minute recovery, you will be well served by the full recovery model with 15-minute transaction log backups. If you can lose all the data since your last full or differential backup, you will be well served by the simple recovery model with periodic full backups (you can use differential backups to get a finer recovery granularity without the overhead of running full backups several times per day).

One thing about the simple recovery model is that it automatically recycles the transaction log as portions become inactive. Barring extremely long-running transactions (hours or days), this keeps the log from growing out of control like it would under full recovery without transaction log backups.

Regardless of the approach you take, do continue to monitor, as your actual log file activity will be specific to your environment and workload.
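Switching between the two recovery models discussed above is a single statement each way - a sketch, with the database name as a placeholder (note that the log backup chain only restarts after the next full or differential backup once you return to FULL):

```sql
-- Switch to the simple recovery model (breaks the log backup chain;
-- the log will now recycle itself as portions become inactive).
ALTER DATABASE [YourDatabase] SET RECOVERY SIMPLE;

-- Switch back to the full recovery model. Point-in-time recovery is
-- only possible again after the next full or differential backup.
ALTER DATABASE [YourDatabase] SET RECOVERY FULL;
```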
5 comments

Awesome - I would +5 this if it were possible.
Well then, it looks like if log truncation is a terrible idea, there are two options left. I can convert the database to simple mode, take a full backup maybe twice a day, and not worry about the log file because simple mode will commit to the database. Or I can try taking frequent transaction log backups (every 15 minutes) and watch the log file for a while. Please advise. If I switch from full to simple I will not lose any data, right?
I added this information to the answer.
@KenJ, fantastic answer and detailed explanation of recovery models.
@csaha2 - the transaction log is a hardening technique to allow SQL Server to get back to the work of processing new requests. Whatever recovery mode you use you will not lose data during normal operations. Where it comes into play is if an emergency arises. As @KenJ said it depends on how much data you can afford to lose if your system crashed and you needed to rebuild from backups. If 24 hours is acceptable then simple and a nightly backup might be your answer. If this system is uber-critical then you need to run under the full recovery model and take transaction log backups as often as possible. What is your tolerance for data loss in the event of an emergency?
