I used the following command to shrink the log file:

```sql
USE [xxx xxx]
GO
DBCC SHRINKFILE (N'xxx xxx_log', 0, TRUNCATEONLY)
GO
```

The log file shrinks with no error. However, I wanted to set this up as a scheduled task in a maintenance plan, and I get error messages when running the following query:

```sql
USE [xxx]
GO
DBCC SHRINKFILE (N'xxx_log', 0, TRUNCATEONLY)
GO
USE [xxxx]
GO
DBCC SHRINKFILE (N'xxxx_log', 0, TRUNCATEONLY)
GO
```

Error:

> Executing the query `DBCC SHRINKFILE (N'xxx_log', 0, TRUNCATEONLY)` failed with the following error: "Could not locate file 'xxx_log' for database 'master' in sys.database_files. The file either does not exist, or was dropped.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

How do I correct this error? The log file names that I have used are correct.
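(For what it's worth: the error says the file was looked up "for database 'master'", which suggests the maintenance plan's T-SQL task executed the `DBCC SHRINKFILE` batch in the default `master` context rather than honoring the preceding `USE`. One workaround, sketched here with the same placeholder names, is to keep the context switch and the shrink inside a single executed string so they share one scope:)

```sql
-- Placeholder database/file names ([xxx], 'xxx_log') as in the question.
-- EXEC runs the whole string as one batch, so the USE takes effect
-- before DBCC SHRINKFILE runs.
EXEC (N'USE [xxx]; DBCC SHRINKFILE (N''xxx_log'', 0, TRUNCATEONLY);');
EXEC (N'USE [xxxx]; DBCC SHRINKFILE (N''xxxx_log'', 0, TRUNCATEONLY);');
```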
You should really reconsider your plan to have a maintenance task shrink your transaction log. Transaction logs grow for very valid reasons. Instead, consider setting up a job to back up your transaction logs at a regular interval; the right frequency depends on your environment, but somewhere around every 15 minutes is a good place to start. Full backups do not truncate the transaction log, so unless you are backing the log up on a regular basis it will continue to grow until you run out of space, forcing you to issue the command above to truncate and then shrink it.
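A minimal log-backup job step might look like this (a sketch only; the database name `[xxx]` and the backup path are placeholders for your own):

```sql
-- Placeholder database name and path; schedule via a SQL Server Agent job
-- every ~15 minutes. Each log backup marks the inactive portion of the
-- log as reusable, which keeps the file from growing without bound.
BACKUP LOG [xxx]
TO DISK = N'D:\Backups\xxx_log.trn';
```

In practice you would generate a unique file name per backup (e.g. with a timestamp) so each log backup in the chain is preserved for point-in-time restores.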
I would take a different approach entirely. Why are you shrinking your logs all over the place? Assuming you have the databases in simple recovery, or you have log backups in place, logs generally grow to the size they need to be in order to support the transactions within them. Yes, there are exceptions where a log grows out of proportion due to some odd process or rare error, but most of the time your logs are growing to a given size because they need to be that big. By shrinking them over and over, you force them to regrow, which causes blocking while that operation occurs. I would rethink the whole process of automating shrinks.
If you really *must* shrink databases, or even just the transaction logs - almost always a terrible idea, as attested to in every answer here - then after every full backup, use a maintenance plan Shrink Database task instead of tacking a custom shrink script onto the process. There is a screen in the maintenance plan wizard for shrinking databases, and it doesn't get the syntax wrong. You can even tell it to shrink only user databases (or just the vendor database):

![alt text](/upfiles/shrink-database-task.png)

Unless the vendor is responsible for all of your SQL Server performance, availability and support, including backups and shrinking, they should have no involvement with your master database or its log file.

**Long edit in response to comment...**

Are you currently using the full recovery model with no transaction log backups? That leads to log file size problems, which may be where the question came from.

To decide between full recovery with transaction log backups and simple recovery, you need to know your tolerance for data loss. If you need point-in-time recovery, or even 15-minute recovery, you will be well served by the full recovery model with 15-minute transaction log backups. If you can afford to lose all the data since your last full or differential backup, you will be well served by the simple recovery model with periodic full backups (you can use differential backups to get finer recovery granularity without the overhead of running full backups several times per day).

One thing about the simple recovery model is that it automatically recycles the transaction log as portions become inactive. Barring extremely long-running transactions (hours or days), this keeps it from growing out of control the way it would under full recovery without transaction log backups.

Regardless of the approach you take, do continue to monitor, as your actual log file activity will be specific to your environment and workload.
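The two options above can be sketched as follows (the database name `[xxx]` is a placeholder for your own):

```sql
-- Option 1: full recovery plus frequent log backups
-- (enables point-in-time recovery; the log is truncated
-- only when you back it up).
ALTER DATABASE [xxx] SET RECOVERY FULL;
-- ...then schedule BACKUP LOG [xxx] TO DISK = ... roughly every 15 minutes.

-- Option 2: simple recovery
-- (the inactive log is recycled automatically; you can only
-- restore to the last full or differential backup).
ALTER DATABASE [xxx] SET RECOVERY SIMPLE;
```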