How to shrink the log file of a SQL Server database at run time

I am using the BCP utility to import a large text file (2 GB), but the log file grows to around 70 GB, while the mdf file stays at around 10 GB.

Can I shrink the log file while this BCP process is running?


asked Nov 09, 2009 at 02:38 AM in Default by suraj


4 answers

If your database is in the full recovery model, you need to perform a transaction log backup to free up space in the log file, but that only helps once a transaction has completed. If you run bcp without the -b switch, the whole file is imported in one transaction. To split the import into smaller transactions, specify the -b switch with a number to set a batch size.

If you are using the simple recovery model, you do not need to take any transaction log backups during the load of the file.

Quote from Books Online:

Specifies the number of rows per batch of imported data. Each batch is imported and logged as a separate transaction that imports the whole batch before being committed. By default, all the rows in the data file are imported as one batch. To distribute the rows among multiple batches, specify a batch_size that is smaller than the number of rows in the data file. If the transaction for any batch fails, only insertions from the current batch are rolled back. Batches already imported by committed transactions are unaffected by a later failure.
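As a sketch, a batched import with the -b switch might look like this (the server, database, table, and file names are placeholders, not from the question):

```
rem Import bigfile.txt into dbo.Staging in batches of 50,000 rows.
rem -b commits each batch as its own transaction, so the log can be
rem reused (simple recovery) or backed up (full recovery) between batches.
rem -T = trusted connection, -c = character data.
bcp MyDb.dbo.Staging in bigfile.txt -S MyServer -T -c -b 50000
```

With full recovery, you would schedule log backups to run alongside the import so the space freed by each committed batch can actually be reused.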

answered Nov 09, 2009 at 05:07 AM by Håkan Winther

I would add that it is possible to do a minimally logged bulk insert into a 'staging table', then import the rows in small batches from the staging table to their eventual destination. This gives you the opportunity to run CHECKPOINTs / backups frequently during the insert process.
Nov 09, 2009 at 08:15 AM Matt Whitfield ♦♦
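A sketch of that batched move in T-SQL (the table, column, and batch-size values here are hypothetical):

```
-- Move rows from a staging table to the destination in 50,000-row
-- batches, so each batch commits as its own small transaction.
-- Table and column names are placeholders.
WHILE EXISTS (SELECT 1 FROM dbo.Staging)
BEGIN
    DELETE TOP (50000) s
    OUTPUT DELETED.col1, DELETED.col2
    INTO dbo.Destination (col1, col2)
    FROM dbo.Staging AS s;

    CHECKPOINT;  -- in the simple recovery model, lets the log truncate
END
```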

You could change the recovery model of the database to bulk-logged or simple prior to the BCP, if you know nothing will be going on that you would need a point-in-time restore for. Then, after the import, change back to full and take full and log backups.

I'd also add that the growth could be due to changes to indexes as well. You might also want to drop and re-create the indexes on the affected table(s) around the import.
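That workflow might look like this (database name and backup paths are placeholders):

```
-- Switch to bulk-logged so the bulk import is minimally logged.
ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;

-- ... run the bcp import here ...

-- Switch back to full and re-establish the backup chain.
ALTER DATABASE MyDb SET RECOVERY FULL;
BACKUP DATABASE MyDb TO DISK = N'D:\Backups\MyDb_full.bak';
BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_log.trn';
```

The log backup after switching back to full is what restarts the point-in-time restore chain.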

answered Nov 10, 2009 at 12:54 PM by Jack Corbett

I think this is good advice. In my opinion, it's better to avoid growing the log in the first place: take care beforehand rather than solve problems later.
Nov 10, 2009 at 01:02 PM Alberto De Rossi

Everything that Håkan says is good. But also, you can run the CHECKPOINT command during the import, which may give the system the chance to truncate the log more often, thereby keeping it smaller. You can put the CHECKPOINT command in a loop so that it runs repeatedly, in a separate connection of course.

Edited: This will impact a system using the Simple recovery model.
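A simple version of that loop, run from a second connection (the 30-second interval is an arbitrary choice):

```
-- Run from a separate connection while the import is in progress.
-- Issues a CHECKPOINT every 30 seconds so the log can truncate
-- more often under the simple recovery model. Stop it manually
-- once the import finishes.
WHILE 1 = 1
BEGIN
    CHECKPOINT;
    WAITFOR DELAY '00:00:30';
END
```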

answered Nov 09, 2009 at 06:20 AM by Rob Farley

According to Paul Randal, the only thing that marks VLFs as inactive/available (when in full recovery) is a log backup.
Nov 10, 2009 at 12:52 PM Jack Corbett
Yes, but if you're in the simple recovery model, it's the frequency of checkpoints that makes the most difference. I often see people switch to simple and find that the log still grows when the system is under load.
Nov 10, 2009 at 08:02 PM Rob Farley

It is unlikely that you'll be able to shrink the log during the import if the import is running as a single transaction; until the transaction is complete, it needs to be held within the log. If your log is growing that much, you might want to consider breaking the import into multiple steps so that you get more frequent commits and therefore smaller transactions.

answered Nov 09, 2009 at 11:29 AM by Grant Fritchey

Last Updated: Nov 09, 2009 at 05:36 AM