I am using the BCP utility to import a large text file (2GB), but the log file grows to around 70GB, while the mdf file is only around 10GB.
I need to know whether I can shrink the log file while this BCP process is running.
If your database is in the full recovery model, you need to perform a transaction log backup to free up space in the log file, but that will only work once the transaction is committed. If you run bcp without the -b switch, the whole file is imported in one transaction. To split the import into smaller transactions, specify the -b switch with a number to set a batch size (see the example after the quote below).
If you are using the simple recovery model, you do not need to take any transaction log backups during the load of the file.
Quote from Books Online:
Specifies the number of rows per batch of imported data. Each batch is imported and logged as a separate transaction that imports the whole batch before being committed. By default, all the rows in the data file are imported as one batch. To distribute the rows among multiple batches, specify a batch_size that is smaller than the number of rows in the data file. If the transaction for any batch fails, only insertions from the current batch are rolled back. Batches already imported by committed transactions are unaffected by a later failure.
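For example, a minimal command-line sketch (the database, table, file, and server names are hypothetical) that commits every 50,000 rows:

    bcp MyDatabase.dbo.MyTable in bigfile.txt -c -b 50000 -S MyServer -T

Here -c loads character data, -b 50000 sets the batch size, and -T uses a trusted connection. With the load split into batches like this, a log backup (or a checkpoint under the simple recovery model) can reclaim log space as the import progresses.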
Everything that Hakan says is good. But also, you can run the CHECKPOINT command during the import, which may give the system the chance to truncate the log more often, thereby keeping it smaller. You can put the CHECKPOINT command in a loop so that it runs repeatedly - in a separate connection, of course.
Edited: This only helps a database using the Simple recovery model, where a checkpoint truncates the inactive portion of the log.
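A minimal sketch of such a loop, to be run from a second connection while the import executes (the 30-second delay is an arbitrary choice; cancel the query once the import finishes):

    -- Issue a checkpoint every 30 seconds so the log can be truncated
    -- between committed batches (effective under the Simple recovery model).
    WHILE 1 = 1
    BEGIN
        CHECKPOINT;
        WAITFOR DELAY '00:00:30';
    END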
It is unlikely that you'll be able to shrink the log during the import if the import runs as a single transaction. Until the transaction completes, it must be held in the log. If your log is growing that much, you might want to consider breaking the import into multiple steps so that you get more frequent commits and therefore smaller transactions.
You could change the recovery model of the database to Bulk-Logged or Simple before the BCP, if you know nothing will happen during the load that you would need a point-in-time restore for. After the import, change back to Full and take full and log backups.
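A sketch of that sequence, assuming a database named MyDatabase and hypothetical backup paths:

    -- Minimally log the bulk load, then restore the full recovery chain.
    ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED;
    -- ... run the bcp import here ...
    ALTER DATABASE MyDatabase SET RECOVERY FULL;
    BACKUP DATABASE MyDatabase TO DISK = 'C:\Backups\MyDatabase.bak';
    BACKUP LOG MyDatabase TO DISK = 'C:\Backups\MyDatabase.trn';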
I'd also add that the growth could be due to changes to indexes as well. You might also want to drop and re-create the indexes on the affected table(s) for the duration of the load.
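A sketch with a hypothetical table and index name:

    -- Drop the index before the load and rebuild it afterwards,
    -- so index maintenance is not logged row by row during the import.
    DROP INDEX IX_MyTable_Col1 ON dbo.MyTable;
    -- ... run the bcp import here ...
    CREATE INDEX IX_MyTable_Col1 ON dbo.MyTable (Col1);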