ashfaqkhan asked

Deleting millions of rows without impacting transaction log

Our client database is growing at an increasing pace; in particular, tables such as Auditing and Logging are growing much faster than the rest. The Auditing table currently has ~30 million rows and is growing at a rate of 1.5 million rows per week. Similarly, the Logging table has ~50 million rows and is growing at ~1 million rows per week. We have decided to archive these tables according to our data retention policy and delete some 'N' number of records from them whenever the archiving job runs.

I am looking for the best advice on defining the chunkSize so that the deletes do not bloat the SQL Server transaction log or cause table locking. I know this value cannot be derived straight away; we need to run different test scenarios to come up with this magic number.
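For reference, the delete pattern I have in mind is a simple loop over small batches, so each transaction stays short and log space can be reused between batches. A minimal sketch (the table name, date column, retention cutoff, and starting chunk size below are placeholders to be tuned by testing):

    DECLARE @ChunkSize int = 10000;   -- the value to tune through testing
    DECLARE @RowsDeleted int = 1;

    WHILE @RowsDeleted > 0
    BEGIN
        -- Delete one small batch of expired rows per transaction
        DELETE TOP (@ChunkSize)
        FROM dbo.Auditing
        WHERE AuditDate < DATEADD(MONTH, -6, GETDATE());

        SET @RowsDeleted = @@ROWCOUNT;

        -- In FULL recovery, frequent log backups between batches let
        -- SQL Server reuse log space instead of growing the file.
    END;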

Tags: archiving · transaction log · advice

1 Answer

Magnus Ahlkvist answered

I would look into partitioning those tables. Create one partition per day, and pre-create partitions for the coming year. That would make archiving simple and non-blocking: you can bulk-copy data out of the old partitions to the archive, then truncate those partitions, and finally merge the empty partitions away. As a yearly maintenance task, create new partitions for the next year. SQL Server supports 15,000 partitions per partitioned table.
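A minimal sketch of that workflow, assuming SQL Server 2016 or later (for partition-level TRUNCATE) and hypothetical object names:

    -- Daily partition function and scheme (only a few boundaries shown;
    -- in practice, pre-create boundaries for the whole coming year):
    CREATE PARTITION FUNCTION pfDaily (datetime2)
    AS RANGE RIGHT FOR VALUES ('2024-01-01', '2024-01-02', '2024-01-03');

    CREATE PARTITION SCHEME psDaily
    AS PARTITION pfDaily ALL TO ([PRIMARY]);

    CREATE TABLE dbo.Auditing
    (
        AuditId  bigint IDENTITY NOT NULL,
        LoggedAt datetime2 NOT NULL,
        Payload  nvarchar(max) NULL
    ) ON psDaily (LoggedAt);

    -- After bulk-copying an old partition's rows to the archive,
    -- truncate just that partition (minimally logged, no row-by-row delete):
    TRUNCATE TABLE dbo.Auditing WITH (PARTITIONS (2));

    -- Merge away the now-empty boundary:
    ALTER PARTITION FUNCTION pfDaily() MERGE RANGE ('2024-01-01');

    -- Yearly maintenance: split in boundaries for the coming year:
    ALTER PARTITION SCHEME psDaily NEXT USED [PRIMARY];
    ALTER PARTITION FUNCTION pfDaily() SPLIT RANGE ('2025-01-01');

On versions older than 2016 you would instead SWITCH the old partition to an empty staging table with the same structure (a metadata-only operation) and truncate the staging table.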

This would also make searching for data in the logs faster, provided the partitioning key (datetime or datetime2, I'd guess) appears in one of the predicates, since SQL Server can then eliminate partitions it doesn't need to read.
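For example, a range lookup like this (hypothetical names, matching the sketch above) would only touch the partitions covering that one day:

    SELECT AuditId, Payload
    FROM dbo.Auditing
    WHERE LoggedAt >= '2024-01-02'
      AND LoggedAt <  '2024-01-03';  -- partition elimination on LoggedAt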

