Our client database is growing at an increasing pace; in particular, entities like Auditing and Logging are growing much faster than the rest. As of now the Auditing table has ~30 million rows and is growing at a rate of ~1.5 million rows per week, while the Logging table has ~50 million rows and is growing at ~1 million rows per week. We have decided to archive these tables based on our data retention policy and delete some 'N' number of records from them whenever the archiving job runs.
I am looking for advice on how to choose the chunkSize (the batch size for the deletes) so that it does not bloat the SQL Server transaction log or cause excessive table locking. I know this value cannot be derived directly; we need to run different test scenarios to arrive at this magic number.
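For context, this is the kind of chunked delete loop I have in mind (a minimal sketch only; the table name dbo.Auditing, the CreatedDate column, and the 12-month retention cutoff are placeholders, and the starting @ChunkSize of 50,000 is just a test value, not a recommendation):

```sql
DECLARE @ChunkSize   INT = 50000;  -- candidate batch size, to be tuned per test run
DECLARE @RowsDeleted INT = 1;

WHILE @RowsDeleted > 0
BEGIN
    -- Delete one batch of rows older than the retention cutoff (assumed column/cutoff)
    DELETE TOP (@ChunkSize)
    FROM dbo.Auditing
    WHERE CreatedDate < DATEADD(MONTH, -12, GETDATE());

    SET @RowsDeleted = @@ROWCOUNT;

    -- Brief pause between batches so log backups can run and other sessions can acquire locks
    WAITFOR DELAY '00:00:01';
END
```

Each iteration is its own implicit transaction, so the idea is that a smaller @ChunkSize keeps individual transactions short and allows the log to be truncated (via log backups in FULL recovery, or checkpoints in SIMPLE) between batches, at the cost of a longer overall run. The question is how to reason about where to set that value.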