I have a table that receives about 40M rows every day. The data is incremental, but with enough changes that it warrants being treated as brand new. The original idea was DELETE-INSERT, but that filled up the Tx log. The alternatives were TRUNCATE-INSERT or DROP-INSERT_INTO. The current process drops and recreates the table every day, so pages are allocated as the data is added. Is there a better way to handle this than DROP-INSERT_INTO?
If you need to delete a large amount of old data and insert a large amount of new data, and you have the Enterprise edition of SQL Server, table partitioning could be an option:

1. Insert the data into a temp table.
2. Alter the partition function and partition scheme to add a new partition for the new data.
3. Switch the temp table into the new partition.
4. Switch the partition with the old data out into a temp table.
5. Drop/truncate that temp table.
6. Alter the partition function and scheme to merge the space left after switching the partition out.

Whether partitioning fits depends on your concrete needs and scenario, but you didn't provide much info about the process itself. You can check MSDN for details, including [The Data Loading Performance Guide] and [Designing Partitioned Tables and Indexes].
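The steps above could be sketched roughly like this. All object names, the `LoadDate` partitioning column, and the boundary dates are placeholders I made up for illustration; the staging tables must match the partitioned table's schema, indexes, and filegroup for the switches to work:

```sql
-- 1. Bulk load the new day's data into an empty staging table.
--    TABLOCK on an empty heap/table allows minimal logging.
INSERT INTO dbo.StagingNew WITH (TABLOCK)
SELECT col1, col2, LoadDate
FROM dbo.SourceFeed;

-- 2. Add a new boundary for the new day's partition.
ALTER PARTITION SCHEME psDaily NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION pfDaily() SPLIT RANGE ('20130602');

-- 3. Switch the staging table into the new (empty) partition.
ALTER TABLE dbo.StagingNew
    SWITCH TO dbo.DailyData PARTITION $PARTITION.pfDaily('20130601');

-- 4. Switch the oldest partition out into another staging table.
ALTER TABLE dbo.DailyData
    SWITCH PARTITION $PARTITION.pfDaily('20130501') TO dbo.StagingOld;

-- 5. Get rid of the old rows by deallocating pages, not logging each row.
TRUNCATE TABLE dbo.StagingOld;

-- 6. Merge away the now-empty boundary at the low end.
ALTER PARTITION FUNCTION pfDaily() MERGE RANGE ('20130501');
```

The key point is that SWITCH, TRUNCATE, SPLIT, and MERGE are metadata-only operations on empty/aligned partitions, so none of them generate per-row Tx log activity the way DELETE does.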
You could do the DELETE-INSERT in batches, with Tx log backups in between. Another possibility (depending on what dependencies hang off this table): insert into a new table, rename the old table, rename the new table to the old name, then drop the old table.
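A rough sketch of both ideas; table names, the batch size, and the `LoadDate` filter are assumptions, and the log-backup step only applies under the FULL recovery model:

```sql
-- Batched delete: each batch commits separately, so log space can be
-- reused (SIMPLE recovery) or freed by a log backup (FULL recovery).
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (500000) FROM dbo.DailyData
    WHERE LoadDate < '20130601';
    SET @rows = @@ROWCOUNT;
    -- In FULL recovery, back up the log between batches, e.g.:
    -- BACKUP LOG MyDb TO DISK = N'<path>\MyDb.trn';
END;

-- Rename swap: load dbo.DailyData_new offline, then swap names so the
-- old data is dropped in one cheap operation instead of deleted row by row.
BEGIN TRANSACTION;
EXEC sp_rename 'dbo.DailyData', 'DailyData_old';
EXEC sp_rename 'dbo.DailyData_new', 'DailyData';
COMMIT;
DROP TABLE dbo.DailyData_old;
```

Note that sp_rename does not carry over constraints, permissions, views, or other objects that reference the table by name, which is why the rename approach depends on what hangs off the table.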