I have a table that is expected to receive 40M rows every day. The data is incremental, but with enough changes to warrant treating it as brand new.
The original idea was to DELETE-INSERT, but that filled up the Tx log.
The alternatives considered were TRUNCATE-INSERT or DROP-INSERT_INTO.
The current process drops and recreates the tables every day, so as data is added, pages have to be allocated from scratch. Is there a better way for me to handle this than a DROP-INSERT_INTO?
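For reference, a rough sketch of the TRUNCATE-INSERT alternative (table names here are made up; adjust to your schema):

```sql
-- TRUNCATE is minimally logged and keeps the table definition,
-- so the Tx log is not flooded the way a full DELETE is.
TRUNCATE TABLE dbo.DailyData;

-- Under the SIMPLE or BULK_LOGGED recovery model, an INSERT with
-- TABLOCK into a heap (or an empty table) can be minimally logged.
INSERT INTO dbo.DailyData WITH (TABLOCK)
SELECT *
FROM dbo.StagingData;
```

Whether the INSERT is actually minimally logged depends on the recovery model, indexes, and SQL Server version, so check the prerequisites for your setup.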
asked May 03 '11 at 05:56 AM
If you need to delete a large amount of old data and insert a large amount of new data, and you have the Enterprise edition of SQL Server, table partitioning could be an option.
Whether partitioning fits depends on your concrete needs and scenario, but you haven't provided much info about the process itself.
You can check MSDN for some details, including The Data Loading Performance Guide and [Designing Partitioned Tables and Indexes](http://msdn.microsoft.com/en-us/library/ms175533.aspx).
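As an illustration, switching a loaded staging table into a partition is a metadata-only operation, so the daily load avoids both heavy logging and page reallocation. All object and partition names below are hypothetical:

```sql
-- Load the new day's rows into a staging table that has the same
-- structure, indexes, and filegroup as the target partition, then
-- switch it in (metadata-only, near-instant):
ALTER TABLE dbo.DailyData_Staging
    SWITCH TO dbo.DailyData PARTITION 2;

-- Old data can be switched out and truncated just as cheaply:
ALTER TABLE dbo.DailyData
    SWITCH PARTITION 1 TO dbo.DailyData_Old;
TRUNCATE TABLE dbo.DailyData_Old;
```

The switch requires matching table definitions and a check constraint on the staging table that fits the partition boundary, so there is some setup cost before it works.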
answered May 03 '11 at 11:25 AM
You could do the DELETE-INSERT in batches, with Tx log backups in between. Other possibilities (depending on what dependencies hang off this table): insert into a new table, rename the old table, rename the new table as the old one, then drop the old table.
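The batched DELETE could look something like this (table, column, and batch size are made-up examples; take a log backup between batches so the log can truncate):

```sql
-- Delete in small batches so each transaction stays short and the
-- Tx log never has to hold the whole 40M-row delete at once.
DECLARE @cutoff date = DATEADD(day, -1, CAST(GETDATE() AS date));
DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (100000)
    FROM dbo.DailyData
    WHERE LoadDate < @cutoff;

    SET @rows = @@ROWCOUNT;
    -- run BACKUP LOG here (or rely on a scheduled log backup job)
END;
```

This keeps the table online throughout, unlike TRUNCATE or the rename swap, at the cost of a much longer total run time.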
answered May 03 '11 at 06:55 AM
Kev Riley ♦♦