I have a database of around 2.5 TB that contains a table with 3 billion records, about 650 GB in size. The database is in the simple recovery model.
I want to archive the table since it is no longer required. I am planning to create a new blank database, SELECT...INTO the complete 650 GB table from the old DB into the new DB, take a compressed backup of the new DB (which may take about 150 GB of backup space), and then drop the new DB from the server and the table from the old DB.
I just want to understand: if I SELECT...INTO such a large table of 650 GB, what challenges can I face on production, such as log growth under simple recovery or other performance issues? Will it complete, or might it fail partway through for some reason?
Answer by ThomasRushton
If you do a SELECT...INTO of a 650 GB table, then that's going to be a single transaction creating a 650 GB table. That's a helluva big transaction, and it will be written to the transaction log, which does get used even under the simple recovery model. Even where SELECT...INTO qualifies for minimal logging, the log cannot be truncated while that one transaction is still open, so the log file can grow enormously, and a failure or cancellation partway through means rolling the whole thing back.
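For reference, the single-statement form being discussed looks something like this (OldDB, ArchiveDB, and dbo.BigTable are hypothetical names standing in for the real ones):

    -- Hypothetical names throughout; the real database/table names will differ.
    -- This runs as ONE implicit transaction covering all ~650 GB.
    USE ArchiveDB;

    SELECT *
    INTO dbo.BigTable_Archive       -- target table is created on the fly
    FROM OldDB.dbo.BigTable;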
You might want to consider either of the following approaches:
(1) Moving the data in smaller chunks, say 100,000 rows at a time, so that each batch commits as its own transaction (see the first sketch below).
(2) Creating a new table for the data you want to keep, copying/moving that data across into it, and then renaming the two tables (see the second sketch below).
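A minimal sketch of approach (1), assuming the source table has an ever-increasing integer key (called Id here; the key name, column list, and database/table names are all hypothetical, and the target table must already exist with the same schema). Each loop iteration commits its own small transaction, so under the simple recovery model the log can be truncated at checkpoints between batches instead of having to hold one 650 GB transaction:

    DECLARE @BatchSize int = 100000;
    DECLARE @LastId bigint = 0;     -- assumes Id values start above 0
    DECLARE @MaxId bigint;

    SELECT @MaxId = MAX(Id) FROM OldDB.dbo.BigTable;

    WHILE @LastId < @MaxId
    BEGIN
        -- Each INSERT is its own transaction; it commits before the next batch starts.
        INSERT INTO ArchiveDB.dbo.BigTable (Id, Col1, Col2)
        SELECT Id, Col1, Col2
        FROM OldDB.dbo.BigTable
        WHERE Id > @LastId
          AND Id <= @LastId + @BatchSize;

        SET @LastId = @LastId + @BatchSize;
    END;

If the key has gaps, some batches will move fewer than 100,000 rows, which is harmless; the range predicate keeps each batch a cheap index seek rather than a TOP...ORDER BY scan over the whole table.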
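And a minimal sketch of approach (2), again with hypothetical names and a made-up CreatedDate filter standing in for whatever defines the data you want to keep:

    -- dbo.BigTable_Keep must already exist with the same schema and indexes.
    INSERT INTO dbo.BigTable_Keep (Id, Col1, Col2)
    SELECT Id, Col1, Col2
    FROM dbo.BigTable
    WHERE CreatedDate >= '20240101';    -- hypothetical "keep" criterion

    -- Swap the tables; sp_rename takes only the bare new name as its second argument.
    BEGIN TRANSACTION;
    EXEC sp_rename 'dbo.BigTable', 'BigTable_Old';
    EXEC sp_rename 'dbo.BigTable_Keep', 'BigTable';
    COMMIT;

The renames themselves are quick metadata operations, though sp_rename will warn that dependent objects such as views and procedures still reference the old name.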