If I have a large table with approximately 1 billion rows, and a few indexed views that depend on it, what is the best approach to change the data type of one of its columns from `int` to `bigint`?
OK, I've dealt with this nightmare before, trying to change an `nvarchar(max)` to `varchar(max)` on a 500 GB table to recoup half the storage. Do NOT just `ALTER TABLE` and change the data type: it runs in a single transaction and needs 1.5x+ the table's size in transaction log space, and if you run out of log space mid-way it will take 5x as long to roll back. You have a couple of options:

1. Drop all indexes and indexed views on the table. Create a new column of type `bigint` and populate it in batches to control log growth, then drop the original `int` column. The problem is that `int` is not a variable-length column, so `DBCC CLEANTABLE` will not release the storage from the dropped column; I believe only rebuilding the clustered index will do that. I have not worked on 2005 for a while, so someone please correct me if I am wrong. Finally, recreate all indexes and indexed views on the table.

2. If you have sufficient storage (an extra 1.5x the table size), create a new table with the new data types and move the data over in batches to control log growth, then drop the original table. This is what I ended up doing.

There is no elegant solution. Good luck.
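Option 1 can be sketched roughly like this. This is only an outline, not a tested script: `dbo.BigTable`, the `Id` column, the batch size, and the index name `PK_BigTable` are all placeholders, and it assumes the database is in SIMPLE recovery (under FULL recovery you'd take log backups between batches instead of issuing `CHECKPOINT`):

```sql
-- Hypothetical table dbo.BigTable, widening column Id from int to bigint.
-- Drop nonclustered indexes and indexed views on the table first.

ALTER TABLE dbo.BigTable ADD IdNew bigint NULL;
GO

-- Copy values over in batches so each batch is its own transaction
-- and the log can be reused between batches.
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (50000) dbo.BigTable
    SET IdNew = Id
    WHERE IdNew IS NULL;

    SET @rows = @@ROWCOUNT;

    CHECKPOINT;  -- SIMPLE recovery: allow log space to be reused
END
GO

ALTER TABLE dbo.BigTable DROP COLUMN Id;
EXEC sp_rename 'dbo.BigTable.IdNew', 'Id', 'COLUMN';
GO

-- DBCC CLEANTABLE only reclaims space from dropped variable-length
-- columns; rebuilding the clustered index reclaims the space left
-- by the dropped fixed-length int column.
ALTER INDEX PK_BigTable ON dbo.BigTable REBUILD;
GO
```

Note the `WHERE IdNew IS NULL` predicate will scan further into the table on every batch unless there's an index to drive it, so in practice you'd key the batches off the clustered index range instead; this version just shows the batching idea.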