We are upgrading our application from one version to the next (from v1 to v2). Most of our users are on SQL Server 2012. We now have larger customers with far more data than we did when we started this application 8 years ago, so we decided that in the next version we would move to bigint. Now we are there, and there are eight tables I need to upgrade to bigint. I've written some SQL code that drops all the indexes and foreign keys related to the tables being updated, copies all the data into a temp table, drops the original table, renames the temp table to the proper table name, and then reapplies all the indexes and foreign keys. After I run this upgrade it works fine, but the database is just about twice the size it was before I started. Is this the right approach, or should I be doing something different?
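Per table, the script is roughly like the sketch below (the table and column names here are made up, not our real schema, and it assumes the referencing column dbo.OrderLines.OrderID has been widened the same way):

    -- Hypothetical table dbo.Orders whose int key OrderID is being widened to bigint.
    -- 1. Drop the foreign keys and indexes that depend on the table.
    ALTER TABLE dbo.OrderLines DROP CONSTRAINT FK_OrderLines_Orders;
    DROP INDEX IX_Orders_CustomerID ON dbo.Orders;

    -- 2. Create the replacement table with the bigint column.
    CREATE TABLE dbo.Orders_New
    (
        OrderID    bigint IDENTITY(1,1) NOT NULL,
        CustomerID int                  NOT NULL,
        OrderDate  datetime2(0)         NOT NULL,
        CONSTRAINT PK_Orders_New PRIMARY KEY CLUSTERED (OrderID)
    );

    -- 3. Copy the data, keeping the existing key values.
    SET IDENTITY_INSERT dbo.Orders_New ON;
    INSERT INTO dbo.Orders_New (OrderID, CustomerID, OrderDate)
    SELECT OrderID, CustomerID, OrderDate
    FROM dbo.Orders;
    SET IDENTITY_INSERT dbo.Orders_New OFF;

    -- 4. Swap the tables, then reapply the indexes and foreign keys.
    DROP TABLE dbo.Orders;
    EXEC sp_rename 'dbo.Orders_New', 'Orders';
    EXEC sp_rename 'dbo.PK_Orders_New', 'PK_Orders', 'OBJECT';
    CREATE INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID);
    -- dbo.OrderLines.OrderID must already be bigint, or this foreign key will fail to create.
    ALTER TABLE dbo.OrderLines
        ADD CONSTRAINT FK_OrderLines_Orders
        FOREIGN KEY (OrderID) REFERENCES dbo.Orders (OrderID);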
Altering the tables in place doesn't necessarily save you any time. If you bulk copy all eight tables into new tables, e.g. with SSIS, while the database is in simple recovery mode, you will probably get better performance than doing an ALTER TABLE ... ALTER COLUMN, since ALTER TABLE ... ALTER COLUMN is fully logged whereas a bulk load can be minimally logged. The database will grow substantially from changing the data type to bigint: a bigint is twice the size of an int. Whether growing to twice the original size is normal depends on what other data you have in the database.
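As a rough sketch of the minimally logged route (SSIS with fast load gives you the same effect), something like the following; the database, table, and column names are placeholders, and it assumes you can put the database in simple recovery for the duration of the upgrade and take a full backup afterwards:

    -- Placeholder names throughout; run during a maintenance window.
    ALTER DATABASE MyAppDb SET RECOVERY SIMPLE;

    -- Load into an empty heap with TABLOCK so the insert can be minimally logged.
    CREATE TABLE dbo.Orders_New
    (
        OrderID    bigint       NOT NULL,
        CustomerID int          NOT NULL,
        OrderDate  datetime2(0) NOT NULL
    );

    INSERT INTO dbo.Orders_New WITH (TABLOCK) (OrderID, CustomerID, OrderDate)
    SELECT OrderID, CustomerID, OrderDate
    FROM dbo.Orders;

    -- Build the clustered index after the load, then swap the tables and
    -- reapply the remaining indexes and foreign keys as in your script.
    ALTER TABLE dbo.Orders_New
        ADD CONSTRAINT PK_Orders_New PRIMARY KEY CLUSTERED (OrderID);

    ALTER DATABASE MyAppDb SET RECOVERY FULL;  -- then take a full backup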