Hi, I need to speed up an ALTER TABLE command that changes a single column's width from char(3) to char(5), or makes any other change to the table. This process currently takes about 6 hours. The table is 400 GB (yes, gigabytes) in size and contains about 50 columns, including two varchar(max) fields and one varbinary(max) field. The table has about 6 million rows.

Before changing the hardware it would time out even with a 65535 timeout value. I have since changed the motherboard to an X99 board with an Intel i7-6850 6-core/12-thread CPU, put the data file on its own Samsung 860 Pro NVMe 1 TB drive and the .log file on its own Samsung 860 Pro NVMe 1 TB drive. The tempdb is currently on the data drive, after moving it from a much slower 1 TB RAID0 SATA drive. It still just takes forever to make any change to the table. Also note that there is little processor and disk activity going on while it is restructuring the table.

I am a developer, and long ago I learned that developing for big data from the start is more productive than developing for small data and then dealing with slowness after the data grows, which can force complete rewrites to get acceptable performance later on. I am using MSDN MSSQL 2012 Enterprise for the development effort on a Server 2008 R2 machine with 32 GB DDR4 RAM.

What goes on under the covers when restructuring this table? Are there any references that explain it? I recently purchased Professional SQL Server 2012 Internals and Troubleshooting and Microsoft SQL Server 2012 Internals (Developer Reference), and have done lots of Googling in an effort to solve this. So, I'm here asking: how can I solve this? Thanks, Stanley Barnett
SQL Server has to rebuild the whole table because char(5) is a **fixed**-length column, and it takes a while to write the data to both the log file and the data file. In SQL Server 2016 and later you can change the datatype online, as long as all the requirements are met.
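For reference, here is a sketch of both options. The first statement is the SQL Server 2016+ online syntax mentioned above; the second part is a common workaround on 2012 that backfills a new column in small batches so the transaction log stays bounded. Table and column names (`dbo.MyTable`, `MyCol`) are placeholders, and the 50,000-row batch size is just an assumption to tune:

```sql
-- SQL Server 2016+ only: change the column type while keeping the table online.
ALTER TABLE dbo.MyTable
    ALTER COLUMN MyCol char(5) NOT NULL
    WITH (ONLINE = ON);

-- On SQL Server 2012, one workaround: add a new column, backfill in batches,
-- then drop and rename during a maintenance window.
ALTER TABLE dbo.MyTable ADD MyCol_new char(5) NULL;

DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (50000) dbo.MyTable      -- small batches keep each log write short
    SET MyCol_new = MyCol
    WHERE MyCol_new IS NULL AND MyCol IS NOT NULL;
    SET @rows = @@ROWCOUNT;
END;

-- Afterwards, in a maintenance window:
-- ALTER TABLE dbo.MyTable DROP COLUMN MyCol;
-- EXEC sp_rename 'dbo.MyTable.MyCol_new', 'MyCol', 'COLUMN';
```

The batched approach trades one long fully-logged rebuild for many short, checkpointable updates, which is usually friendlier to the log drive on a 400 GB table.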
Hi Hakan, Yes, I expected it probably creates a second copy that includes the changes. But why would this restructuring process be logged? Or is building the second copy itself considered the logging, since if it fails the copy would be thrown away, and only after the copy succeeds would the original be deleted? How is that done? It doesn't make much sense to log the process twice. How many processors will SQL Server 2012 Enterprise use for this sort of task? Taking hours and hours with little activity on the super-fast drives and processors would indicate something else, wouldn't it? But what could it be, and how do I troubleshoot that? Thanks, Stanley
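One way to see where the time actually goes is to look at what the running ALTER is waiting on. A sketch, run from a second connection while the ALTER TABLE is in progress (the `LIKE 'ALTER%'` filter is just one way to find the session; you could also filter on a known session_id):

```sql
-- Shows the current wait for the in-flight ALTER: WRITELOG would point at the
-- log drive, LCK_M_SCH_M at blocking, PAGEIOLATCH_* at data-file I/O.
SELECT r.session_id,
       r.status,
       r.command,
       r.wait_type,            -- what the request is waiting on right now
       r.wait_time,            -- milliseconds spent in the current wait
       r.percent_complete,     -- only populated for some commands
       r.cpu_time,
       r.total_elapsed_time
FROM sys.dm_exec_requests AS r
WHERE r.command LIKE 'ALTER%';
```

If the wait_type sits on WRITELOG or a lock for most of those hours, that would explain the low CPU and disk activity you're seeing.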