
StanleyBarnett asked

Speed Up ALTER TABLE Commands

Hi, I need to speed up an ALTER TABLE command that changes a single column's width from char(3) to char(5), or any other change to the table. This process now takes about 6 hours. The table is 400 GB (yes, gigabytes) in size and contains about 50 columns, including two varchar(max) fields and one varbinary(max) field. The table has about 6 million rows. Before I changed the hardware, it would time out even with a 65535 timeout value.

I have since moved to an X99 motherboard with an Intel i7-6850 6-core/12-thread CPU, put the data file on its own Samsung 860 Pro 1 TB NVMe drive and the log file on its own Samsung 860 Pro 1 TB NVMe drive, and moved tempdb onto the data drive from a much slower 1 TB RAID 0 SATA drive. It still takes forever to make any changes to the table. Also note that there is little processor and disk activity going on while it is restructuring the table.

I am a developer, and I learned a long time ago that designing for big data from the start is more productive than developing for small data and then dealing with slowness after the data grows, which can force complete rewrites to get acceptable performance later on.

I am using MSDN MSSQL 2012 Enterprise for the development effort on a Server 2008 R2 machine with 32 GB DDR4 RAM. What goes on under the covers when SQL Server restructures the table, and are there any references that explain it? I recently purchased Professional SQL Server 2012 Internals and Troubleshooting and Microsoft SQL Server 2012 Internals (Developer Reference), and I've done lots of Googling in an effort to solve this. So, I'm here asking: how can I solve this? Thanks, Stanley Barnett
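For reference, this is the shape of the statement I'm running (the table and column names below are just placeholders, not my real schema):

    -- Placeholder names; the real table is ~400 GB with ~50 columns.
    -- Widening the fixed-length column from char(3) to char(5).
    ALTER TABLE dbo.BigTable
        ALTER COLUMN SomeCode char(5) NOT NULL;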
Tags: developer, 2012, alter-table
3 comments

JohnM commented:
Any chance you can publish the table schema?
Jeff Moden commented:
@StanleyBarnett. If you could, post the CREATE TABLE statement including all indexes, keys (including foreign keys), constraints, and indexed views that you might have on the table. I might be able to show you a couple of things that would not only make such changes to the table much faster in the future but would have some other side benefits as well.
Jeff Moden commented:
@StanleyBarnett, Checking one more time. Please see my comment above.
Håkan Winther answered
SQL Server has to rebuild the whole table because it is a **fixed**-length column, and it takes a while to write the data to the log file and the data file. In SQL Server 2016 and later you can change the data type online, as long as all the requirements are met.
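For example, a minimal sketch (table and column names are placeholders; ONLINE requires SQL Server 2016 or later with Enterprise edition, and the column must meet the online alter-column restrictions):

    -- SQL Server 2016+ only; keeps the table readable and writable while the
    -- column is rebuilt. Names are hypothetical.
    ALTER TABLE dbo.BigTable
        ALTER COLUMN SomeCode char(5) NOT NULL
        WITH (ONLINE = ON);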

StanleyBarnett answered
Hi Håkan, yes, I expected it probably creates a second copy that includes the changes. Why would this restructuring process be logged? Or is building the second copy itself considered the logging, since if it fails the copy would be thrown away, and only after the copy succeeded would the original be deleted? How is that done? It doesn't make much sense to log the process twice. How many processors will SQL Server 2012 Enterprise use for this sort of task? Taking hours and hours with little activity on the super-fast drives and processors would indicate something else, wouldn't it, but what could it be? How do I troubleshoot that? Thanks, Stanley
2 comments

Håkan Winther commented:
Every modification in SQL Server is logged, some of them only minimally logged. The process will use only one thread in SQL Server 2012, I guess, as it is an insert; in SQL Server 2016 you get multi-threaded inserts. You could also be suffering from blocking issues, or from file growth without instant file initialization: every time the files have to grow, the newly allocated space has to be zeroed, causing blocked processes. To troubleshoot this kind of issue you could use: SELECT * FROM sys.dm_exec_requests. In the results, look at blocking_session_id and last_wait_type.
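A slightly more focused version of that check might look like this (the columns are standard sys.dm_exec_requests columns; the session filter is just an example):

    -- Find the ALTER TABLE session and see what it is waiting on.
    -- A non-null blocking_session_id means it is being blocked; the wait types
    -- hint at log growth, I/O, or lock waits.
    SELECT session_id,
           status,
           command,
           blocking_session_id,
           wait_type,
           last_wait_type,
           wait_time,
           percent_complete
    FROM   sys.dm_exec_requests
    WHERE  session_id > 50;   -- skip most system sessions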
Jeff Moden commented:
There are also some pretty serious page splits that occur on the CI and on any NCI that contains the already-populated fixed-width column being changed, and all of those get logged as well. There is a way to have almost no logging at all for this evolution, though (usually doubling the performance and avoiding all "bad" page splits). Waiting for the OP to answer the question in the comment I left on the original post.
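For context, a generic sketch of the "build a new table, copy with a minimally logged insert, swap names" pattern (all names below are placeholders, this is not necessarily the exact approach being hinted at above, and indexes, constraints, triggers, and permissions would all have to be re-created on the new table):

    -- Hypothetical, minimal schema just to show the pattern; the real table
    -- has ~50 columns that would all need to be listed.
    CREATE TABLE dbo.BigTable_New
    (
        Id       int          NOT NULL PRIMARY KEY,
        SomeCode char(5)      NOT NULL,      -- the widened column
        Payload  varchar(max) NULL
    );

    -- Minimally logged only under SIMPLE or BULK_LOGGED recovery with TABLOCK
    -- into an empty target; test on a copy first.
    INSERT INTO dbo.BigTable_New WITH (TABLOCK) (Id, SomeCode, Payload)
    SELECT Id, SomeCode, Payload
    FROM   dbo.BigTable;

    -- Swap names once the copy is verified.
    EXEC sp_rename 'dbo.BigTable', 'BigTable_Old';
    EXEC sp_rename 'dbo.BigTable_New', 'BigTable';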
StanleyBarnett answered
OK, so where do the filegrowth values come from? The original database?
2 comments

Håkan Winther commented:
Yes, and when the database was created the values could have been inherited from the model database. If you change the model database, those settings will be inherited by new databases unless you specify something else. You can also speed up the growth process by giving the SQL Server service account the "Perform volume maintenance tasks" right, which enables instant file initialization.
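For example, to check and adjust the current growth settings (the database and logical file names below are placeholders):

    -- Current size and growth per file for the current database;
    -- growth is in 8 KB pages unless is_percent_growth = 1.
    SELECT name, type_desc, size, growth, is_percent_growth
    FROM   sys.database_files;

    -- Use a fixed, reasonably large increment instead of a small or percentage growth.
    ALTER DATABASE MyBigDb
        MODIFY FILE (NAME = N'MyBigDb_Data', FILEGROWTH = 1024MB);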
JohnM commented:
Keep in mind that IFI only helps the data file, not the log file. It'll still need to zero out the pages when it grows.
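So for a long-running rebuild it can help to pre-grow the log once, up front, rather than paying for many zero-initialized growths along the way (the database/file names and the size are only examples):

    -- Grow the log in one shot before starting the ALTER.
    ALTER DATABASE MyBigDb
        MODIFY FILE (NAME = N'MyBigDb_Log', SIZE = 100GB);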
