gotqn asked

How to optimize clustered index rebuild?

I need to rebuild the clustered index of a table that has more than 1.6 million rows, more than 20 nonclustered indexes, and 80 columns, in order to reclaim space after two `NVARCHAR(MAX)` columns were deleted. I have tried the following command:

```sql
ALTER INDEX [PK_A] ON [dbo].[B] REBUILD PARTITION = ALL
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
      ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80);
GO
```

and also the `DBCC CLEANTABLE` command. In both cases it takes more than 19 hours. Is there a way to optimize the clustered index rebuild operation? Also, when a clustered index is rebuilt and the table size is reduced (from 40 GB to 10 GB), are the row locators updated as well, causing the nonclustered indexes to be rebuilt too?
Tags: sql-server-2012, clustered-index

Kev Riley commented:
What columns define the clustered index? Is it fairly simple, or multiple columns?
Tom Staab answered:
I have rebuilt much larger tables in much less time, so I am concerned about why this is taking so long. I recommend watching Performance Monitor or similar while the rebuild runs. Is there potentially an underlying storage system issue? Rebuilding just the clustered index should not affect the nonclustered indexes, because the key is not changing. How many partitions are there?

I don't have experience with `DBCC CLEANTABLE`, but MSDN claims: "Alternatively, you can rebuild the indexes on the table or view; however, doing so is a more resource-intensive operation." By default it processes the entire table in a single operation. Perhaps it would be better for you to set a batch size. Since you have 1.6 million rows, you could try batches of 200,000 rows:

```sql
DBCC CLEANTABLE (DatabaseName, 'SchemaName.TableName', 200000);
```

Just in case you are dealing with multiple index issues at once, you could also try recreating the clustered index. If you do that, it is very important that you specify the `DROP_EXISTING = ON` option rather than separately dropping and creating the clustered index.
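A sketch of the `DROP_EXISTING` approach, using the index and table names from the question; the key column `[Id]` is a placeholder, so substitute the real clustering key:

```sql
-- Recreates the clustered index in one operation instead of a separate
-- DROP INDEX / CREATE INDEX pair, which would rebuild every nonclustered
-- index twice. SORT_IN_TEMPDB = ON can help if tempdb is on faster storage.
CREATE UNIQUE CLUSTERED INDEX [PK_A]
ON [dbo].[B] ([Id])   -- placeholder key column
WITH (DROP_EXISTING = ON, FILLFACTOR = 80, SORT_IN_TEMPDB = ON);
```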

Kev Riley commented:
I'd agree about the time taken for the size specified. Maybe the data or log file is getting full? Maybe autogrowth is set at 1 MB increments, and autoshrink is ON?
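One way to check for the settings Kev mentions, sketched against the standard catalog views (run the first query in the affected database; `growth` is in 8 KB pages unless `is_percent_growth` is 1):

```sql
-- File sizes and autogrowth increments for the current database.
SELECT name, type_desc,
       size * 8 / 1024   AS size_mb,
       growth * 8 / 1024 AS growth_mb,  -- only meaningful when is_percent_growth = 0
       is_percent_growth
FROM sys.database_files;

-- Is autoshrink enabled anywhere?
SELECT name, is_auto_shrink_on
FROM sys.databases;
```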
Tom Staab commented:
omg, Kev. You just reminded me about a previous job. When I started, the team's main database was just over 1 TB. It was all one filegroup with one file and autogrowth set to 1 MB.
Tom Staab commented:
Also check other running processes. Is something blocking the rebuild?
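A quick way to check for blocking while the rebuild runs, sketched against the standard DMVs:

```sql
-- Any request currently blocked, and what it is waiting on.
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
```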
Kev Riley commented:
HA! Just found a database at a client where autogrowth happened every night at the same time as the index-maintenance rebuilds, followed by a shrink. Every night the disks were hammered as a monolithic table with 64% fragmentation was rebuilt and the file slowly grew, only to be re-fragmented afterwards!
Kev Riley answered:
Might it be quicker to drop all the nonclustered indexes, rebuild the clustered index, and then recreate the nonclustered ones?
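A sketch of that sequence; the nonclustered index and column names are placeholders, and you should script out the real index definitions before dropping anything:

```sql
-- 1) Drop the nonclustered indexes (repeat for each of the ~20 indexes).
DROP INDEX [IX_B_Col1] ON [dbo].[B];

-- 2) Rebuild only the clustered index; with no nonclustered indexes
--    in place, this is the bulk of the work.
ALTER INDEX [PK_A] ON [dbo].[B] REBUILD WITH (FILLFACTOR = 80);

-- 3) Recreate the nonclustered indexes from the saved definitions.
CREATE NONCLUSTERED INDEX [IX_B_Col1] ON [dbo].[B] ([Col1]);
```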

Kev Riley commented:
The other option is to create a second, empty version of the table with the right clustered index, insert the data from the original table, and then recreate the nonclustered indexes. You need space to hold two copies of the data, but the advantage is that nothing tries to access the data in the new table while the inserts happen. The final step is to rename the old table and give the new table its old name.
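A sketch of that copy-and-swap approach; all object names are placeholders, and the new table's column list must match the original:

```sql
-- 1) Create dbo.B_new with the same columns as dbo.B and the desired
--    clustered index, but without the nonclustered indexes yet.

-- 2) Copy the rows; TABLOCK allows minimal logging under the
--    SIMPLE or BULK_LOGGED recovery model.
INSERT INTO dbo.B_new WITH (TABLOCK)
SELECT * FROM dbo.B;

-- 3) Recreate the nonclustered indexes on dbo.B_new, then swap names.
EXEC sp_rename 'dbo.B', 'B_old';
EXEC sp_rename 'dbo.B_new', 'B';
```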
Kev Riley commented:
Because every nonclustered index contains a row locator (the clustered index key), every modification to the clustered index has to be applied to the 20 nonclustered indexes too: more locks, more resources, longer execution time.
Fatherjack commented:
Yeah, I'd be tempted to try dropping all the indexes and rebuilding on a test system to see how it goes.
gotqn commented:
I suspected that, but I am trying to find some evidence for why this might be quicker.
gotqn commented:
Thanks, I am going to test this and come back with the results.