Stopping Clustered Index script

Raj More asked:

I want to stop a loooooong running script that is generating a clustered index on a 77M-row, 20GB table. Is it OK to `kill` the process?
Tags: clustered-index, kill
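
For reference, this is roughly how I found the session that is running the index build; the `LIKE` filter on the command is just a guess at matching it, so treat it as a sketch:

```sql
-- Look for the active request running the CREATE INDEX
-- (excluding this query's own session)
SELECT r.session_id,
       r.command,
       r.status,
       r.wait_type,
       t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID
  AND r.command LIKE '%INDEX%';
```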

Shawn_Melton commented:
It will take about as long to kill it, because it will need to roll back the transaction since it altered the table.
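
If you do issue the `KILL`, you can watch the rollback progress with `WITH STATUSONLY`; the session id 52 below is just a placeholder for whatever spid is running the index build:

```sql
-- Terminate the session running the index build (placeholder spid)
KILL 52;

-- Re-run this to see the estimated rollback completion percentage
-- and estimated time remaining
KILL 52 WITH STATUSONLY;
```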

1 Answer

Grant Fritchey answered:
Have you been seeing a lot of log growth or tempdb growth during the process? I ask because 20GB is a little bit large for a table, but it's not insanely large; I would expect a table like that to be rebuilt in not too long. If it's really killing you, then it's possibly waiting on other resources, and having to grow out files is one of the notorious issues that can impact this sort of operation.
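
A quick way to see whether file growth could be the culprit is to check current sizes and autogrow settings for the user database and tempdb. A rough sketch (sizes in `sys.database_files` are reported in 8 KB pages):

```sql
-- Current file sizes and growth settings for the current database
SELECT name,
       type_desc,
       size * 8 / 1024 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' %'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.database_files;

-- Same idea for tempdb, which the index build may also be hitting
SELECT name,
       size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files;
```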

Grant Fritchey commented:
Well, you can kill the process, but it's like @Shawn_Melton says: you're probably going to wait on it anyway. I'd try to figure out what it's waiting on. Is it blocked? What are the wait stats currently on the server? What are the waits on the process? Stuff like that to figure out what you need to fix.
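
To get at those questions, something along these lines works; the spid 52 is again a placeholder for the session running the index build:

```sql
-- What is that one session waiting on, and is anything blocking it?
SELECT r.session_id,
       r.status,
       r.command,
       r.wait_type,
       r.wait_time,
       r.last_wait_type,
       r.blocking_session_id
FROM sys.dm_exec_requests AS r
WHERE r.session_id = 52;

-- Top accumulated waits on the server since the counters were last cleared
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```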
Usman Butt commented:
@Raj More Glad to know it was the Dev server :) But how are you going to do this in production? I hope the production server has much bigger and better resources to handle such an enormous task. If that is not the case, then my question to you is: why do you need to rebuild a 77 million row table? Depending on your answer, it may be a good time to either implement partitioning (through views if you do not have Enterprise Edition) or copy the data in chunks to a new table in an orderly manner, depending on your environment (a rough sketch of the chunked copy is below). In such cases downtime is usually planned when there is no Enterprise Edition, the off-peak window is small, or the server is short on resources, mainly processor and memory, so you should plan it accordingly. If you can explain your environment and requirements, you will get more tips, as one of our top mentors is already active on this post.
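
For the "copy in chunks" option, a minimal sketch looks roughly like this, assuming the new table already has its clustered index and an ever-increasing key; all of the table and column names here (`LegacyTable`, `NewTable`, `Id`, `Col1`, `Col2`) are placeholders:

```sql
-- Copy rows in batches ordered by the clustered key, keeping each
-- transaction (and therefore each log hit) relatively small.
DECLARE @batch   int    = 500000,
        @last_id bigint = 0,
        @rows    int    = 1;

WHILE @rows > 0
BEGIN
    INSERT INTO dbo.NewTable (Id, Col1, Col2)
    SELECT TOP (@batch) s.Id, s.Col1, s.Col2
    FROM dbo.LegacyTable AS s
    WHERE s.Id > @last_id
    ORDER BY s.Id;

    SET @rows = @@ROWCOUNT;

    -- Advance the watermark to the highest key copied so far
    IF @rows > 0
        SELECT @last_id = MAX(Id) FROM dbo.NewTable;
END;
```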
Raj More commented:
It's a DEV server; my file growth is set to 2GB at a time. The database is 140GB and has over 30% free space.
Raj More commented:
We are building a new data warehouse to replace a legacy system. I was bringing the outputs from PROD into DEV to verify and validate. The queries on the new system we built came back within a few seconds, but due to the lack of indexes on the legacy system table, they were taking really long. I do have partitioning in mind for my day-over-day archive of business data (only for the really large tables; small tables just use a clustered index). This archiving happens during the day when there is very low server traffic.
