
Ashesh asked

Progress of update query

We have a very large table with close to a billion records. Is there any way we can see the progress of the update statements issued?
tsql · update

WilliamD answered
I must ask: if it is necessary to do an update on billions of rows, why not batch process them? Batching large updates/inserts/deletes can be orders of magnitude quicker, because of the locking, blocking and logging these actions cause. I suggest you look into creating a job/procedure that processes `N` rows at a time, and possibly placing that in a loop or a regularly recurring SQL Agent job. If you do it this way, you will see how much faster it can be, as well as reducing locking/blocking on your system (I don't know if that is an issue at present).
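
A minimal sketch of this batching approach (table, column and value names are hypothetical). A side benefit is that the loop doubles as a rough progress indicator, since you can print the running total after each batch:

```sql
-- Hypothetical example: update dbo.BigTable in batches of 50,000 rows.
DECLARE @BatchSize    INT    = 50000;
DECLARE @RowsAffected INT    = 1;
DECLARE @TotalRows    BIGINT = 0;

WHILE @RowsAffected > 0
BEGIN
    UPDATE TOP (@BatchSize) dbo.BigTable
    SET    SomeColumn = 'NewValue'
    WHERE  SomeColumn <> 'NewValue';   -- only touch rows that still need updating

    SET @RowsAffected = @@ROWCOUNT;
    SET @TotalRows    = @TotalRows + @RowsAffected;

    -- WITH NOWAIT flushes the message immediately, so you see progress as it happens
    RAISERROR('Updated %I64d rows so far...', 0, 1, @TotalRows) WITH NOWAIT;
END
```

Each `UPDATE TOP (...)` commits as its own short transaction, which keeps the log from ballooning and gives other sessions a chance to get in between batches.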

Matt Whitfield answered
SELECT [percent_complete] FROM [sys].[dm_exec_requests] WHERE [session_id] = <session_id of the UPDATE>
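
A fuller form of the same query (the session_id value of 53 is a placeholder for the spid running the UPDATE). Note that, as the comments below point out, `percent_complete` stays at 0 for ordinary queries; columns such as `reads`, `writes` and `cpu_time` still give a rough sense of activity, though:

```sql
-- 53 is a placeholder; substitute the spid of the session running the UPDATE.
SELECT [session_id],
       [status],
       [command],
       [percent_complete],   -- only populated for backup/restore, DBCC, rollbacks, etc.
       [reads],
       [writes],
       [cpu_time]
FROM   [sys].[dm_exec_requests]
WHERE  [session_id] = 53;
```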
4 comments

Matt Whitfield ♦♦ commented:
@WilliamD - very good point :) Need more coffee in the morning...
Håkan Winther commented:
The only operations that return percent_complete are: backup/restore, DBCC CHECKDB/CHECKTABLE etc., DBCC SHRINKDATABASE/SHRINKFILE, DBCC INDEXDEFRAG, ALTER INDEX REORGANIZE, and rollback operations.
Cyborg commented:
Thanks Matt! That's new to me.
WilliamD commented:
@Matt - According to BOL, this column is only populated for certain operations, including rollbacks. There is an explicit note saying: "This does not provide progress data for queries." I assume an UPDATE falls under that exclusion.
Håkan Winther answered
I agree with WilliamD. Trying to update billions (or even millions) of records in one single batch is not an optimal solution. Besides the locking issues, which you may partially address with "read committed snapshot" isolation in SQL 2005 and later (at the cost of a lot of extra I/O in tempdb), you may have trouble with the transaction log: it will probably grow very large even under the simple recovery model, because everything is handled in one huge transaction. Try to keep transactions as short as possible. If you need to do this update while the system is "online", it will take a lot longer, because you can't disable foreign key constraints and indexes. If you can take the system "offline", you can disable all the foreign keys on the table and all the indexes before the update, and re-enable them afterwards.
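
A sketch of that offline approach (the index and table names are hypothetical):

```sql
-- Before the big update: disable a nonclustered index and all FK checks
-- on the table (hypothetical object names).
ALTER INDEX IX_BigTable_SomeColumn ON dbo.BigTable DISABLE;
ALTER TABLE dbo.BigTable NOCHECK CONSTRAINT ALL;

-- ... run the large UPDATE here ...

-- Afterwards: rebuild the disabled index and re-enable the constraints.
-- WITH CHECK revalidates existing rows so the constraints stay trusted.
ALTER INDEX IX_BigTable_SomeColumn ON dbo.BigTable REBUILD;
ALTER TABLE dbo.BigTable WITH CHECK CHECK CONSTRAINT ALL;
```

The `WITH CHECK CHECK CONSTRAINT` step is the part people usually forget; without it the constraints come back marked untrusted and the optimizer can't rely on them.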

Magnus Ahlkvist answered
I agree with what WilliamD and Håkan Winther say. But I'd also like to add: what update operation do you need to perform on billions of rows at once? Is this something that is done regularly? Please tell us more about the scenario. Are there perhaps other design options to explore?
