Slick84 asked

Update Statistics on Production Server


I have a slow-performing query, and in order to tune it I'm planning to take a step-by-step approach. The query started performing slowly from 12/8 onward, which is the last time three out of the seven statistics were updated.

The primary key statistic was last updated on 12/8 and hasn't been updated since. A statistic on a unique (non-clustered) column was also last updated on 12/8, and the statistics on the other non-unique, non-clustered columns were likewise last updated on 12/8 and haven't been touched since. The system statistic was last updated on 12/10.

The query I'm referring to (a stored procedure) used to run in 30-40 minutes and now takes 5+ hours. I'm thinking updating the statistics might help.

My question is: is it feasible to update the statistics on these columns during production hours, or is it typically recommended to update them later at night? I have never updated statistics before and am not familiar with how long it takes.
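As a first diagnostic, the last-update time of each statistic on a table can be confirmed with the built-in `STATS_DATE` function; `dbo.YourTable` below is a placeholder for the table used by the slow procedure:

```sql
-- Sketch: list every statistic on a table and when it was last updated.
-- Replace dbo.YourTable with the actual table name.
SELECT s.name AS statistic_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM   sys.stats AS s
WHERE  s.object_id = OBJECT_ID('dbo.YourTable')
ORDER  BY last_updated;
```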

Thanks in advance for all replies.

Regards, S


Grant Fritchey answered

Usually I update all the statistics in the database, and I do this in off-hours. However, there are instances where the cost, if any, of updating statistics is far exceeded by the benefit it offers. For example, we have a rather poorly designed table. For all sorts of reasons the business won't let us fix it. But due to its poor design, the stats go out of date in a matter of hours. We have a special job that runs once every half hour and updates the stats on that one table, all day long. It does incur a slight performance hit, but the benefits (good execution plans) are well worth it.

So, in short, depending on your situation, yes, you can, and should, run the stats updates during production hours. Just be sure to monitor to see if you're causing problems or, conversely, if you're not updating often enough.
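A targeted update like the one described above can be sketched with standard T-SQL; `dbo.YourTable` is a placeholder, and the default sampled scan is usually the cheaper starting point:

```sql
-- Sketch: refresh all statistics on one table using default sampling.
UPDATE STATISTICS dbo.YourTable;

-- Heavier alternative: refresh statistics for the whole database,
-- typically scheduled as an off-hours job.
EXEC sp_updatestats;
```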


Slick84 commented:
How do I know if I'm not updating enough? Also, how could I cancel the statistics update if I see some unusual DB activity, too much I/O, etc., just in case?
Grant Fritchey ♦♦ commented:
No, don't cancel just in case. If you see excessive blocking on the queries that access your table, and those blocks coincide with the stats update, you may need to reconsider, reschedule, whatever. Testing and data collection are the key. How do you know if you're updating enough? Are you getting good execution plans? Then you're updating enough. Are the execution plans going all wonky, with estimated and actual row counts that don't match? Then you may need to update more often, or you may need to update with a FULL scan. It really depends. Collecting performance data is the only way to know.
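For reference, the full-scan variant mentioned here reads every row when building the histogram, trading extra I/O for accuracy; `dbo.YourTable` and `IX_YourIndex` are placeholder names:

```sql
-- Sketch: rebuild all statistics on a table from a full scan of the data.
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;

-- Or target a single statistic/index when only one is the problem.
UPDATE STATISTICS dbo.YourTable IX_YourIndex WITH FULLSCAN;
```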
Grant Fritchey ♦♦ commented:
Stupid ones. Or do you want more details than that? It has fairly wide keys and lots of inserts and deletes. In fact, they don't update anything because "that's expensive." Instead they delete and re-insert all the records every time they do an update. Shockingly, this leads to a lot of fragmentation, and the statistics go out of date despite the fact that not that much data has changed (as a percentage of the amount stored). Believe me, we've had fights with this development team and its original DBA.
Håkan Winther answered

Great answer from Grant (as always)! I just want to make sure your performance issue really is caused by statistics. Yes, statistics are important for performance, no doubt about that, and you say you are not familiar with "update statistics"; how about index fragmentation? Are you familiar with that? And what about T-SQL tuning?
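Index fragmentation can be checked with the `sys.dm_db_index_physical_stats` DMF (SQL Server 2005+); `dbo.YourTable` is a placeholder, and `'LIMITED'` mode keeps the check itself cheap:

```sql
-- Sketch: report fragmentation for every index on one table.
SELECT i.name AS index_name,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM   sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.YourTable'),
                                      NULL, NULL, 'LIMITED') AS ps
JOIN   sys.indexes AS i
       ON i.object_id = ps.object_id AND i.index_id = ps.index_id
ORDER  BY ps.avg_fragmentation_in_percent DESC;
```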

I haven't seen your stored procedure, and I have no clue about the size of your database, but if you have a well-optimized procedure and an optimized database design, then from my experience you have a large database with something like 100,000,000+ records in the tables involved. If you don't have a large database, then I recommend you look at your stored procedure and your database design.

Sometimes just a small change in the code, or a new index, turns a procedure that was taking 30 minutes into one that takes 30 seconds; I have done it many times.

A procedure that goes from 30 minutes to 5 hours without any schema changes may be caused by:

  • Outdated statistics like you suspect
  • Fragmented index
  • Changed behavior of end users, e.g. queries that cause a lot of table scans, creating memory or I/O bottlenecks.
  • Increased size of a table. With a poor design, everything seems fine until you reach a threshold that causes SQL Server to use another execution plan.
  • etc.
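On newer versions of SQL Server (2008 R2 SP2 / 2012 SP1 onward), the first cause in the list above, outdated statistics, can be spotted by comparing the modification counter against the row count; `dbo.YourTable` is a placeholder:

```sql
-- Sketch: for each statistic, show when it was last updated and how many
-- rows have changed since. A large modification_counter relative to rows
-- suggests the statistic is stale.
SELECT s.name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM   sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE  s.object_id = OBJECT_ID('dbo.YourTable');
```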

"Not planning for performance is planning for disaster"

