Update Statistics on Production Server


I have a slow-performing query, and in order to tune it I'm planning to take a step-by-step approach. The query started performing slowly on 12/8, which is also the last time three of its seven statistics were updated.

The primary key statistic was last updated on 12/8 and hasn't been updated since. A statistic on a unique, non-clustered column was also last updated on 12/8, as were the statistics on some non-unique, non-clustered columns. The system statistic was last updated on 12/10.

The query I'm referring to (a stored procedure) used to run in 30-40 minutes and now takes 5+ hours. I'm thinking updating the statistics might help.

My question is: is it feasible to update the statistics on these columns during production hours, or is it typically recommended to update statistics later at night? I have never updated statistics before and am not familiar with how long it takes.
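For reference, the last-update dates mentioned above can be checked per statistic. A minimal sketch, assuming SQL Server 2005 or later; `dbo.YourTable` is a placeholder for the table behind the slow query:

```sql
-- List every statistic on one table along with the date it was last updated.
-- dbo.YourTable is hypothetical; substitute the table involved in the query.
SELECT  s.name                              AS statistic_name,
        STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM    sys.stats AS s
WHERE   s.object_id = OBJECT_ID('dbo.YourTable');
```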

Thanks in advance for all replies.

Regards, S


asked Dec 18, 2009 at 11:56 AM in Default


2 answers

Usually I update all the statistics in the database, and I do this in off-hours. However, there are instances where the cost, if any, of updating statistics is far exceeded by the benefit it offers. For example, we have a rather poorly designed table. For all sorts of reasons the business won't let us fix it. But, due to its poor design, the stats go out of date in a matter of hours. We have a special job that runs once every half hour, all day long, that updates the stats on that one table. It does impose a slight performance hit, but the benefits (good execution plans) are well worth it.

So, in short, depending on your situation, yes, you can, and should, run the stats updates during production hours. Just be sure to monitor to see if you're causing problems or, conversely, if you're not updating enough.
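A targeted update like the one described above can be as simple as the following sketch. The object names are placeholders, not anything from the thread:

```sql
-- Update all statistics on a single table using the default sampling rate.
-- dbo.ProblemTable is hypothetical; this is the kind of statement a
-- scheduled half-hourly job might run against one troublesome table.
UPDATE STATISTICS dbo.ProblemTable;

-- Or update just one statistic with a lighter sample, to keep the cost
-- low if you run it during production hours:
UPDATE STATISTICS dbo.ProblemTable IX_ProblemTable_SomeColumn
    WITH SAMPLE 10 PERCENT;
```

The sampling rate is the main knob for trading accuracy against runtime on a busy server.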


answered Dec 18, 2009 at 12:02 PM

Grant Fritchey ♦♦

How do I know if I'm not updating enough?

Also, how could I cancel the statistics update if I see some unusual DB activity, too much I/O, etc., just in case?

Dec 18, 2009 at 12:36 PM Slick84

No, don't cancel just in case. If you see excessive blocking on the queries that access your table, and those blocks coincide with the stats update, you may need to reconsider, reschedule, whatever. Testing and data collection are the key.

How do you know if you're updating enough? Are you getting good execution plans? Then you're updating enough. If the execution plans are going all wonky and the estimated and actual rows aren't matching, then you may need to update more often, or you may need to update with a FULL scan. It really depends. Collecting performance data is the only way to know.
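A minimal sketch of the FULL scan option mentioned above, plus a way to inspect a statistic's histogram when comparing estimated and actual rows. The table and statistic names are placeholders:

```sql
-- Rebuild one statistic by reading every row instead of a sample.
-- More accurate, but more expensive than a sampled update.
UPDATE STATISTICS dbo.ProblemTable IX_ProblemTable_SomeColumn
    WITH FULLSCAN;

-- Inspect the statistic's header, density vector, and histogram to see
-- how well it reflects the current data distribution.
DBCC SHOW_STATISTICS ('dbo.ProblemTable', IX_ProblemTable_SomeColumn);
```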

Dec 18, 2009 at 01:02 PM Grant Fritchey ♦♦

Stupid ones. Or do you want more details than that? It has fairly wide keys and lots of inserts and deletes. In fact, they don't update anything because "that's expensive." Instead they delete and re-insert all the records every time they do an update. Shockingly, this leads to a lot of fragmentation, and the statistics go out of date despite the fact that not much data has changed (as a percentage of the amount stored). Believe me, we've had fights with this development team and its original DBA.

Dec 21, 2009 at 05:12 PM Grant Fritchey ♦♦

Great answer from Grant (as always)! I just want to make sure your performance issue is really caused by statistics. Yes, statistics are important for performance, no doubt about that, and you say you are not familiar with "update statistics"; how about index fragmentation? Are you familiar with that? And what about T-SQL tuning?

I haven't seen your stored procedure, and I have no clue about the size of your database, but if you have a well-optimized procedure and an optimized database design, then from my experience you have a large database, with something like 100,000,000+ records in the tables involved. If you don't have a large database, then I recommend you look at your stored procedure and your database design.

Sometimes just a small change in the code, or a new index, will make a procedure that was taking 30 minutes run in 30 seconds instead; I have done it many times.

A procedure that goes from 30 minutes to 5 hours without any schema changes may be explained by:

  • Outdated statistics like you suspect
  • Fragmented index
  • Changed end-user behavior, e.g. workloads that cause a lot of table scans, leading to memory or I/O bottlenecks.
  • Increased size of a table. With poor design, everything seems fine until you reach a threshold, causing SQL server to use another execution plan.
  • etc.
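If index fragmentation is among the suspects above, it can be measured before deciding to rebuild or reorganize. A minimal sketch, assuming SQL Server 2005 or later; the database and table names are placeholders:

```sql
-- Report average fragmentation and size per index for one table.
-- 'YourDatabase' and dbo.YourTable are hypothetical names.
SELECT  i.name AS index_name,
        ps.avg_fragmentation_in_percent,
        ps.page_count
FROM    sys.dm_db_index_physical_stats(
            DB_ID('YourDatabase'),
            OBJECT_ID('dbo.YourTable'),
            NULL, NULL, 'LIMITED') AS ps
JOIN    sys.indexes AS i
    ON  i.object_id = ps.object_id
    AND i.index_id  = ps.index_id;
```

The LIMITED scan mode keeps the check itself cheap, which matters if you run it on a production server.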

"Not planning for performance is planning for disaster"


answered Dec 21, 2009 at 05:14 AM

Håkan Winther
