
puneetpathak1979 asked

Performance Issue

I have a production database that is 26 GB in size. Some of my queries have been facing performance issues for the past few weeks. The tables these queries use contain 5,000,000+ records each. The tables are indexed and the queries are optimized to the best of my knowledge, but every now and then at peak hours CPU usage reaches 100%, and Resource Monitor shows SQL Server using most of the CPU.

When I run SELECT * FROM sys.dm_os_wait_stats, the top result shows:

**wait_type** - PAGEIOLATCH_SH, **waiting_tasks_count** - 2454378, **wait_time_ms** - 75377662, **max_wait_time_ms** - 11980, **signal_wait_time_ms** - 73238

I have 8 GB of RAM on my production server. What steps should I take to improve performance?

Thank you for your help.
Puneet Pathak
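A minimal sketch of how the same DMV is typically ranked so the heaviest waits surface first (the TOP 10 cutoff and the column list are assumptions, not part of the original query):

```sql
-- Rank wait types by total wait time so the heaviest waits surface first,
-- instead of scanning the full sys.dm_os_wait_stats output.
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       max_wait_time_ms,
       signal_wait_time_ms
FROM   sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```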
troubleshooting
1 comment

@SQLShark commented:
Can you post the execution plan of one of the long running queries?

1 Answer

Magnus Ahlkvist answered
I agree that you should post one of the long-running queries and its execution plan. But you say this started recently and, as I understand it, without much changing. My first place to look would be index fragmentation and statistics. Do you have maintenance jobs running to defragment indexes and update statistics on your tables? If not, you may want to put that in place.

To check how fragmented your indexes are, you can run this query:

select * from sys.dm_db_index_physical_stats(DB_ID('your database'),0,-1,0,NULL);

Don't bother too much about relatively small indexes (less than a few hundred pages in the page_count column) - they will stay fragmented no matter how much you reorganize or rebuild them.

I would also suggest updating statistics on your tables manually and regularly, instead of just hoping for auto update statistics to do the job for you. There are two reasons for that. First, the threshold of rows that must change before auto update statistics kicks in is pretty large, and depending on your data distribution that could mean bad statistics. Second, auto update statistics fires when SQL Server realises the statistics need updating, which is normally in the middle of running a query: the optimizer sees that the statistics are stale, issues the update, and your query is left waiting for that to finish before SQL Server proceeds to create an execution plan.

You didn't mention which version of SQL Server you are running, but from SQL Server 2008 R2 SP2 you have the dynamic management function sys.dm_db_stats_properties, which you can use to see how many changes have been made to the underlying column(s) of each statistics object. Use (and adapt) this query to explore your statistics objects:

select * from sys.stats s cross apply sys.dm_db_stats_properties(s.object_id,s.stats_id);
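If you want to act only on indexes that are both large enough to matter and noticeably fragmented, a minimal sketch along these lines may help (the 1000-page floor and the 5%/30% reorganize/rebuild split are common rules of thumb, not values from the answer above):

```sql
-- List only indexes large enough to matter and fragmented enough to act on.
-- Thresholds are rules of thumb - adjust them for your environment.
SELECT  OBJECT_NAME(ips.object_id)           AS table_name,
        i.name                               AS index_name,
        ips.avg_fragmentation_in_percent,
        ips.page_count,
        CASE WHEN ips.avg_fragmentation_in_percent >= 30 THEN 'REBUILD'
             ELSE 'REORGANIZE' END           AS suggested_action
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN    sys.indexes AS i
        ON  i.object_id = ips.object_id
        AND i.index_id  = ips.index_id
WHERE   ips.avg_fragmentation_in_percent >= 5
  AND   ips.page_count >= 1000
  AND   i.name IS NOT NULL                   -- skip heaps
ORDER BY ips.avg_fragmentation_in_percent DESC;
```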
2 comments

puneetpathak1979 commented:
Thank you Magnus for the quick reply. I followed your instructions and found that most of my indexes had fragmentation of 70%-80%. I am defragmenting them now. Hopefully everything will come back to normal. Thank you, Puneet Pathak
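A minimal sketch of the maintenance statements involved, with placeholder names (dbo.Orders and the IX_ index names are hypothetical):

```sql
-- Rebuild a heavily fragmented index and reorganize a moderately fragmented one.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;     -- typically for >30% fragmentation
ALTER INDEX IX_Orders_OrderDate  ON dbo.Orders REORGANIZE;  -- typically for 5-30% fragmentation
UPDATE STATISTICS dbo.Orders;                               -- REORGANIZE does not refresh statistics
```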
Magnus Ahlkvist commented:
I think you should consider setting up index defragmentation and statistics updates as scheduled jobs. We're using Ola Hallengren's scripts for that. Look at http://ola.hallengren.com where you can download the scripts and get instructions on how to use them.
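As a rough sketch of what such a scheduled job might execute once Ola Hallengren's MaintenanceSolution is installed (parameter names and values follow the examples on ola.hallengren.com; verify them against the version you download):

```sql
-- Reorganize indexes between 5% and 30% fragmentation, rebuild above 30%,
-- and update statistics that have changed since the last update.
EXECUTE dbo.IndexOptimize
        @Databases = 'USER_DATABASES',
        @FragmentationLow = NULL,
        @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
        @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
        @FragmentationLevel1 = 5,
        @FragmentationLevel2 = 30,
        @UpdateStatistics = 'ALL',
        @OnlyModifiedStatistics = 'Y';
```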
