question

Katie 1 asked

Index structure: Performance issue

All, we have a database system that was designed to process data, run a number of calculations on the available data, and produce reports for every transaction day. It is a database-intensive application. The problem I am facing right now is performance: queries are taking longer than they should, and inserts take a long time because the data already inserted is being used for the calculations. So I am planning to look at how to improve performance.

Having taken a look at the indexing structure, it seems the tables that need indexes are also the tables where a lot of inserting is happening, so adding indexes might hamper the inserts. Any suggestions on how to handle this situation? I am also planning to change the recovery model to bulk-logged as another option to improve performance. What would the experts suggest? Please advise. Thanks.
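For reference, this is the kind of recovery model change I have in mind (the database name below is just a placeholder for ours):

    -- Check the current recovery model first
    SELECT name, recovery_model_desc
    FROM sys.databases
    WHERE name = N'ReportsDB';

    -- Switch to bulk-logged. Note: from what I've read, only bulk operations
    -- (BULK INSERT, SELECT INTO, index rebuilds) are minimally logged under
    -- this model; ordinary single-row INSERTs are still fully logged.
    ALTER DATABASE ReportsDB SET RECOVERY BULK_LOGGED;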
sql-server-2008 · sql · performance · indexes

Tim answered
You have just identified the constant struggle of every DBA: what worked great last month all of a sudden doesn't work so well. An index that works for a small amount of data might not perform as well against a very large set of data.

The first thing that came to mind reading your question was your statistics. Do you have a job that updates your stats nightly or weekly, or do you rely on SQL Server's auto update statistics option? The auto update feature only kicks in after a certain percentage of the data has changed. That may be acceptable while a table is small; however, as the database grows and that table gets really large, the time between statistics updates grows as well.

You are correct to be concerned that adding unnecessary indexes will impact inserts. However, if an index is needed to remove table scans from your queries, the greater good is to create the index; by reducing table scans you also reduce valuable I/O on your disks. Start looking at the execution plans of those queries to see where you can tune.
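To see how stale things are, something like this (run in the problem database) lists when each statistic was last updated; the table name in the UPDATE STATISTICS line is just an example:

    -- List user-table statistics, oldest first
    SELECT OBJECT_NAME(s.object_id)            AS table_name,
           s.name                              AS stats_name,
           STATS_DATE(s.object_id, s.stats_id) AS last_updated
    FROM sys.stats AS s
    JOIN sys.objects AS o ON o.object_id = s.object_id
    WHERE o.type = 'U'
    ORDER BY last_updated;

    -- Then refresh the stats on a hot table manually, e.g.:
    -- UPDATE STATISTICS dbo.Transactions WITH FULLSCAN;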

Grant Fritchey answered
Based on everything you said, I didn't hear you mention execution plans. I would spend some time seeing what processing is occurring on the system and how it is being done. You may have indexes in place, but they might not be well used; looking at the exec plans will really let you know. A well-structured clustered index can actually increase performance over that of a heap table, so you need to verify that the cluster is in the right place.

Tim has a great point regarding statistics. It's possible they're aging but not updating well, or that the sampled nature of the auto update stats is not adequate. I have not seen a performance increase from using bulk-logged. Have you tested it?

Since you know the hot spots, you need to focus there. Partitioning might be an option, maybe. Indexed views to pre-aggregate the calculations might be an option, maybe (see the sketch below). Hard to say much more without more details. Whatever you do, don't assume it's going to help. Test, test, test so that you know what helps and what doesn't.
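If the daily calculations aggregate over a stable key, an indexed view can pay off. Purely as a sketch, with invented object names:

    -- Indexed view pre-aggregating per-day totals (all names are illustrative).
    -- SCHEMABINDING and two-part table names are required, and COUNT_BIG(*) is
    -- mandatory when the view uses GROUP BY. Amount is assumed NOT NULL, since
    -- an indexed view cannot SUM a nullable expression.
    CREATE VIEW dbo.vDailyTotals
    WITH SCHEMABINDING
    AS
    SELECT TradeDate,
           SUM(Amount)  AS TotalAmount,
           COUNT_BIG(*) AS RowCnt
    FROM dbo.Transactions
    GROUP BY TradeDate;
    GO

    -- The unique clustered index is what actually materializes the view
    CREATE UNIQUE CLUSTERED INDEX IX_vDailyTotals ON dbo.vDailyTotals (TradeDate);

Bear in mind the view is maintained on every insert to the base table, so it shifts cost onto your hot spot. Again, measure before and after.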
