Hello All, It is advised not to have too many indexes on a table, as they reduce performance; I have never used more than 27 indexes on a table. My question is this: SQL Server 2005 allowed 250 indexes per table (1 clustered and 249 nonclustered), and I am sure most of us never create more than 30 or so. Why, then, did Microsoft increase the limit from 250 indexes in SQL Server 2005 to 1000 in SQL Server 2008? Can anyone help me? Why did Microsoft provide for 1000 indexes when it is clear we are not going to use them? Is there some other reason for it? Thank you,
27 is extremely high. Most tables I support have only one index, the clustered index; the rest typically have 2-3 additional indexes. I think 4-5 is the most I've ever created (although I've seen some star schemas with more). From the sounds of it, I don't think you're using indexes appropriately. While SQL Server can do index intersection, where it performs a seek against two or more indexes on a single table and then joins their data together, it's a pretty rare event. Lots of people count on this behavior despite the fact that it doesn't happen very often, and instead of building a compound index or using the INCLUDE clause so that one index supports their query's needs, they put a separate single-column index on lots of columns. That's a very problematic approach for any number of reasons.
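To illustrate the difference: rather than relying on index intersection across several single-column indexes, a frequent query is usually better served by one compound index whose INCLUDE list covers the SELECT list. The table and column names below are made up purely for illustration:

```sql
-- Suppose the frequent query is:
--   SELECT OrderDate, TotalDue
--   FROM dbo.Orders
--   WHERE CustomerID = @cust AND OrderStatus = @status;

-- Instead of separate indexes on CustomerID and OrderStatus
-- (hoping for index intersection), build one compound index:
-- the key columns support the WHERE clause, and the INCLUDE
-- columns make the index covering for the SELECT list.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderStatus
    ON dbo.Orders (CustomerID, OrderStatus)
    INCLUDE (OrderDate, TotalDue);
```

With this one index, the query can be answered entirely from an index seek, with no key lookups and no intersection needed.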
There may be valid reasons for having lots of indexes; however, there is an overhead associated with maintaining each one. I can't see that many being particularly valuable in an OLTP environment, assuming the code is properly written to take advantage of a few carefully constructed indexes; in an OLAP environment, a larger number of indexes is more likely to be useful.
If you have that many indexes on a table, most of them are probably unused. You can find out how your indexes are used by looking at their operational stats. To do that, use the dynamic management function sys.dm_db_index_operational_stats: SELECT * FROM sys.dm_db_index_operational_stats(NULL, NULL, NULL, NULL); This gives you a list of all indexes and their operational stats. If you want more detailed information, you can join to the sys.indexes view, and you can limit the result to the current database by passing DB_ID() as the first parameter. Remember that the dynamic management views are reset when your SQL Server instance restarts.

If you have low numbers in range_scan_count, singleton_lookup_count and forwarded_fetch_count, but high numbers in the leaf_insert_count, leaf_update_count and leaf_delete_count columns (and their nonleaf_* counterparts), then your index is being maintained far more than it is read: it is very inefficient and gives you bad performance. You can also look at the leaf_allocation_count and nonleaf_allocation_count columns to see whether your indexes are causing a lot of page splits. As a matter of fact, take a look at the specification of this management function and you will find a lot more to examine (latch and lock wait stats, etc.) to determine how your indexes affect performance.

In a data warehouse solution you can have a lot of indexes because of the ratio between reads and writes, but I recommend you keep track of how they are used.
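As a sketch of the join described above (column list trimmed for readability; run it in the database you want to inspect), it could look something like this:

```sql
-- Operational stats for indexes in the current database,
-- with index names from sys.indexes. Read activity and
-- maintenance writes appear side by side, sorted so the
-- most heavily maintained indexes come first.
SELECT  OBJECT_NAME(ios.object_id)  AS table_name,
        i.name                      AS index_name,
        ios.range_scan_count,
        ios.singleton_lookup_count,
        ios.forwarded_fetch_count,
        ios.leaf_insert_count,
        ios.leaf_update_count,
        ios.leaf_delete_count,
        ios.leaf_allocation_count   -- leaf-level page splits/allocations
FROM    sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
JOIN    sys.indexes AS i
        ON  i.object_id = ios.object_id
        AND i.index_id  = ios.index_id
ORDER BY ios.leaf_insert_count + ios.leaf_update_count + ios.leaf_delete_count DESC;
```

An index near the top of this list with zeros in the read columns is a strong candidate for removal.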
I agree with Grant Fritchey: as he said, 27 indexes on a table is high unless the database is not OLTP and updates to that table are very rare. In your case, with 27 indexes, every insert forces SQL Server to maintain all 27 of them, so it performs 27 additional writes to index pages; and if your index fill factors are not configured properly, you get page-split overhead on top of that. I recommend you have a look at your existing index structure, figure out which indexes are unused or expensive, and remove those. Create a clustered index if one is missing. Prefer covering indexes. Design your indexes smartly, and try to choose a highly selective column as your key column.
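One way to find the unused indexes mentioned above is sys.dm_db_index_usage_stats (note these counters also reset on instance restart, so only trust them after a representative workload period). A rough sketch:

```sql
-- Nonclustered indexes in the current database that receive
-- writes but have never been read since the last restart.
-- A NULL usage row means the index has not been touched at all.
SELECT  OBJECT_NAME(i.object_id) AS table_name,
        i.name                   AS index_name,
        us.user_seeks,
        us.user_scans,
        us.user_lookups,
        us.user_updates          -- maintenance cost with no read benefit
FROM    sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS us
        ON  us.object_id   = i.object_id
        AND us.index_id    = i.index_id
        AND us.database_id = DB_ID()
WHERE   i.index_id > 1           -- skip heaps and the clustered index
        AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
        AND ISNULL(us.user_seeks + us.user_scans + us.user_lookups, 0) = 0;
```

Anything this returns is paying the write penalty on every insert and update without ever helping a query, making it a prime candidate to drop.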