Dave Myers asked

Index fragmentation and fill rates

I have quite a few tables with indexes that tend to fragment quickly (they need to be defragmented daily). Can adjusting the fill factor reduce the speed at which these indexes fragment? As a best practice or standard, what fill factor is generally best? (I know it depends on the data and how it is used.)

1 Answer

Håkan Winther answered
Yes, a lower fill factor means it will take longer before the index becomes fragmented. A best practice? As always, "it depends"! It depends on how often the data is inserted/updated/deleted, whether the key is an identity column, and so on.

- 100% fill factor is excellent for lookup tables that are effectively static.
- 100% fill factor may be a good start for an identity column, but it is no guarantee.
- 80–90% fill factor may be a good starting point for other indexes.

You need to test what works best for your database. Keep in mind that a lower fill factor means more pages, which increases the time it takes to scan an index. Build your index with a "best guess" and then monitor your database with the dynamic management function sys.dm_db_index_operational_stats (SQL 2005 or later). It contains a column, leaf_allocation_count, which according to Books Online means "Cumulative count of leaf-level page allocations in the index or heap. For an index, a page allocation corresponds to a page split." Keep in mind that the counter resets after a restart (which you hopefully never need to do).
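A rough sketch of the workflow described above, using hypothetical table and index names (dbo.Orders / IX_Orders_CustomerId — substitute your own objects):

```sql
-- Apply a "best guess" fill factor by rebuilding the index.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
REBUILD WITH (FILLFACTOR = 90);

-- Then monitor leaf-level page allocations; for an index, each
-- allocation corresponds to a page split. Counters reset on restart.
SELECT o.name AS table_name,
       i.name AS index_name,
       s.leaf_allocation_count
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
JOIN sys.objects AS o
  ON o.object_id = s.object_id
WHERE o.is_ms_shipped = 0
ORDER BY s.leaf_allocation_count DESC;
```

An index whose leaf_allocation_count climbs quickly between rebuilds is a candidate for a lower fill factor.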

A good answer and well said! When I am putting data into a table that changes rapidly and where the column being indexed is nonsequential, I tend to start at a 50% fill factor and then adjust. Of course, that can make indexes much larger than they need to be, so it is only a good number if you have no space constraints to worry about.
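For illustration, a 50% starting point on a nonsequential column might look like this (index and table names are made up):

```sql
-- Pages are built half full, leaving room for random inserts
-- before page splits occur; the index occupies roughly twice
-- the space of a 100% fill factor.
CREATE NONCLUSTERED INDEX IX_Trades_ExternalRef
ON dbo.Trades (external_ref)
WITH (FILLFACTOR = 50);
```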
One of the easiest ways to design a table so that its indexes fragment quickly is to fail to implement the guideline to **always** ensure that the clustered index values are ever increasing (never changing goes without saying, of course). For example, designs as terrible as:

- a customer table clustered by last name (or SSN);
- a products table clustered by a GUID with NEWID() as the default;
- a daily quotes table seemingly properly clustered by (date_entered, ticker), where the service that populates prices once per week fetches a week's worth of data for a single ticker, inserts it, and then moves on to the next ticker.

The most frequently encountered abuse is the GUID clustered PK with NEWID() as the default; this one is a clear "winner". In any case, properly designed indexes should withstand DML activity, and if they don't, it could be a good time to revisit their design.
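If a GUID key is truly required, the usual fix for the anti-pattern above is NEWSEQUENTIALID(), which generates ever-increasing values on a given server so that new rows append rather than split pages (table names here are invented for illustration):

```sql
-- Anti-pattern: random GUIDs scatter inserts across the
-- clustered index and cause constant page splits.
CREATE TABLE dbo.Products_Bad
(
    product_id UNIQUEIDENTIFIER NOT NULL
        CONSTRAINT DF_Products_Bad DEFAULT NEWID()
        PRIMARY KEY CLUSTERED,
    name NVARCHAR(100) NOT NULL
);

-- Better: NEWSEQUENTIALID() yields ever-increasing GUIDs,
-- so new rows land at the end of the index. (A smaller
-- INT IDENTITY key is better still if a GUID is not required.)
CREATE TABLE dbo.Products_Good
(
    product_id UNIQUEIDENTIFIER NOT NULL
        CONSTRAINT DF_Products_Good DEFAULT NEWSEQUENTIALID()
        PRIMARY KEY CLUSTERED,
    name NVARCHAR(100) NOT NULL
);
```

Note that NEWSEQUENTIALID() can only be used in a column default, not called directly in a query.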
Superb points by all three of you! @Dave Myers - pay extra attention to what Timothy said about nonsequential data and what Oleg said about sequential inserts; these are key to keeping fragmentation low. If you are lucky enough to have somewhat smaller tables and Enterprise Edition, online index rebuilds can alleviate problems without blocking other sessions (a rebuild can run during normal operation). You could rebuild/reorganize multiple times a day if the performance benefit is high enough.
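A quick sketch of the two online options mentioned above (dbo.Orders is an assumed table name):

```sql
-- Online rebuild (Enterprise Edition only): the index stays
-- available to readers and writers while it is rebuilt.
ALTER INDEX ALL ON dbo.Orders
REBUILD WITH (ONLINE = ON);

-- REORGANIZE is always an online operation in every edition;
-- it is lighter-weight than a rebuild but less thorough.
ALTER INDEX ALL ON dbo.Orders REORGANIZE;
```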