So I've been thinking about using FILLFACTOR when the storage is entirely on SSDs. Is the benefit worth the cost of a 70%-90% fill factor to prevent fragmentation when seek times are <0.1ms? Perhaps it makes sense to keep a FILLFACTOR for clustered indexes, but what about non-clustered indexes? My coworker firmly believes that the extra space this option uses is well worth it, even on SSDs. The DB in question is used for OLTP and has a 70/30 read/write ratio. What are your thoughts, SSC?
Seeks are an expensive operation, relatively speaking, regardless of the medium. OLTP already requires seeks to do its job - why introduce easily avoided ones? Why have data shifted around by unnecessary page splits? I would still tend to work with a FILLFACTOR. Only testing can truly determine whether it's a concern in your environment.

EDIT ----------

Another factor to consider is that pages read from disk are brought into RAM still fragmented, so you keep paying that unnecessary resource utilization there as well.
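To make that concrete, here's a sketch of how you'd apply a fill factor in T-SQL (the table and index names are hypothetical placeholders):

```sql
-- Rebuild a hypothetical non-clustered index, leaving 10% free space per
-- leaf page so inserts and updates have room before a page split occurs.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (FILLFACTOR = 90);

-- Or set it at creation time:
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
WITH (FILLFACTOR = 90);
```

Note that FILLFACTOR only applies when the index is built or rebuilt - the free space is not maintained during normal DML, which is why you'd pair this with periodic rebuilds and, as above, testing.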
I would say that best practice is best practice. Can you guarantee that the database will always be on SSD? Does your business continuity hardware have SSDs too? My 2c says build it like it's a normal database, so use all settings as normal. There are plenty of examples of programmers getting sloppy as CPU performance sped up and then getting caught out as the needs of the system showed up their work when the system was even more critical to the business. Don't let your database settings show you up like that. If you have SSDs then the system must be expected to need high-end IOPS, so any saving is a saving that other transactions can use.
edit -> This is more about using defragging tools in general than FILLFACTOR per se, but to an extent, that's what FILLFACTOR is about.

Fatherjack and Blackhawk might be right, but there is one disadvantage of defragging an SSD - it can shorten the drive's life. So if performance is this drive's most important function and defragging proves to help, you might go for it. But, perhaps more importantly, Intel [does not recommend using defrag tools]:

> **Do I need to defragment my Intel® Mainstream Solid-State Drives (using Windows* Disk Defragmenter* or similar program)?** No. SSD devices, unlike traditional HDDs, see no performance benefit from traditional HDD defragmentation tools. Using these tools simply adds unnecessary wear to the SSD. It is recommended that you disable any automatic or scheduled defragmentation utilities for your Intel SSD.

Also, are you using TRIM? If you can use TRIM, the gains may not be worth it according to [this site].
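Either way, before rebuilding or defragging anything it's worth measuring the fragmentation you actually have. A sketch using the standard DMV (run in the database in question; the 10% threshold is just an illustrative cutoff):

```sql
-- Report average fragmentation and page fullness for indexes in the
-- current database, worst first.
SELECT OBJECT_NAME(ips.object_id)          AS table_name,
       i.name                              AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10   -- arbitrary example threshold
ORDER BY ips.avg_fragmentation_in_percent DESC;
```

If the numbers stay low under your real workload, the FILLFACTOR debate may be moot for those indexes.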
+1 to you, Blackhawk and Fatherjack, but I just wanted to add some points.

1. Typically, SSDs are more asymmetric than HDDs. What do I mean by that? I mean that the performance gap between reads and writes is typically greater on SSDs than on HDDs. Therefore, writes are **relatively** more expensive on SSDs and are to be avoided more. So try to avoid page splits - they usually involve a fair bit of writing.

2. I wanted to pick up on your comment that 'perhaps you should keep it for a clustered index', and ask why. A rule of thumb that has served me well is that a clustered index should be unique, as narrow as possible, and ever-increasing. In fact, it's Oleg who describes it like that, but I have lived by the same mantra. A clustered index like that eradicates the need for a FILLFACTOR, because you know you are only ever writing to the end of the logical page sequence - a fill factor there would just increase the number of pages involved in a scan. Now, it may be that some of your tables have a different clustered index, for whatever reason (and perfectly valid those reasons may be); in that case, use a fill factor. But my main point is that it's **entirely** likely that the main beneficiary of a fill factor would be your non-clustered indexes, where data is naturally inserted into the middle of the page sequence.

3. Don't mix up random and sequential I/O. SSDs absolutely blaze through random I/O (seeks) but won't show as much of a benefit on sequential I/O (scans). Don't get me wrong, they're still fast for sequential I/O, but they don't outpace a normal hard disk by the same margin. However, it's a slightly moot point, because in a busy system many sequential I/O requests at different points on the disk effectively translate into random I/O.
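Putting points 1 and 2 together, a sketch of how that might look in practice (table, column, and fill factor values here are all illustrative, not a recommendation for your schema):

```sql
-- Hypothetical OLTP table: the clustering key is unique, narrow, and
-- ever-increasing (IDENTITY), so inserts only ever hit the last page
-- and the clustered index can stay fully packed.
CREATE TABLE dbo.Orders
(
    OrderID    INT IDENTITY(1,1) NOT NULL,
    CustomerID INT               NOT NULL,
    OrderDate  DATETIME2         NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
        WITH (FILLFACTOR = 100)  -- no mid-page inserts, so no headroom needed
);

-- Non-clustered index on a column whose values arrive in random order:
-- leave headroom to absorb mid-page inserts and reduce page splits.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
WITH (FILLFACTOR = 85);
```

The design choice is exactly the asymmetry argument above: spend a little read-side space (more pages to scan) on the non-clustered index to save the relatively expensive writes that page splits cause.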