FILLFACTOR and SSD storage

So I've been thinking about using FILLFACTOR when the storage is entirely on SSD drives. Is the cost worth the benefit of a 70%-90% FILLFACTOR to prevent fragmentation, when seek times are under 0.1 ms?

Perhaps it makes sense to keep FILLFACTOR for clustered indexes, but what about non-clustered indexes? My coworker firmly believes that the extra space used for this option is well worth it, even on SSDs.

The DB in question is used for OLTP and has a 70/30 read/write ratio. What are your thoughts, SSC?

asked Oct 18, 2010 at 08:34 AM by Weprin

4 answers

Seeks are an expensive operation, relatively speaking, regardless of the medium. OLTP already requires seeks to do its job - why introduce easily avoided ones? Why have data shifted around by unnecessary page splits?

I would still tend to work with a FILLFACTOR.
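To make that concrete, here is a minimal sketch of rebuilding an index with an explicit FILLFACTOR (the index and table names are hypothetical, not from the question):

```sql
-- Rebuild with FILLFACTOR = 90: each leaf page is left 10% empty,
-- giving new rows room to land without forcing a page split.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
REBUILD WITH (FILLFACTOR = 90);
```

Note that FILLFACTOR is only applied when the index is created or rebuilt; it is not maintained as rows are inserted afterwards.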

Only testing can truly determine if it is a concern in your environment.
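A starting point for that testing is the standard fragmentation DMV; this sketch assumes a hypothetical table `dbo.Orders`:

```sql
-- Check logical fragmentation per index on one table.
-- 'LIMITED' scans only the parent-level pages, which is cheap enough to run regularly.
SELECT i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.Orders'),
                                    NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id;
```

Run it before and after a representative write workload to see how quickly fragmentation builds at a given FILLFACTOR.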

Edit: Another factor to consider is that pages read from disk are brought into RAM still fragmented, so you keep paying the unnecessary resource utilization there too.

answered Oct 18, 2010 at 08:55 AM by Blackhawk-17

15s! Fifteen seconds! Damn, I shouldn't have typed so much!! ;)
Oct 18, 2010 at 08:56 AM Fatherjack ♦♦

+1 to you, Blackhawk and Fatherjack, but I just wanted to add some points.

  1. Typically, SSDs are more asymmetric than HDDs. What do I mean by that? I mean that the gap in performance between reads and writes is typically greater on SSDs than on HDDs, so writes are relatively more expensive on SSDs and more worth avoiding. Therefore, try to avoid page splits, as these usually involve a fair bit of writing.
  2. I wanted to pick up on something you said: 'perhaps you should keep it for a clustered index'. And I want to ask why? A rule of thumb that has served me well is that a clustered index should be unique, as narrow as possible, and ever-increasing. In fact, it's Oleg who describes it like that, but I have lived by the same mantra. A clustered index like that eradicates the need for FILLFACTOR on it, because you know that you are only ever writing to the end of the logical page sequence; a fill factor would just increase the number of pages involved in a scan. Now, it may be that for some tables you have a different clustered index, for whatever reason (and perfectly valid those reasons may be). In that case, use a fill factor. But my main point is that the main beneficiary of a fill factor is likely to be non-clustered indexes, where data is naturally inserted in the middle of the page sequence.
  3. Don't mix up random and sequential I/O. SSDs absolutely blaze through random I/O (seeks) but won't show as much of a benefit on sequential I/O (scans). Don't get me wrong, they're still fast for sequential I/O, but they don't outpace a normal hard disk by the same margin. However, it's a slightly moot point, because in a busy system many sequential I/O requests at different points on the disk effectively translate into random I/O.
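Point 2 can be sketched like this; the schema is hypothetical, just to illustrate the pattern:

```sql
-- An IDENTITY column gives a unique, narrow, ever-increasing clustered key,
-- so inserts always append at the end of the logical page sequence and the
-- clustered index can stay at the default FILLFACTOR (effectively 100).
CREATE TABLE dbo.Orders
(
    OrderId    int IDENTITY(1,1) NOT NULL,
    CustomerId int NOT NULL,
    OrderDate  datetime NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId)
);

-- The non-clustered key arrives in effectively random order, so new entries
-- land mid-page-sequence - this is where leaving free space pays off.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    WITH (FILLFACTOR = 80);
```

The 80 here is illustrative, not a recommendation; the right value depends on how fast the randomly keyed data arrives between rebuilds.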

answered Oct 18, 2010 at 11:53 AM by Matt Whitfield ♦♦

+1 for your answer. A page split is always a page split and will cause I/O writes. And I want to clarify that an ever-increasing clustered index is not always best; as always, "it depends". In an OLTP solution with many inserts you may end up with hotspots on the last page. Read the following pages carefully before you choose the clustered index: http://www.sql-server-performance.com/tips/clustered_indexes_p1.aspx
Oct 19, 2010 at 12:28 AM Håkan Winther

Edit: This is more about using defragging tools in general than FILLFACTOR per se, but to an extent, that's what FILLFACTOR is about.

Fatherjack and Blackhawk might be right, but there is one disadvantage of defragging an SSD - it can shorten the disk's life. So if performance is the drive's most important function and defragging proves to help it, then you might go for it.

But, maybe more importantly, Intel does not recommend using defrag tools.

Do I need to defragment my Intel® Mainstream Solid-State Drives (using Windows* Disk Defragmenter* or similar program)? No. SSD devices, unlike traditional HDDs, see no performance benefit from traditional HDD defragmentation tools. Using these tools simply adds unnecessary wear to the SSD. It is recommended that you disable any automatic or scheduled defragmentation utilities for your Intel SSD.

Also, are you using TRIM? If you can use TRIM, the gains may not be worth it according to [this site][2].

[2]: http://www.pcper.com/article.php?aid=872&type=expert&pid=10

answered Oct 18, 2010 at 11:24 AM by Mark

Index defragmentation and O/S-level defrag are different things. The big thing is to get non-fragmented pages into RAM. Again, a proper FILLFACTOR will allow for a multitude of inserts prior to a split. The secret is to weigh future growth against the randomness of the data and the row size. SSD devices may lead to new design best practices where we run with 10-20% FILLFACTORs anticipating growth, paying the cost up front in extra storage rather than re-writing the same blocks and paying for it in MTBF.
Oct 18, 2010 at 11:32 AM Blackhawk-17
To avoid O/S level defragmentation pre-size your files and don't shrink or grow them.
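For example, pre-allocating files might look like this (database and file names are hypothetical):

```sql
-- Pre-size data and log files so autogrow rarely fires; a fixed FILEGROWTH
-- increment avoids the fragmentation that percentage-based growth can cause.
ALTER DATABASE SalesDb
MODIFY FILE (NAME = SalesDb_Data, SIZE = 50GB, FILEGROWTH = 1GB);

ALTER DATABASE SalesDb
MODIFY FILE (NAME = SalesDb_Log, SIZE = 8GB, FILEGROWTH = 1GB);
```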
Oct 18, 2010 at 11:35 AM Blackhawk-17
@Mark this is the essence of the question - setting the right fill factor prevents the indexes becoming fragmented so quickly; on normal disks you then have to do defrags in order to maintain acceptable IOPS. @Weprin is asking if it matters as much on SSD, as the effect of the fragmentation will take longer to show itself due to the performance you can get from the media.
Oct 18, 2010 at 11:35 AM Fatherjack ♦♦
@Blackhawk, this gets complicated fast doesn't it? So if you have an index that isn't sequentially growing, but changing in the same place, it would wear out that part of the SSD? Or, maybe TRIM (or a similar tech) helps prevent that somehow - I'm not sure.
Oct 18, 2010 at 11:38 AM Mark
@Mark - those would most likely be updates, and theoretically they wouldn't cause fragmentation (unless row sizes increased, etc.), so FILLFACTOR is not relevant to that discussion. Complicated? Yup - proactive maintenance/monitoring will become even more important if SSD tech is as wear-prone as early results seem to suggest.
Oct 18, 2010 at 11:43 AM Blackhawk-17

I would say that best practice is best practice.

Can you guarantee that the database will always be on SSD? Does your business continuity hardware have SSD too?

My 2c says build it like it's a normal database, so use all settings as normal. There are plenty of examples of programmers getting sloppy as CPU performance sped up, and then getting caught out as the needs of the system showed up their work when the system was even more critical to the business. Don't let your database settings show you up like that.

If you have SSD then the system must be expected to need high-end IOPS, so any saving is a saving that other transactions can use.

answered Oct 18, 2010 at 08:55 AM by Fatherjack ♦♦

Good points; of course they took longer to type :)
Oct 18, 2010 at 08:58 AM Blackhawk-17
I liked both answers.
Oct 18, 2010 at 10:06 AM Grant Fritchey ♦♦

As with many businesses these days, nothing is written in stone. For now, we're on SSD. But in five years, who knows what we will use?

And you are correct, our business continuity hardware is not using SSDs.
Oct 18, 2010 at 12:43 PM Weprin
