I know the conventional wisdom is to create tempdb with the same number of equally sized files as CPU cores.
Ideally, I assume that spreading these files across multiple disks, separate from any databases, would be the optimum design.
What if I don't have the number of drives to support that?
Will multiple tempdb files still aid performance if they all reside on the same disk, or on fewer disks than the number of files?
What if they have to share disk resources with the data files? Are multiple tempdb files still a valid approach?
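For reference, this is roughly what the setup being discussed looks like. The logical name, path, and sizes below are placeholders for illustration, not a recommendation; the key point is that the added files should match the existing files in size and growth settings, because SQL Server's proportional-fill algorithm only balances allocations across equally sized files.

```sql
-- Hypothetical example: adding a second, equally sized tempdb data file.
-- File name, path, and sizes are placeholders -- adjust for your environment.
ALTER DATABASE tempdb
ADD FILE (
    NAME = tempdev2,
    FILENAME = 'T:\TempDB\tempdev2.ndf',
    SIZE = 1024MB,
    FILEGROWTH = 256MB
);
```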
asked Jan 28 '10 at 11:58 AM in Default
To get the best performance out of tempdb, you really do need to separate tempdb storage from the data storage. That much is a must. After that, it's just methods of tuning, not so much requirements. Multiple files will probably not help much, if at all, when you're not spreading them across multiple drives.
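As a sketch of the separation step: tempdb's files can be relocated to a dedicated drive with `ALTER DATABASE ... MODIFY FILE`. The logical names below (`tempdev`, `templog`) are the defaults on a standard install, and the target path is a placeholder; the new locations only take effect after the SQL Server service restarts, since tempdb is recreated on startup.

```sql
-- Hypothetical example: pointing tempdb at dedicated storage.
-- Takes effect on the next SQL Server service restart.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdev.mdf');

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');
```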
answered Jan 28 '10 at 12:20 PM
Grant Fritchey ♦♦
Definitely with Grant on this one... Even in a medium environment I make provisions for tempdb on separate storage, even more so when using the XML data type... Our devs seem to be using these a lot for our translation business.
KevinK also has a good article around tempdb... A good read!
answered Jan 28 '10 at 12:39 PM
I have seen demos showing that adding tempdb files causes performance drops in some instances, so my advice is to test, test, and test all possible/plausible configurations. I think Microsoft still recommends more than one file in certain circumstances.
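One way to ground that testing: check whether the workload is actually hitting allocation-page contention in tempdb before adding files. A query along these lines (a sketch, not exhaustive) looks for PAGELATCH waits on tempdb pages; waits on 2:1:1 (PFS), 2:1:2 (GAM), or 2:1:3 (SGAM) are the classic sign that more data files may help.

```sql
-- Look for allocation-page contention in tempdb (database_id = 2).
-- resource_description is formatted as database:file:page.
SELECT session_id,
       wait_type,
       resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
  AND resource_description LIKE '2:%';
```

If this returns nothing under load, adding files is unlikely to change much.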
Are you installing/speccing for a virtualised environment?
My preference for a small/med hardware spec:
answered Jan 28 '10 at 12:22 PM