Hi all,

sooner or later every one of us has to deal with proper SQL Server hardware sizing, e.g. someone comes to your desk and tells you (well, kind of): "We will start a new project. We don't really know how much data to expect or how many users we will have, but we need a new SQL Server. What kind of hardware do we need? How much disk space, how much RAM? We have to estimate the costs."

So one of my questions is: if you have to deal with such a request and it is not a known application (like SAP, SharePoint and so on) but an application which does not even exist yet, how do you proceed? What kind of information do you try to get? And most importantly: once you have some information, say how many users you will have (concurrent and/or random access) and how much data there will be in one month, in two months, etc., how do you interpret this information so that you know what kind of system you will need?

In my case I have to answer the "What do we need?" question for a system that will receive "a lot of data" (no further information available) once the new application is available and rolled out. There will be many clients sending data that needs to be stored in a database. Since the application isn't rolled out yet, there is only a simulation application right now which covers a small percentage of the eventual load. I am currently capturing performance counters (with the help of a PAL template) as well as SQL Server wait stats and virtual file stats so that I have some first metrics available, and then I will try to score these.

Regards
Dirk
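Once you have metrics from the simulation, one naive way to extrapolate to the full rollout is to scale them by the fraction of clients the simulation covers. A minimal sketch, assuming linear scaling (a big assumption: contention, blocking, and I/O queuing rarely scale linearly) and made-up counter names and coverage figures:

```python
def scale_to_full_load(measured, coverage_fraction):
    """Naively scale metrics captured against the simulation (which covers
    only a fraction of the eventual clients) to the projected full rollout.
    Linear scaling is an assumption, not a guarantee."""
    return {name: value / coverage_fraction for name, value in measured.items()}

# Hypothetical values captured from the simulation run:
sim_metrics = {
    "batch_requests_per_sec": 120.0,
    "log_bytes_flushed_per_sec": 2.5e6,
}
# Assume the simulation represents roughly 5% of the eventual client base:
full_load = scale_to_full_load(sim_metrics, coverage_fraction=0.05)
```

Treat the result as a ceiling estimate to sanity-check storage throughput and CPU headroom, not as a precise prediction.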
A very real-world problem. What I have been able to do is break down the cost of a SQL Server license per core (based on 2012 EE with SA), my storage costs per GB for Tier 1 (data, logs, tempdb) and Tier 3 (backups), and the cost of a typical server. I am fortunate that 90% of my SQL Servers are on VMs, which makes it much easier; the costs simply go up if they demand physical hardware.

Having a fixed cost per core for licensing, a fixed cost per GB for storage, and a cost per core for physical hardware or a VM lets me put the cost back on the business unit or project manager hitting me up for an estimate. For example, if a 2-core pack for 2012 EE with SA costs my organization $16k, that is $8k per core. Tier 1 storage costs range between $10k and $15k per TB, so do the math for your rates; at the lower end it works out to $10 per GB. Keep in mind that is for data + logs + tempdb. A decent dual 10-core server with 512 GB of RAM costs around $20k, so roughly $1k per core.

With all that data in hand, I tell them that once they have their specs, they can estimate their actual cost. When they say they don't know, I tell them they can come up with their own SWAG (Scientific Wild A$$ Guess) just like I can, but at least they have actual numbers to get a very close cost. Hope this helps.
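The fixed-rate chargeback approach above boils down to a one-line calculation. A sketch using the illustrative figures from this post ($8k/core for the license, $10/GB at the low end of Tier 1, and the ~$20k dual 10-core box spread across its 20 cores); substitute your organization's own rates:

```python
def estimate_sql_server_cost(cores, tier1_gb,
                             license_per_core=8_000,  # 2012 EE w/ SA: $16k 2-core pack
                             tier1_per_gb=10,         # low end of $10k-15k per TB
                             hw_per_core=1_000):      # ~$20k box / 20 cores
    """Rough chargeback estimate in dollars; all default rates are
    the illustrative figures from the post, not vendor pricing."""
    return (cores * license_per_core
            + tier1_gb * tier1_per_gb
            + cores * hw_per_core)

# A hypothetical 4-core VM with 500 GB of Tier 1 storage:
cost = estimate_sql_server_cost(cores=4, tier1_gb=500)
# 4*8000 + 500*10 + 4*1000 = $41,000
```

The point isn't precision; it's that the requester now owns the spec-to-cost conversion instead of you owning the guess.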
See if they've got a database schema. Ask them which tables are static and which are more transactional - likely to be updated and growing. Ask them how many rows each of those is likely to gain (finger in the air) per transaction. You can then (slowly and painfully) work out roughly what size the database is likely to be after 1,000 transactions, and after 100,000.

Also discuss with The Business what sort of growth rate they're expecting - are they looking at linear growth of the data (always, say, 500 transactions per day) or are they looking to grow (starting off with 500 transactions per day and doubling every six months)?

That should be enough to give you a rough handle on the data growth, and thus on the size and throughput rate of the storage required.
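The per-transaction sizing and the linear-vs-doubling growth discussion above can be sketched as a back-of-envelope calculation. The table names, row sizes, and row counts here are invented placeholders; the 30% overhead factor for indexes and fill factor is likewise an assumption to tune:

```python
# Invented example schema: average row size and rows added per transaction.
ROW_BYTES = {"orders": 200, "order_lines": 120, "audit": 350}
ROWS_PER_TXN = {"orders": 1, "order_lines": 5, "audit": 2}

def bytes_per_transaction():
    return sum(ROWS_PER_TXN[t] * ROW_BYTES[t] for t in ROW_BYTES)

def size_gb(transactions, overhead=1.3):
    """Estimated data size after N transactions; +30% for indexes (assumed)."""
    return transactions * bytes_per_transaction() * overhead / 1024**3

def projected_transactions(daily, months, doubling_every_months=None):
    """Total transactions over a horizon: linear by default, or with the
    daily rate doubling every N months (30-day months for simplicity)."""
    total, rate = 0, daily
    for m in range(months):
        if doubling_every_months and m and m % doubling_every_months == 0:
            rate *= 2
        total += rate * 30
    return total

linear = projected_transactions(500, months=12)                          # 180,000
doubling = projected_transactions(500, months=12, doubling_every_months=6)  # 270,000
```

Comparing `size_gb(linear)` with `size_gb(doubling)` at the 12- and 24-month marks is usually enough to show The Business how differently those two growth stories price out.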