All, how is capacity planning done for a SQL Server 2008 database server? In our environment, the maximum data volume per year is nearly 15 million records per table, and there are around 7 main tables that are supposed to hold that kind of data. The application mainly inserts data and also performs reporting on it. The database needs to be available round the clock.

Current configuration: a database whose 5 main tables hold a million records each. The machine has 3 GB RAM, 1 TB drive space, and a quad-core AMD Opteron processor at 2.59 GHz. Right now, the CPU is hitting 100% every 5 minutes with the current infrastructure.

I wonder what kind of RAM, disk space, and processor are needed so that I don't have future hassles, at least with the hardware. Is there a white paper or any documentation on performing capacity planning for SQL Server? Thank you.
You mention a few things in your question that need to be investigated a little more:

7 tables, with a yearly 15 million row increase per table.

- Is this a fact, or a guess?
- What do the tables look like?
- Do they have variable-length columns, XML data, LOB columns, NULLable columns?

Current hardware is 3GB RAM and one quad-core Opteron.

- What server is this exactly (make and model)? This will allow for a better understanding of what you can do with the current hardware.
- 1TB disk space. This doesn't tell us what the storage is. Is it a single disk, or a RAID array? Local storage or a SAN/NAS solution?
- Are you looking to replace this hardware (e.g. it is too old), or do you want to know if this is enough for the job?

CPU spiking at 100% every 5 minutes.

- Have you looked at what is running on the machine? You need to see what queries/jobs are running and whether they are the cause of the CPU spikes.
- Do you have other services running on the box besides SQL Server that may be causing these spikes?

A pretty crude rule of thumb: get lots of physical cores on as few CPUs as possible (quad-core is a good start). Throw lots of RAM at the server (RAM is obscenely cheap nowadays), as that is the fastest storage you can get and SQL Server uses it to cache data. The hard disk storage **must** be a RAID solution (be it locally attached or in a SAN/NAS).

You mentioned high availability - this stuff can get very expensive. Make sure that your management team defines high availability **and** disaster recovery properly. You can then research what you need to implement in order to achieve their goals - you are best off setting up three possibilities (cheap, middle, expensive) and letting them choose how much they want to spend.

I think that should give you something to start with.
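To see what is actually running when the CPU spikes, one option is to query the standard SQL Server 2008 DMVs; a minimal sketch (the filter on `session_id > 50` to skip system sessions is a common convention, not a hard rule):

```sql
-- Currently executing requests, heaviest CPU consumers first
SELECT r.session_id,
       r.cpu_time,              -- CPU time used so far, in ms
       r.total_elapsed_time,
       r.wait_type,             -- what the request is currently waiting on, if anything
       t.text AS query_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50         -- skip most system sessions
ORDER BY r.cpu_time DESC;
```

Run this while the CPU is pegged; if the same query text keeps appearing at the top, that is where to focus before buying hardware.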
In addition to the most excellent suggestions from m'learned colleague @WilliamD, you don't mention the OS version and SQL Server edition - it may be that the RAM/CPU upgrades @WilliamD suggests won't have the desired effect because you are running Standard Edition SQL Server 2008 on Windows 2008 Standard Edition. Similarly, 32-bit versions generally have lower upper limits than 64-bit versions. There is a list of these limits that may be of use.
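As a quick way to check what you are actually running, the edition, version, and bitness can be read with `SERVERPROPERTY` (available since SQL Server 2005):

```sql
-- Edition string includes bitness, e.g. 'Standard Edition (64-bit)'
SELECT SERVERPROPERTY('Edition')        AS edition,
       SERVERPROPERTY('ProductVersion') AS product_version,  -- 10.0.x = SQL Server 2008
       SERVERPROPERTY('ProductLevel')   AS service_pack;
```

Compare the result against the documented memory/CPU limits for that edition before assuming an upgrade will be used.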
3GB of RAM seems pretty anemic for a system like that. The real issue, though, is not simply guessing at hardware. You need to understand, in detail, how your system is behaving. What are the principal wait states? Is there queuing at the disk, memory, or CPU? What's the average page life expectancy (I'd be surprised if it was much higher than 10)? In order to do capacity planning, you need to know what you're using now and why.

You could simply toss money at the issue. Lots of companies do. But then you'll be sitting there looking at your four quad-core CPUs asleep while performance still stinks, because you've had a memory bottleneck all along (speculation). You need to work from a point of knowledge. Then, once you understand where your pain points and bottlenecks are, you can appropriately throw hardware at the issue.
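Both numbers mentioned above can be pulled from the standard DMVs; a sketch, assuming the usual SQL Server 2008 counter names (the short exclusion list of benign wait types here is illustrative, not exhaustive):

```sql
-- Top wait types accumulated since the last restart
SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                        'SQLTRACE_BUFFER_FLUSH', 'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;

-- Page life expectancy: seconds a data page survives in the buffer pool
SELECT cntr_value AS page_life_expectancy_sec
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';
```

A consistently low page life expectancy alongside dominant I/O-related waits (e.g. `PAGEIOLATCH_*`) is the classic signature of a memory bottleneck rather than a CPU one.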
Every suggestion and observation made so far is excellent, but I may be able to add a couple of things. Generally, RAM has more of an impact on SQL Server performance than the processor does. That is, of course, a very general rule of thumb, and the processor can be a bottleneck, but it is a useful rule of thumb, and when in doubt I would add more RAM to an underperforming system.

And while I know it does not directly address your question, I would look closely at your indexing scheme, because it can affect a lot of other things. A good index scheme can dramatically improve select performance and even, in the right circumstances, reduce the strain on the processor and other subsystems. But indexes also take up space and can quickly devour hard drives, so they need to be factored in when allocating storage.
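One way to weigh that space-versus-benefit trade-off is to compare how much each index is actually read against how often it must be maintained and how big it is; a sketch using the standard DMVs (usage counters reset on restart, so let the server run a while first):

```sql
-- Per-index reads vs. writes in the current database, with approximate size
SELECT OBJECT_NAME(i.object_id)                        AS table_name,
       i.name                                          AS index_name,
       s.user_seeks + s.user_scans + s.user_lookups    AS reads,
       s.user_updates                                  AS writes,
       SUM(ps.used_page_count) * 8 / 1024              AS size_mb  -- 8 KB pages
FROM sys.indexes AS i
JOIN sys.dm_db_index_usage_stats AS s
  ON s.object_id = i.object_id
 AND s.index_id  = i.index_id
 AND s.database_id = DB_ID()
JOIN sys.dm_db_partition_stats AS ps
  ON ps.object_id = i.object_id
 AND ps.index_id  = i.index_id
GROUP BY i.object_id, i.name,
         s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
ORDER BY size_mb DESC;
```

Indexes with many writes, few reads, and a large footprint are candidates for removal; that frees storage and reduces the cost of your heavy insert workload.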