I tried to find an answer on Google for this one but decided to ask the community for a good answer. I would be grateful if someone could tell me or point me to an article on the net.
Suppose you have worked on an application and are satisfied that the correct indexes are in place, stats are up to date, and partitioning has been applied where necessary.
What definitive diagnostic tests can be applied to determine whether a database is slow because the CPU is too slow or because you need to add more memory to the system?
asked Sep 03 '11 at 03:19 AM in Default
half fast dba
Although Brent's blitz script is a good one, most of it is about checking sp_configure, database backups, whether DBCC has been run, alerts to set up, etc. The next-to-last script checks the memory usage per database, but that is about it, at least the last time I looked at it.
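For comparison, a query along these lines (a common pattern, not part of the blitz script itself) shows roughly how much of the buffer pool each database is using:

```sql
-- Approximate buffer pool usage per database (each page is 8 KB).
SELECT CASE database_id
           WHEN 32767 THEN 'ResourceDb'   -- internal resource database
           ELSE DB_NAME(database_id)
       END                      AS database_name,
       COUNT(*) * 8 / 1024     AS buffer_pool_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_pool_mb DESC;
```

If one database dominates the buffer pool and still shows heavy read activity, that is a hint memory pressure is concentrated there.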
The only real definitive diagnostic tests (IMHO) you can do are to collect performance information, either from within SQL Server or using Perfmon jobs (or both). If you collect your baseline over a few days, or even weeks depending on your business, you can analyze that to know what your "normal" is. Then, when things start to go every-which-way but good, you collect that performance information again and you will be able to see where things may have changed.
If you don't have a baseline to go against, you don't necessarily have to guess: just collect the information and see which areas have unusually high or low numbers compared to suggested thresholds. However, remember that the suggested numbers are just suggestions. Some counters show different numbers in different environments, and that can be perfectly fine for that environment. You just have to get familiar with them in your environment. You might find from this that the CPU and RAM are fine on your server, and the developers just need to tweak a query here and there, or adjust an index on a table.
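To get started on such a baseline from within SQL Server, a minimal sketch is to snapshot a few counters on a schedule (the table name `dbo.BaselineCounters` and the counter list here are illustrative assumptions, not a fixed recipe):

```sql
-- Snapshot a few memory/CPU-related counters into a baseline table
-- (dbo.BaselineCounters is an illustrative name; run this on a schedule).
IF OBJECT_ID('dbo.BaselineCounters') IS NULL
    CREATE TABLE dbo.BaselineCounters (
        capture_time  DATETIME      NOT NULL DEFAULT GETDATE(),
        counter_name  NVARCHAR(128) NOT NULL,
        cntr_value    BIGINT        NOT NULL
    );

INSERT INTO dbo.BaselineCounters (counter_name, cntr_value)
SELECT RTRIM(counter_name), cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Page life expectancy',
                       N'Batch Requests/sec',
                       N'Buffer cache hit ratio');
```

Scheduled through a SQL Agent job every few minutes, this builds the "normal" you compare against later; note that some counters in that DMV are cumulative, so you trend the deltas between captures rather than the raw values.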
Brent does have an excellent video on SQL Performance counters and how to make them look pretty in an Excel worksheet. You can check his performance page for other stuff here.
A good book to look into as well is Glenn Berry's SQL Server Hardware. It is on my wish list to buy and has some good explanations about what effects adding more memory or CPU can have, among other things. Brent actually left a review on it and makes me want to trade my kid in so I can buy it, jk.
answered Sep 03 '11 at 09:35 AM
The quickest way would be to go to www.brentozar.com and get his 60-minute SQL Server takeover script. It has comments in the script and logically goes through testing various metrics, and it will point you towards the weak point(s) on your server: http://www.brentozar.com/sql/blitz-minute-sql-server-takeovers/
answered Sep 03 '11 at 04:24 AM
I'd short-circuit even these excellent suggestions. Query sys.dm_os_wait_stats and see what is causing the system to wait on resources the most. The top 5 longest waits there will tell you roughly where your system is stuck. Then you have to determine whether it is in fact hardware, structure, or code.
Yes, the best thing you can do is set up monitoring through perfmon.exe and a server-side trace, collect the data, and compare it over time so that you can see how the CPU, memory, and disk are behaving. But if you don't have that set up (get started now), the wait stats are the single best measure for determining what is causing things to run slow.
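The wait-stats check described above can be sketched like this (the exclusion list of benign waits is a sample assumption, not exhaustive):

```sql
-- Top 5 waits by total wait time, skipping some common benign waits.
SELECT TOP (5)
       wait_type,
       wait_time_ms / 1000.0        AS wait_time_sec,
       signal_wait_time_ms / 1000.0 AS signal_wait_sec,  -- time spent waiting for a CPU
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'BROKER_TASK_STOP',
                        N'SQLTRACE_BUFFER_FLUSH', N'LAZYWRITER_SLEEP',
                        N'CLR_AUTO_EVENT', N'REQUEST_FOR_DEADLOCK_SEARCH')
ORDER BY wait_time_ms DESC;
```

Reading it back to the original question: a high share of signal wait time suggests CPU pressure, while PAGEIOLATCH_* waits dominating suggests the buffer pool is too small (or the disks too slow), i.e. a memory/IO problem. Counters are cumulative since the last restart, so clearing them or diffing two snapshots gives a cleaner picture of current behavior.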
answered Sep 04 '11 at 04:48 AM
Grant Fritchey ♦♦