I'm new to databases. I saw an interesting question on this website from someone who needed to plan the storage for a new server. I need to make the same kind of plan, but I'm new to this field.
The question is:
1. How much data is to be transferred from disk to users per second? How can we calculate that, with T-SQL or some other means? And once we have calculated it, how will it affect our choice of storage or network?
Please, can some expert explain that in detail? Regards.
asked Jul 08 '12 at 08:06 AM in Default
In order to know how much data is being moved, you need to understand what is being stored. This means specifically which types of data, how many columns, of what size. Then you need to understand how much of it will be retrieved with any given query. Finally, you need to understand how often the queries will be called. With all that information you can determine how much information transfer you'll be seeing off of your disk sub-system.
In general, get as fast a set of disks as you can afford. With a well-designed system the disk I/O is usually the slowest part. You'll also need to look at distributing your storage across multiple disks. Assuming you need some protection for your data, you'll also want to look at some type of RAID or SAN storage.

All this is extremely involved. I'm not going to be able to give you every single aspect of everything you need to know in a simple Q&A forum like this. But what I've posted above should get you started. If this is a large project, I'd suggest hiring a consultant to help out.
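If you already have a test or production instance running a representative load, one way to see how much data SQL Server is actually moving to and from disk is the sys.dm_db I/O DMV sys.dm_io_virtual_file_stats. It reports cumulative I/O per database file; as a sketch, sample it twice some seconds apart and divide the deltas by the interval to estimate bytes transferred per second.

```sql
-- Cumulative I/O per database file since the instance started.
-- Sample this twice, N seconds apart, and divide the deltas by N
-- to get an approximate bytes-per-second figure for each file.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_bytes_read,
       vfs.num_of_writes,
       vfs.num_of_bytes_written
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY vfs.num_of_bytes_read DESC;
```

The numbers are cumulative since the last restart, so a single snapshot only tells you long-run averages; two snapshots bracketing your busy period are far more useful.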
answered Jul 08 '12 at 09:59 AM
Grant Fritchey ♦♦
In order to assist you we need some information.
Do you know how many users the system may support?
For small databases - say smaller than 10 GB - most enterprise-class server hardware will be sufficient. Using RAID disks will be better still. Attach that to a SAN and it's pretty well a given that your transfer rates will be good.
Unfortunately the disk transfer rate is only one small part of the whole.
Can the rest of the I/O subsystem handle the necessary throughput?
The bottom line is that nothing is examined in isolation.
answered Jul 09 '12 at 01:57 PM
I'm not sure I understand how index fragmentation relates to the rest of your questions. Sorry for my lack of understanding.
How many users can a system support? You have to collect a bunch of different metrics to answer a question like that. Two chapters of the book I wrote are all about collecting performance metrics; I'd suggest reading those chapters. You can estimate how many users you can support by measuring with the users you have now and then extrapolating from there. If with 20 users we're using 20% CPU and disk latency is low, and with 30 users we're using 30% CPU but disk latency is high, then if we wanted to go to 80 users we could expect roughly 80% CPU, but the disk might be problematic... like that.
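Before extrapolating like that, it helps to know what the server is actually waiting on. One hedged starting point is the aggregate wait statistics DMV; this sketch lists the top waits since the instance last started (or since the stats were cleared), filtering out a few common idle waits:

```sql
-- Top waits since the last restart: a rough first look at whether
-- the bottleneck is CPU, disk, locking, or something else.
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms - signal_wait_time_ms AS resource_wait_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                        'SQLTRACE_BUFFER_FLUSH', 'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```

The exclusion list here is only a small sample; real baselining scripts filter out many more benign background waits.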
OLTP means higher transaction rates and lots of writes, but you'll still usually see more reads than writes.
How much RAM in relation to DB size... it doesn't work like that. You can have a 300 GB database with only 16 GB of RAM and still have plenty of memory. Why? Because the majority of the data is not used. There is no direct correlation between database size and RAM size. It depends on how much of the data is accessed regularly enough that you see reuse of the data in cache, versus constantly hitting the disks. Again, this requires detailed measurements of your system. There's no one magic query that will give you all the information.
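To see how that plays out on a running instance, you can check how much of each database is actually resident in the buffer pool right now. A sketch, using the buffer descriptor DMV (note it can be slow on servers with a lot of memory, since it walks every cached page):

```sql
-- Buffer pool usage per database: each cached page is 8 KB,
-- so page count * 8 / 1024 gives megabytes resident in cache.
SELECT CASE database_id
         WHEN 32767 THEN 'ResourceDb'
         ELSE DB_NAME(database_id)
       END AS database_name,
       COUNT(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;
```

If the hot portion of a 300 GB database fits comfortably in those numbers, the 16 GB scenario above is entirely plausible.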
My book is all about query tuning and the basis of query tuning is measuring performance so that you understand what's running slow on your system so that you're tuning the right queries. My book is called SQL Server 2012 Query Performance Tuning and is available on Amazon, Barnes & Noble or from Apress.
Another book I recommend is the Accidental DBA's Guide to Troubleshooting. You can download a free e-book copy or purchase the print copy from Amazon. Both books have extensive sections on performance measurement.
answered Jul 16 '12 at 08:56 AM
Grant Fritchey ♦♦
Thanks to everyone for answering, but I didn't get what I needed.
Let me explain what I mean. When we deal with an index, we check its fragmentation to decide whether to REBUILD or REORGANIZE it, and we have a measure for that:
- If the fragmentation of your index is less than 10 percent, you really don't need to do anything.
- When it's from 10 percent to 30 percent, you should reorganize your index.
- Anything higher requires a rebuild.
We get the fragmentation from avg_fragmentation_in_percent in sys.dm_db_index_physical_stats.
So those really are good questions that I found here, and I have prepared similar ones, but how will the answers help me? For example:
Q) Do you know how many users the system may support? Answer) If we say 20, how does that help? If we say 300, how does that help?
Q) Will the system be a heavily read or heavily updated system? OLTP or OLAP? Answer) We have heavy OLTP.
Q) How much RAM will the system have, especially in relation to the DB size? What is the mechanism for calculating that?
I have read many books, and many people have recommended books to me, but most of them talk in generalities. I couldn't find a book that gives me the mechanism to calculate something like: if your DB is 20 GB, you should have, for example, 4 GB of RAM (this is just an example). (This question is from Greg.)
Is there a book that helps the DBA answer these questions and be more specific about the hardware?
answered Jul 13 '12 at 09:03 AM