
How much data is to be transferred from disk?

Dear Experts,

I'm new to databases. I saw an interesting question on this website where someone needed to plan the storage for a new server. I need to do the same kind of planning, but I'm new to this field.

The question is:

1. How much data is to be transferred from disk to users per second? How can we calculate that - with T-SQL, or something else? And once we have calculated it, how will it affect our choice of storage or network?

Please, can an expert explain this in detail?

Regards
asked Jul 08, 2012 at 08:06 AM by Troy_2012

4 answers

In order to know how much data is being moved, you need to understand what is being stored. This means specifically which types of data, how many columns, of what size. Then you need to understand how much of it will be retrieved with any given query. Finally, you need to understand how often the queries will be called. With all that information you can determine how much information transfer you'll be seeing off of your disk sub-system.
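
To make that concrete, here is a minimal back-of-envelope sketch in T-SQL. Every number in it is a hypothetical placeholder, not a recommendation; substitute the row sizes and rates from your own design:

-- Hypothetical workload figures; replace with your own estimates.
DECLARE @AvgRowBytes   int = 200;  -- average row size in bytes
DECLARE @RowsPerQuery  int = 50;   -- rows a typical query returns
DECLARE @QueriesPerSec int = 100;  -- expected peak query rate

-- The throughput the disk subsystem (and network) must sustain, in MB/sec.
SELECT (@AvgRowBytes * @RowsPerQuery * @QueriesPerSec) / 1048576.0
       AS EstimatedMBPerSec;

With these placeholder numbers the result is roughly 1 MB/sec, which almost any modern disk can serve; the point is that once you plug in real figures, you can compare the result against the rated throughput of the storage and network you're considering.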

In general, get as fast a set of disks as you can afford. With a well-designed system the disk I/O is usually the slowest part. You'll also need to look at distributing your storage across multiple disks. Assuming you need some protection for your data you'll also want to look at some type of RAID or SAN storage.

All this is extremely involved. I'm not going to be able to give you every single aspect of everything you need to know in a simple Q&A forum like this. But what I've posted above should get you started. If this is a large project, I'd suggest hiring a consultant to help out.
answered Jul 08, 2012 at 09:59 AM by Grant Fritchey ♦♦

So, is there any query to help the DBA estimate that, or not?
Jul 08, 2012 at 10:27 AM Troy_2012
No, not really. It's a whole series of pieces of information that you'd have to gather. I'm assuming you're talking about new development. If you just want to measure the load of an existing system, you can use the Performance Monitor counters. For straight load you can look at Disk Transfers/sec. For general performance you can look at Avg. Disk Sec/Read and Avg. Disk Sec/Write.
Jul 08, 2012 at 10:45 AM Grant Fritchey ♦♦
I'm sorry if I'm asking too many questions, but how will this result help me? That's the part I can't understand. Could you please give me an example: if Avg. Disk Sec/Read = 30, how does that help me? Is there a benchmark for it?
Jul 08, 2012 at 11:11 AM Troy_2012

It really depends on what you're trying to do. That measure is the average time in milliseconds to read or write from your disk. You're asking how to measure performance. Well, there you go. If the average read time is 30ms (very slow by the way) on the old server and it's 3ms on the new server, the new server is faster.

Generally, when purchasing servers & disks, you go with what the manufacturer says is the transfer rate. Compare that to the transfer rate of your existing disks.
Jul 08, 2012 at 11:22 AM Grant Fritchey ♦♦
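
If you would rather measure from inside SQL Server than from Performance Monitor, here is a sketch using sys.dm_io_virtual_file_stats, which accumulates I/O figures per database file since the instance last started:

-- Bytes moved and average latency per database file (cumulative
-- since the last instance restart).
SELECT DB_NAME(vfs.database_id)                             AS database_name,
       mf.physical_name,
       vfs.num_of_bytes_read / 1048576                      AS mb_read,
       vfs.num_of_bytes_written / 1048576                   AS mb_written,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY mb_read DESC;

Because the numbers are cumulative, sample the view twice and take the difference to get a transfer rate over a known interval.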

In order to assist you we need some information.

Do you know how many users the system may support?
What are the concurrency requirements?
Do you know how many tables the databases will use?
Have you been told the data types being stored?
How many databases will there be on an instance?
What block size will be best for the particular drives?
Is mirroring or replication in your plans?
Does your budget enable you to purchase multiple spindles for the various sub-elements of SQL Server, such as separating data files, transaction logs and tempdb?
Will the system be heavily read or heavily written? OLTP or OLAP?

For small databases - say, smaller than 10 GB - most enterprise-class server hardware will be sufficient. Using RAID disks will be better still. Attach that to a SAN and it's pretty well a given that your transfer rates will be good.

Unfortunately the disk transfer rate is only one small part of the whole.

Can the rest of the I/O subsystem handle the necessary throughput?
Do you have the CPU power to handle requests?
How much RAM will the system have, especially in relation to the DB size?
How often will data be removed from the buffer cache? If not often, then your transfer speed is less critical after the system has been running a while (see the sketch after this list).
What connectivity do clients have? WAN? LAN?
Is this an N-tier application with services on other servers, or are the other applications running on the same system?
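
On the buffer cache question above, one quick check is the Page life expectancy counter - roughly how many seconds a page survives in the buffer cache. This sketch reads it from inside SQL Server; the object name shown assumes a default instance (named instances report under MSSQL$InstanceName instead of SQLServer):

-- Low or rapidly falling values mean data is being pushed out of cache
-- and re-read from disk, so raw transfer speed matters more.
SELECT object_name, counter_name, cntr_value AS seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';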

The bottom line is that nothing is examined in isolation.
Of course, if you get 10ms data file reads and 2ms transaction log writes, you're probably sitting pretty for most applications.

So... it depends on your requirements, budget and definition of acceptable performance.
answered Jul 09, 2012 at 01:57 PM by Blackhawk-17

I'm not sure I understand how index fragmentation relates to the rest of your questions. Sorry for my lack of understanding.

How many users can a system support? You have to collect a bunch of different metrics to understand the answer to a question like this. Two chapters of the book I wrote are all about collecting performance metrics; I'd suggest reading those chapters. You can estimate how many users you can support by measuring with the users you have now and extrapolating from there. If with 20 users we're using 20% CPU and disk latency is low, and with 30 users we're using 30% CPU but disk latency is high, then if we wanted to go to 80 users we could expect roughly 80% CPU, but the disk might be problematic... like that.

OLTP means a higher transaction rate with lots of writes, but you'll still see more reads than writes.

How much RAM in relation to DB size... it doesn't work like that. You can have a 300 GB database with only 16 GB of RAM and still have plenty of memory. Why? Because the majority of the data is not used. There is no fixed correlation between database size and RAM size. It depends on how much of the data is accessed regularly enough that you see reuse of the data in cache, rather than constantly hitting the disks. Again, this requires detailed measurements of your system. There's no one magic query that will give you all the information.
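
To see how much of each database is actually living in memory right now - the real working set, rather than a guess from database size - a sketch over sys.dm_os_buffer_descriptors looks like this:

-- Each buffer page is 8 KB. Note this scan can take a while
-- on servers with a large amount of RAM.
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024  AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;

If a 300 GB database shows only a few GB cached and performance is fine, that is the "most of the data is not used" case described above.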

My book is all about query tuning, and the basis of query tuning is measuring performance so that you understand what's running slow on your system and tune the right queries. It's called SQL Server 2012 Query Performance Tuning and is available from Amazon, Barnes & Noble or Apress.

Another book I recommend is the Accidental DBA's Guide to Troubleshooting. You can download a free e-book copy, or purchase the print copy from Amazon.

Both books have extensive sections on performance measurement.
answered Jul 16, 2012 at 08:56 AM by Grant Fritchey ♦♦

Thanks to everyone for answering, but I didn't get what I need.

Let me explain what I mean. When we deal with indexes, to discover fragmentation and decide whether to REBUILD or REORGANIZE, we have a measure to go by:

If the amount of fragmentation on your index is less than 10 percent, you really don't need to do anything.
When it's from 10 percent to 30 percent, you should reorganize your index.
Anything higher requires a rebuild.

We get the fragmentation from AVG_FRAGMENTATION_IN_PERCENT in sys.dm_db_index_physical_stats.
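
For reference, those thresholds translate directly into a query. This is just a sketch for the current database, using the DMF named above:

-- Fragmentation per index with the 10/30 percent rules applied.
SELECT OBJECT_NAME(ips.object_id)         AS table_name,
       i.name                             AS index_name,
       ips.avg_fragmentation_in_percent,
       CASE
           WHEN ips.avg_fragmentation_in_percent < 10  THEN 'leave it alone'
           WHEN ips.avg_fragmentation_in_percent <= 30 THEN 'REORGANIZE'
           ELSE 'REBUILD'
       END                                AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.index_id > 0;  -- skip heaps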

So the questions I found here are really good ones to ask, and I have prepared similar questions myself, but how will the answers help me? For example:

Q) Do you know how many users the system may support? A) If we say 20, how does that help? If we say 300, how does that help?

Q) Will the system be heavily read or heavily written? OLTP or OLAP? A) We have heavy OLTP.

Q) How much RAM will the system have, especially in relation to the DB size? What is the mechanism to calculate that?

I have read many books, and people have recommended others to me, but most of them talk in general terms. I couldn't find a book that gives me the mechanism to calculate it - for example, "if your DB is 20 GB you should have 4 GB of RAM" (this is only an example; this question is from Greg).

Is there a book that helps the DBA answer these questions - to be more specific, about the hardware?

answered Jul 13, 2012 at 09:03 AM by Troy_2012