
What is the best data type to use for large text?

I have a table (work_order) with a column (long_description) whose data type is text. We have found that the largest value stored in the column is 818 MB. When the application that accesses the data queries the table for the record containing that large text value, the response time is extremely slow.

We tried changing the data type to varchar(max) and turning ON the 'large value types out of row' option, but the response time is still slow.
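For reference, the changes we made were roughly the following (table and column names as in our schema; the dbo schema is assumed):

    -- change the column from text to varchar(max)
    ALTER TABLE dbo.work_order ALTER COLUMN long_description varchar(max);

    -- store large values off the data pages
    -- (note: existing values are not moved off-row until they are updated)
    EXEC sp_tableoption 'dbo.work_order', 'large value types out of row', 1;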

We need help determining how best to store this large text while improving response time.

Any suggestions would be greatly appreciated.

Thanks!
asked Aug 08, 2012 at 03:28 PM by clayp56

Thanks for the replies. I was wondering how to approach this given the suggestions. Should we try compression first? Would that be page or row compression?

Also, what is the method used to store a pointer to data stored in external files?

thanks,
Aug 08, 2012 at 05:22 PM clayp56
Compression will help with the speed of retrieving data from the IO subsystem, but it will not help with the network transport layer. The bigger slowdown in your scenario is almost certainly the network.
Aug 11, 2012 at 08:31 AM Pavel Pawlowski
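If you do try it, enabling page compression is a one-liner (a sketch, assuming the table from the question; note that ROW/PAGE compression does not compress LOB data stored off-row, so it will not help much with this particular column):

    -- rebuild the table with page compression (SQL Server 2008+, Enterprise edition)
    ALTER TABLE dbo.work_order REBUILD WITH (DATA_COMPRESSION = PAGE);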

4 answers:

Varchar(max) is most certainly the best data type for storing large text data in the newer versions of SQL Server.

The first question I would ask is whether you truly need to store that much text in the database. For truly large amounts, you could store just a pointer to external files and then call up the files as needed, perhaps even from another server so that it is not tying up the database.
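A minimal sketch of that pointer approach, with hypothetical table and column names:

    -- the database keeps only a reference to the document
    CREATE TABLE dbo.work_order_document (
        work_order_id int NOT NULL,          -- hypothetical key into work_order
        file_path nvarchar(260) NOT NULL     -- UNC path to the file on a file server
    );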

Aside from that, I would look at traditional optimization techniques. Is your code as efficient as possible? Tied into that, are you avoiding RBAR operations (in other words, are you using set-based code)? Do you have indexes that support your normal query patterns?
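For example (the column names here are purely illustrative), a covering index for a frequent lookup might look like:

    -- hypothetical index supporting a common query pattern
    CREATE NONCLUSTERED INDEX IX_work_order_lookup
        ON dbo.work_order (customer_id, created_date)  -- hypothetical key columns
        INCLUDE (status);                              -- hypothetical included column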

And while I tend to look at the hardware last, upgrading the hardware sometimes makes sense, especially if it is notably out of date or if demand has grown substantially since the original provisioning was done.
answered Aug 08, 2012 at 04:08 PM by TimothyAWiseman

If you query the table and do not need the large value immediately (e.g., when presenting a list on the client side), then query the rows without the big column.

Then, if you need the big value on the client side, you can query it separately.

The issue with the response time is that once the big column is in your query and the client fetches a row containing that much data, all of it has to be read by the client, and that takes a long time.

Another possibility is to fetch only the first few characters for a preview, using e.g. LEFT(BIGCOLUMN, 80), and to query the full value separately at the moment it is needed (see the sketch below).
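In T-SQL terms, the pattern looks roughly like this (column names other than long_description are assumptions):

    -- list query: leave the LOB column out entirely
    SELECT work_order_id, work_order_number    -- small columns only (names assumed)
    FROM dbo.work_order;

    -- detail query: fetch the big value for one row, only when the user asks for it
    SELECT long_description
    FROM dbo.work_order
    WHERE work_order_id = @work_order_id;

    -- or fetch just a short preview for the list
    SELECT work_order_id, LEFT(long_description, 80) AS description_preview
    FROM dbo.work_order;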

answered Aug 08, 2012 at 08:11 PM by Pavel Pawlowski

Only to add: if you have 818 MB of data in a column, then even on a 1 Gbps connection to the server the maximum theoretical throughput is 128 MB/s, so it will take well over 6 seconds just to retrieve the data for that column. And that is a theoretical maximum you will never actually reach on a 1 Gbps connection.
Aug 08, 2012 at 08:14 PM Pavel Pawlowski

Others have touched on it, but just to make it clear: changing the data type won't affect the speed of the query. You are asking for data, and that amount of data has to be shipped from the disk on the server to the client making the request. Whether it is VARCHAR(8000), NVARCHAR(MAX), or whatever, 818 MB is still 818 MB.

Compression will speed up getting the data off the disk, but that only helps disk access; the uncompressed data still has to travel over your network and obey the physical limitations that apply there.

To speed up the query, request less data.

Big columns in your database also make your backups slower and take up space on your tape. If the values are actual files, then shifting that content to a file server and using the database only to store metadata about where each file lives will certainly improve things in many ways.
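To see which rows are carrying the bulk (and bloating the backups), something like this will rank them by LOB size (the key column name is assumed):

    -- top 10 rows by size of the large column, in MB
    SELECT TOP (10)
           work_order_id,                                    -- assumed key column
           DATALENGTH(long_description) / 1048576.0 AS size_mb
    FROM dbo.work_order
    ORDER BY size_mb DESC;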
answered Aug 09, 2012 at 09:08 AM by Fatherjack
Adding to @TimothyAWiseman's answer: you can also consider data compression, which helps both performance and data storage. Note, though, that it is an Enterprise-edition feature, available in SQL Server 2008 and above.
answered Aug 08, 2012 at 04:22 PM by Cyborg