I have a table (work_order) that has a column (long_description) with a data type of text. We have found the largest value stored in the column is 818 meg. When the application that accesses the data queries the table for the record that contains the large text in the column, the response time is extremely slow.
We tried changing the data type to varchar(max) and turning ON the 'large value types out of row' option and the response time is still slow.
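For reference, the change was made along these lines (the exact syntax for your environment may differ slightly):

```sql
-- Convert the legacy TEXT column to VARCHAR(MAX)
ALTER TABLE work_order ALTER COLUMN long_description VARCHAR(MAX);

-- Store large values off-row so the base rows stay small
EXEC sp_tableoption 'work_order', 'large value types out of row', 1;
```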
We need help determining how to store this large text while improving the response time.
Any suggestions would be greatly appreciated. Thanks!
asked Aug 08, 2012 at 03:28 PM in Default
Varchar(max) is most certainly the best data type for storing large text in newer versions of SQL Server.
The first question I would ask is if you truly need to store that much text in the database. For truly large amounts, you could store just a pointer to external files and then call up the files, perhaps even on another server so it is not tying up the database.
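As a sketch of the pointer approach (the table and column names here are made up for illustration; adjust them to your schema):

```sql
-- Keep only metadata in the database; the document itself lives on a file server
CREATE TABLE work_order_document (
    work_order_id INT NOT NULL PRIMARY KEY,
    file_path     NVARCHAR(260) NOT NULL,  -- e.g. a UNC path to the file server
    file_size_mb  DECIMAL(10, 2) NULL,
    modified_date DATETIME NOT NULL DEFAULT GETDATE()
);
```

On SQL Server 2008 and later, FILESTREAM offers a similar database/file-system split while keeping the data under transactional control, so it is worth evaluating as well.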
Aside from that, I would look at traditional optimization techniques. Is your code as efficient as possible? Tied into that, are you avoiding RBAR operations (in other words, are you using set-based code)? Do you have indexes that support your normal query patterns? And while I tend to look at the hardware last, upgrading the hardware sometimes makes sense, especially if it is notably out of date or if demands have grown substantially since the original provisioning was done.
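For example, a covering index on the columns your list queries actually filter and return keeps those queries away from the big column entirely (the column names below are assumptions for illustration):

```sql
-- Supports lookups by status and date without ever touching long_description
CREATE NONCLUSTERED INDEX IX_work_order_status_date
    ON work_order (status, created_date)
    INCLUDE (work_order_number);
```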
answered Aug 08, 2012 at 04:08 PM
If you query the table and do not need the large value immediately (e.g. when populating a list on the client side), then query the rows without the BIG column.
Then, if the client needs the big value, query it separately.
The reason the response is slow is that once the BIG column is in your query and the client fetches a row containing that much data, all of it has to be read by the client, and that takes a long time.
Another possibility is to fetch only the first few characters as a preview, e.g. LEFT(BIGCOLUMN, 80), and query the full value separately at the moment it is actually needed.
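A sketch of this two-step pattern (column names other than long_description are assumptions; note that LEFT works on VARCHAR(MAX) but not on the legacy TEXT type, where you would use SUBSTRING instead):

```sql
-- Step 1: the list query returns only a short preview of the big column
SELECT work_order_id,
       LEFT(long_description, 80) AS description_preview
FROM   work_order;

-- Step 2: fetch the full value only for the single row the user opens
SELECT long_description
FROM   work_order
WHERE  work_order_id = @selected_id;
```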
answered Aug 08, 2012 at 08:11 PM
Others have touched on it, but just to make it clear: changing the datatype won't affect the speed of the query. You are asking for data, and that amount of data has to be shipped from the disk on the server to the client making the request. Whether it is VARCHAR(8000), NVARCHAR(MAX), or whatever, 818MB is still 818MB.
Compression will speed up getting the data off the disk but that is only helping disk access, the uncompressed data will have to travel over your network and obey the physical limitations that apply there.
To speed up the query, request less data. Big columns in your database are also making your backups slower and taking up space on your tape. If the values are actually files, then shifting that content to a file server and using the database only to store metadata about where each file is stored will improve things in many ways.
answered Aug 09, 2012 at 09:08 AM