question

SQLServerGajjar avatar image
SQLServerGajjar asked

Key Lookup with a massive number of columns in SQL Server

I have a simple query that selects many columns (more than 1,000). When I run it with a single column it returns in 2 seconds with a proper index seek; logical reads, CPU, and everything else are under thresholds. But when I select more than 1,000 columns it takes 11 minutes and the plan shows a key lookup. Have you folks faced this type of issue? Any suggestions?
sql-server-2012 · performance-tuning · optimization
1 comment

Is the query selecting from a single table with more than 1,000 columns (I hope not)? If not, how are the columns defined? Please add more information about how your query is constructed.
3 Likes

1 Answer

Grant Fritchey avatar image
Grant Fritchey answered
If you're searching only on the non-clustered index key column, then the non-clustered index is a covering index, meaning it contains all the columns needed to satisfy the query. When you select any columns that are not part of the non-clustered index, either at the key level or the leaf (INCLUDE) level, a lookup is necessary. You have a few choices that can help:

1. Change this index from non-clustered to clustered. Then it won't have to go to the heap or the existing clustered index to get the data, since the data will be stored with it.

2. Add the columns you need in the SELECT to the INCLUDE part of the index definition so that they get stored at the leaf level. That will make the non-clustered index covering again. However, with 1,000 columns this could be a nightmare, since it's going to radically increase your storage.

3. Normalize your database. 1,000 columns is almost certainly an indication that you've denormalized, or never normalized, your structures. Relational Database Management Systems (RDBMS) such as SQL Server are built to support relationships through enforced referential integrity, and they work better that way. In a properly normalized structure, performance is usually better.

4. Identify the columns that are actually going to be used regularly (no one looks at 1,000 columns' worth of data; there's a subset of the information that is used) and modify the query and the index to support that subset.

The real issue is the structure and your method of access. This isn't the best way to work with relational storage. If all you need is ID/value storage, you should go to a NoSQL solution such as DocumentDB.
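A minimal sketch of option 2, assuming a hypothetical table `dbo.BigTable` with search key `CustomerID` and a few frequently-read columns `Col1`, `Col2`, `Col3` (all names here are illustrative, not from the original post):

```sql
-- Rebuild the existing non-clustered index so it also carries the
-- columns the query reads, making it covering and removing the lookup.
-- Table, index, and column names are hypothetical; substitute your own.
CREATE NONCLUSTERED INDEX IX_BigTable_CustomerID
ON dbo.BigTable (CustomerID)
INCLUDE (Col1, Col2, Col3)   -- only the columns the query actually needs
WITH (DROP_EXISTING = ON);   -- replace the existing index in place

-- Verify: with the covering index, this query should show an index seek
-- and no Key Lookup operator in the actual execution plan.
SELECT CustomerID, Col1, Col2, Col3
FROM dbo.BigTable
WHERE CustomerID = 42;
```

You can confirm the lookup is gone by inspecting the actual execution plan, or by running the query with `SET STATISTICS IO ON` and checking that logical reads drop sharply.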