I have a query that is something like the snippet below.
This SQL snippet takes 33 seconds to execute. However, when modified to use a temporary table, it completes in just 4 seconds. Here is the modified snippet.
Please note that the Queue table has around 11 million rows. My understanding was that if we are selecting only a small amount of data (10 rows in this case), table variables are the way to go. However, this reversal in performance was a surprise to me. Can anyone shed some light on it?
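To illustrate the shape of the two versions, here is a simplified sketch, not my actual query; the table-variable form and the temp-table form only differ in how the intermediate rows are held (column names like `QueueID` and `CreatedDate` are placeholders):

```sql
-- Table-variable version (the slow one):
DECLARE @Top10 TABLE (QueueID INT PRIMARY KEY);

INSERT INTO @Top10 (QueueID)
SELECT TOP 10 QueueID
FROM dbo.Queue
ORDER BY CreatedDate;          -- placeholder ordering column

SELECT q.*
FROM dbo.Queue AS q
JOIN @Top10 AS t ON t.QueueID = q.QueueID;

-- Temp-table version (the fast one):
CREATE TABLE #Top10 (QueueID INT PRIMARY KEY);

INSERT INTO #Top10 (QueueID)
SELECT TOP 10 QueueID
FROM dbo.Queue
ORDER BY CreatedDate;

SELECT q.*
FROM dbo.Queue AS q
JOIN #Top10 AS t ON t.QueueID = q.QueueID;

DROP TABLE #Top10;
```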
Thanks in advance for your time.
This is likely to be due to the fact that inserting into a table variable prevents parallelism from occurring in most circumstances. Have a look at the differences in the execution plans generated for the two statements, and you'll probably find that's why you're suffering.
However, as Oleg commented, it's pretty likely that you're missing an index somewhere, because selecting 10 rows, even from a table that size, should be pretty much instant. In that case the difference between having parallelism or not would be negated by the fact that you would be using index seeks rather than scans.
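To check this, compare the actual execution plans for the two statements and look for parallelism operators and table scans on Queue. A sketch of how you might investigate (the index column names are assumptions, not taken from the original query):

```sql
-- Show per-statement I/O and timing so the two versions can be compared:
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Run both versions here and compare the "logical reads" figures,
-- plus the actual plans (Include Actual Execution Plan in SSMS).

-- If the plan shows a scan on Queue, an index on the column you
-- filter or order by (hypothetical name here) turns it into a seek:
CREATE NONCLUSTERED INDEX IX_Queue_CreatedDate
    ON dbo.Queue (CreatedDate)
    INCLUDE (QueueID);
```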
answered Sep 21, 2010 at 12:55 AM
Matt Whitfield ♦♦
My understanding is that #temp tables are created in tempdb, whereas a table variable is stored in memory with some tempdb I/O. With 11 million rows you are probably taxing your memory and causing it to page out, whereas SQL Server is able to handle dumping 11M rows to disk much more efficiently.
Another thing that pops into my head, which I read somewhere, is that SQL Server will not automatically create statistics on table variables. That is another handicap.
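The missing statistics show up in the estimated row counts: the optimizer typically estimates a table variable at 1 row regardless of what it holds, which can produce a poor join plan, while a temp table gets real statistics. A hedged sketch (table and column names are placeholders, not from the original query):

```sql
-- Table variable: no statistics; the plan is built assuming ~1 row.
DECLARE @t TABLE (ID INT PRIMARY KEY);
INSERT INTO @t SELECT TOP 10 QueueID FROM dbo.Queue;

-- OPTION (RECOMPILE) lets the optimizer see the actual row count
-- at execution time, partially compensating for the lack of stats:
SELECT q.*
FROM dbo.Queue AS q
JOIN @t AS t ON t.ID = q.QueueID
OPTION (RECOMPILE);

-- Temp table: statistics are created automatically and can be refreshed:
CREATE TABLE #t (ID INT PRIMARY KEY);
INSERT INTO #t SELECT TOP 10 QueueID FROM dbo.Queue;
UPDATE STATISTICS #t;
```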
In some cases temp tables are the solution, in other cases table variables are. IT DEPENDS.
answered Sep 15, 2010 at 12:58 PM