Yudhbir asked

Strange behaviour of select query with Join on CHANGETABLE

Hi Guys,

While working on optimizing a particular procedure in our application, I located the query that was taking around 60% of the whole batch. I then ran that query separately and observed some strange behaviour. The query is as below:

```sql
DECLARE @assgnmentId INT, @changeID INT
SET @changeID = 16548600
SET @assgnmentId = 125

SELECT 1
FROM cad.Assignment asi WITH (NOLOCK)
INNER JOIN CHANGETABLE(CHANGES cad.Assignment, @changeID) AS liAsi
    ON asi.AssignmentID = liAsi.AssignmentID
WHERE asi.AssignmentID = @assgnmentId
```

The above query takes a few minutes to complete, even though it fetches just one record with one column containing the value 1. Note that the join with the change table is on an identity column. However, if I simply select `*` instead of `1`, the query completes almost instantly (not even 1 second).

I searched a lot and found some texts on parameter sniffing (that may be the case with the original query in our procedure) and used `OPTION (OPTIMIZE FOR (@assgnmentId = NULL))`; now the query works well with both `*` and `1`:

```sql
DECLARE @assgnmentId INT, @changeID INT
SET @changeID = 16548600
SET @assgnmentId = 125

SELECT 1 -- * also works
FROM cad.Assignment asi WITH (NOLOCK)
INNER JOIN CHANGETABLE(CHANGES cad.Assignment, @changeID) AS liAsi
    ON asi.AssignmentID = liAsi.AssignmentID
WHERE asi.AssignmentID = @assgnmentId
OPTION (OPTIMIZE FOR (@assgnmentId = NULL))
```

Note that this does not work with `OPTION (RECOMPILE)`. The `sys.syscommittab` table appears to have more than 80,000,000 records, while the change tracking table for our `cad.Assignment` table seems to have fewer than 2,000 records. While inspecting the actual execution plan of our procedure (and of this long-running query), I saw it seeking an index on `sys.syscommittab`. In our database the retention time for change tracking is 2 days with auto cleanup on.
I saw some texts suggesting that SQL Server 2008 had problems cleaning up the syscommittab table, but we are using SQL Server 2012, so I think it should be OK, as I can't see anything relevant to 2012. Can you guys guide me on how to explore what's causing this? I have a solution using the OPTION clause, but I just don't know how it's working, as it doesn't seem to be a parameter sniffing issue (the problem was reproducible with literal values in the actual query). Note that this is not the case with other change tables.

Thanks,
Yudhbir
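To dig into whether the cleanup is actually keeping up, the change-tracking state can be inspected directly. This is a diagnostic sketch only; `sys.dm_tran_commit_table` exposes the rows of `sys.syscommittab`, and the table name comes from the question above:

```sql
-- Check the database-level change tracking settings (retention, auto cleanup)
SELECT database_id, retention_period, retention_period_units_desc, is_auto_cleanup_on
FROM sys.change_tracking_databases;

-- sys.dm_tran_commit_table exposes sys.syscommittab, so its row count
-- shows whether auto cleanup is keeping up with the retention period
SELECT COUNT_BIG(*) AS committab_rows
FROM sys.dm_tran_commit_table;

-- Compare the minimum valid version for the tracked table with the
-- current version; a huge gap suggests cleanup is not progressing
SELECT CHANGE_TRACKING_MIN_VALID_VERSION(OBJECT_ID('cad.Assignment')) AS min_valid_version,
       CHANGE_TRACKING_CURRENT_VERSION() AS current_version;
```

If `committab_rows` stays far above what two days of committed transactions should produce, that points at the cleanup problem rather than the query itself.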
performance change-tracking
1 comment


Grant Fritchey ♦♦ commented ·
Remember, the site works on votes. If an answer was helpful, click on the thumbs up next to it. You can vote for more than one answer this way. If an answer solves your problem, indicate this by clicking the check box next to it.
Grant Fritchey answered
There are a number of issues here that could be causing poor performance.

First up, you have a table valued function. Is it a multi-statement table valued function? If so, those are notorious performance bottlenecks. Since they use table variables, they have no statistics, and therefore the optimizer makes poor choices when implementing them.

It sounds like you have a lot of data movement. Are you keeping your statistics up to date manually, or just relying on the auto-update process? If you're counting on auto-update, your stats could be way out of line. Getting them updated, possibly with a full scan, will be helpful.

You do know that putting the NOLOCK hint on tables can lead to bad data being returned. Not only can you get dirty reads ("dog" when it should be "cat"), but because of page splits and rearrangements you can miss rows or get duplicate rows in your result sets. Instead of using NOLOCK, examine whether your system can support snapshot isolation levels. That will reduce your locking contention without the danger of bad data.

You might consider using OPTIMIZE FOR UNKNOWN instead of setting it to a NULL value. You'll get the same averaged sampling of the statistics.

I couldn't say more without seeing the execution plan.
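The two suggestions above could be sketched like this, reusing the query from the question (the database name is a placeholder, and READ_COMMITTED_SNAPSHOT is a database-wide change that should be tested for tempdb version-store impact first):

```sql
-- OPTIMIZE FOR UNKNOWN: the optimizer uses average density from the
-- statistics histogram instead of the sniffed parameter value
SELECT 1
FROM cad.Assignment asi
INNER JOIN CHANGETABLE(CHANGES cad.Assignment, @changeID) AS liAsi
    ON asi.AssignmentID = liAsi.AssignmentID
WHERE asi.AssignmentID = @assgnmentId
OPTION (OPTIMIZE FOR (@assgnmentId UNKNOWN));

-- Snapshot-based read committed as an alternative to NOLOCK:
-- readers see a consistent version instead of dirty data
ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON;
```

With READ_COMMITTED_SNAPSHOT on, the `WITH (NOLOCK)` hints can simply be dropped.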


Yudhbir answered
Hi Grant,

Thanks for your valued time. Meanwhile, I am closer to the original problem. I am aware of OPTIMIZE FOR UNKNOWN, but I think query hints should be used only when nothing else is left. I am proposing the query below, which runs quite quickly:

```sql
-- Check if any of the assignments have changed
IF EXISTS (
    SELECT * -- Leave it as * and do not change it to 1
    FROM CHANGETABLE(CHANGES cad.Assignment, @changeID) AS ct
    WHERE ct.AssignmentID = @assgnmentId
)
BEGIN
    SET @assignmentsChanged = 1
END
```

Unfortunately, the problem is very intermittent and was reproduced only on one of our test machines two days back. The function I am using is not my own, but the one Microsoft has provided for the change tracking functionality. I will definitely post the query plans once I am able to reproduce this again. However, the query plans differ simply on an index scan of the `sys.syscommittab` table, which is an internal SQL Server table. This table holds an entry for each committed transaction that manipulates any change-tracked table.

Moreover, another problem is that the `sys.syscommittab` table is growing steadily (containing more than 18,000,000 records). We have set a retention period of 2 days, but this is not cleaning up the table. It is a known issue from Microsoft (refer to http://support.microsoft.com/kb/2446860). We can't use this fix as it is intended for SQL Server 2008 and we are using 2012 (but the database has been configured to run at the 2008 compatibility level).

I think you can help me by looking into the change tracking functionality of SQL Server. I ran the DBCC CHECKDB command on my database and it pointed out that the `sys.syscommittab` table is corrupt. At the moment I am running DBCC CHECKDB with the REPAIR_ALLOW_DATA_LOSS option to get this fixed. :(
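For reference, a repair run of the kind described above generally looks like the sketch below. The database name and backup path are placeholders; REPAIR_ALLOW_DATA_LOSS can discard pages, so a full backup beforehand is essential:

```sql
-- Take a full backup first: the repair may deallocate pages to restore consistency
BACKUP DATABASE YourDatabase
    TO DISK = N'D:\Backups\YourDatabase_prerepair.bak';

-- Repair requires exclusive access to the database
ALTER DATABASE YourDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

DBCC CHECKDB (N'YourDatabase', REPAIR_ALLOW_DATA_LOSS);

-- Return the database to normal use and re-check consistency
ALTER DATABASE YourDatabase SET MULTI_USER;
DBCC CHECKDB (N'YourDatabase');
```

Restoring from a known-good backup is normally preferable to REPAIR_ALLOW_DATA_LOSS when one exists.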
3 comments


Grant Fritchey ♦♦ commented ·
I don't have detailed knowledge of change tracking, sorry. The corruption issue you'll have to fix, but running CHECKDB with data loss can result in data beyond that one table going away. Be very cautious here. It sounds like you don't have regular consistency checks and backups in place; I'd get that done now and worry about performance later. Once the consistency issues are fixed, it still sounds like you may be hitting statistics issues. How are they maintained?
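If stale statistics are the suspect, they can be checked and refreshed manually along these lines (using the table from the question):

```sql
-- See when each statistics object on the table was last updated
SELECT s.name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('cad.Assignment');

-- Refresh all statistics on the table with a full scan
-- (more accurate than the default sampled update, but slower)
UPDATE STATISTICS cad.Assignment WITH FULLSCAN;
```

Old `last_updated` dates on a table with heavy data movement would support the stale-statistics theory.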
Yudhbir commented ·
Hi Grant,

I am now able to empty the records from the `sys.syscommittab` table. The DAC connection provides access to this internal table, so using a DAC connection I deleted the records from it and finally got rid of the long-running queries. DBCC CHECKDB showed me inconsistencies only in this table:

```
Table error: table 'sys.syscommittab' (ID 2089058478). Data row does not have a matching index row in the index 'si_xdes_id' (ID 2). Possible missing or invalid keys for the index row matching:
Msg 8955, Level 16, State 1, Line 1
Data row (1:1430579:0) identified by (xdes_id = 9200628084 and commit_ts = 54463142) with index values 'xdes_id = 9200628084 and commit_ts = 54463142'.
```

Thus I think CHECKDB with data loss won't cause a problem. What do you say?

I am not sure how statistics are being maintained: "Auto Update Statistics" is set to True and "Auto Update Statistics Asynchronously" is False.

I am very surprised how simply replacing `*` with `1` in the select query picks a different execution plan...
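For completeness, the DAC cleanup described above amounts to roughly the sketch below. This is unsupported (Microsoft does not support modifying internal system tables), the cutoff expression is an assumption, and a backup should be taken first:

```sql
-- Connect through the Dedicated Admin Connection, e.g.:
--   sqlcmd -S YourServer -A
-- Only a DAC session can see and modify internal tables like sys.syscommittab.

-- A conservative (assumed) cutoff: nothing newer than the minimum valid
-- version of any tracked table should be removed
DECLARE @cutoff BIGINT =
    CHANGE_TRACKING_MIN_VALID_VERSION(OBJECT_ID('cad.Assignment'));

-- UNSUPPORTED: directly delete stale rows from the internal commit table
DELETE FROM sys.syscommittab
WHERE commit_ts < @cutoff;
```

Anyone on a build where the cleanup bug is fixed should rely on the automatic cleanup instead of this workaround.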
Grant Fritchey ♦♦ commented ·
While the corruption may only be in that table, you don't know how the repair will deal with delinked pages. You may lose a page of data in that table only, or in anything else. It's a gamble. Replacing the `1` with `*` allows the optimizer to look at the table and determine indexes that might help. The `1` isn't part of any of the tables, so it doesn't help.


