Why is 'where a = 3' faster than 'set @x = 3; where a = @x'?
Here is part of a long and complex inline table query:

    declare @HBID bigint = 3
    .....
    join HB.HBP.HBV_ValueIntALL v
        on v.HBValue = @HBID
        and v.SQLID = r.SQLID
        and v.FieldName = f.Name
        and v.ModuleID = r.ModuleID

The entire query runs in 30 seconds. It runs in 1 second if I instead have:

    join HB.HBP.HBV_ValueIntALL v
        on v.HBValue = 3
        and v.SQLID = r.SQLID
        and v.FieldName = f.Name
        and v.ModuleID = r.ModuleID

Can anyone please help me get the better performance, but with a parameter?
I see two potential culprits here.

The most likely cause is that the column HBValue is not a bigint but some other type that can be implicitly converted to one. When comparing two values of different types, SQL Server will attempt an implicit conversion if possible, and those conversions can only go from a type with a smaller range to one with a larger range. (See the Implicit Conversions section [here].) If HBValue is an int, SQL Server cannot implicitly convert the bigint variable down to int, so it must instead convert every column value from int to bigint. Note that the hard-coded literal 3 is typed as an int, so that version needs no conversion at all. We had a similar issue recently because an ORM was forcing implicit conversions from varchar to nvarchar.

The other possibility is parameter sniffing, but that seems less likely because you are using the same value in your example. With parameter sniffing, a plan is cached based on the first value seen, and that plan can be much less efficient for a different value. When the two queries are hard-coded, each gets its own plan; but when a parameter is used, SQL Server will attempt to reuse the plan that was already generated, because the only difference between executions is the parameter value.

Like I said, I think the data type is the most likely issue. If you do not have a tool like SSMS to easily check that with the Object Explorer, you can run this query to confirm:

    SELECT *
    FROM INFORMATION_SCHEMA.COLUMNS c
    WHERE c.TABLE_SCHEMA = 'myschema'
      AND c.TABLE_NAME = 'mytable'
      AND c.COLUMN_NAME = 'HBValue';
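If the type mismatch does turn out to be the cause, the simplest fix is to declare the variable with the column's actual type, so the conversion happens on the variable side (once) rather than on the column side (once per row). This is a sketch under the assumption that HBValue is int; the table and column names are taken from the question, but verify the actual type with the query above first:

    -- Assuming the INFORMATION_SCHEMA check shows HBValue is int:
    -- declare the variable as int so no column-side conversion is needed.
    declare @HBID int = 3;

    -- ... rest of the original query unchanged ...
    join HB.HBP.HBV_ValueIntALL v
        on v.HBValue = @HBID      -- int = int: no implicit conversion
        and v.SQLID = r.SQLID
        and v.FieldName = f.Name
        and v.ModuleID = r.ModuleID

If it instead turns out to be parameter sniffing, adding OPTION (RECOMPILE) to the statement forces SQL Server to build a fresh plan for the current parameter value on every execution, at the cost of compiling each time.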