dmeenan asked

SQL 2008 Row_Number SLOW with variables, FAST without

First, let me apologize if this is posted in the wrong forum, or is too esoteric, etc. That said... we use some COTS software that doesn't use stored procedures, just parameterized SQL via sp_executesql (SQL Server caches execution plans for the parameterized SQL much as it would for a compiled stored proc, for the most part). Perhaps they did this to remain platform agnostic? The SQL Server tables in our system are often in the billion-row range, so if the query execution plan is bad, users suffer.

In this case, the as-delivered parameterized SQL hits a view that uses a ranking function. Occasionally, I find that SQL doesn't choose a good execution plan for a query ONLY when it is wrapped in sp_executesql (or uses variables): it ignores available indexes and scans instead! A parameter-sniffing issue? Column stats and index fragmentation are not the problem, AFAIK. As a result, I've had to rewrite some views to stop using ranking functions when they hit deep tables, but I can't understand or explain why the code is fast when I run it with explicit values in the WHERE clause versus with variables/sp_executesql.

Shown below is a simple test case that very closely mimics what I see in production. Can anyone explain what is going on? Why is a ranking function rendered non-functional with variables/parameterized SQL, EXCEPT when used in a stored procedure or with explicit values? Using terms like sargable or binding is okay :-)

select @@version =
Microsoft SQL Server 2008 (SP2) - 10.0.4000.0 (X64)
	Sep 16 2010 19:43:16
	Copyright (c) 1988-2008 Microsoft Corporation
	Enterprise Edition (64-bit) on Windows NT 6.0 (Build 6002: Service Pack 2)

------------------------
-- Create test tables --
------------------------
use tempdb
go
set nocount on
go
if object_id('tblPrimary') is not null drop table dbo.tblPrimary
go
if object_id('tblSecondary') is not null drop table dbo.tblSecondary
go
create table dbo.tblPrimary (RecID int identity(1,1) 
	constraint PK_tblPri primary key clustered, 
	EntityID int, 
	Acct char(3))
go
create table dbo.tblSecondary (RecID int identity(1,1) 
	constraint PK_tblSec primary key clustered, 
	AdminDate datetime)
go

--------------------------
-- Populate test tables --
--------------------------
declare @Acct char(3), @Counter int; set @Counter = 1
while @Counter <= 100000
begin

	select @Acct = case when datepart(ms,getdate()) % 2 = 0 then 'AAA' else 'BBB' end 
	insert into dbo.tblPrimary (EntityID, Acct) values (@Counter%5000, @Acct)

	insert into dbo.tblSecondary(AdminDate) 
		select cast(cast( getdate() as int)-((datepart(s,getdate())% 5)+5) * 
			rand(cast(cast(newid()as binary(8))as int))as datetime) AdminDate

	if @Counter % 500 = 0 print 'loop ' + cast(@counter as varchar(7))

	set @Counter = @Counter + 1 
end
create index IDX_tblPrimary_EntityID 
	on dbo.tblPrimary (EntityID)
go
create index IDX_tblSecondary_AdminDate 
	on dbo.tblSecondary(AdminDate desc)
go

-------------------------
-- Create Views, Sproc --
-------------------------
-----------------------------------------------
-- The COTS-supplied view; uses a Ranking    --
-- function with ORDER BY clause. Works fine --
-- with explicit values, not so much with    --
-- variables                                 --
-----------------------------------------------
if object_id('vwMostRecentDate_ByAcct') is not null 
	drop view dbo.vwMostRecentDate_ByAcct
go
create view dbo.vwMostRecentDate_ByAcct
as
	select 
		RecID, 
		AdminDate,
		EntityID,
		Acct
	FROM (select p.RecID, s.AdminDate, p.Acct, p.EntityID,  
		ROW_NUMBER() over (partition by EntityID, Acct 
			Order by AdminDate desc) as RowNum 
		FROM dbo.tblPrimary p with(nolock) 
		inner join dbo.tblSecondary s with(nolock) on s.RecID = p.RecID
		) t where t.RowNum = 1
go

---------------------------------------------------
-- Re-worked view; plays nice with sp_executesql --
-- Works fine however the data are pulled        --
---------------------------------------------------
if object_id('vwMostRecentDate_ByAcct_WorkAround') is not null 
	drop view dbo.vwMostRecentDate_ByAcct_WorkAround
go
create view dbo.vwMostRecentDate_ByAcct_WorkAround
as
	select 
		p.RecID, 
		s.AdminDate,
		p.EntityID,
		p.Acct
	FROM dbo.tblPrimary p with(nolock)
	inner join (select p.EntityID, p.Acct, max(s.AdminDate) AdminDate  
				FROM dbo.tblPrimary p with(nolock) 
				inner join dbo.tblSecondary s with(nolock) on s.RecID = p.RecID
				group by p.EntityID, p.Acct
		) x on x.EntityID = p.EntityID and x.Acct = p.Acct
	inner join dbo.tblSecondary s on s.RecID = p.RecID 
		and s.AdminDate = x.AdminDate
go

------------------------------------------
-- Using a stored proc, with Row_Number --
-- This is just academic, since the app --
-- doesn't use sprocs                   --
------------------------------------------
if object_id('spMostRecentDate_ByAcct') is not null 
	drop proc dbo.spMostRecentDate_ByAcct 
go
create proc dbo.spMostRecentDate_ByAcct (@EntityID int, @Acct char(3))
as
	select 
		RecID, 
		AdminDate,
		EntityID,
		Acct
	FROM (select p.RecID, s.AdminDate, p.Acct, p.EntityID,  
		ROW_NUMBER() over (partition by EntityID, Acct 
			Order by AdminDate desc) as RowNum 
		FROM dbo.tblPrimary p with(nolock) 
		inner join dbo.tblSecondary s with(nolock) on s.RecID = p.RecID
		where p.EntityID = @EntityID and p.Acct = @Acct
		) t where t.RowNum = 1 
go

---------------
-- Run tests --
---------------
set statistics io on
set statistics time on
---------------------------
-- INDEX SEEKS = FAST!!! --
---------------------------
--Stored proc, using Row_Number --
	exec dbo.spMostRecentDate_ByAcct @EntityID =6, @Acct='aaa'
--Not using Row_number, with parameters
	declare @EntityID int, @Acct char(3); select @EntityID = 6, @Acct = 'AAA'
	select *, 'Fast, no Row_Number, w/variables' [TestCase] 
		from dbo.vwMostRecentDate_ByAcct_WorkAround 
			where entityid = @EntityID and Acct = @Acct
--Using Row_Number
	select *, 'Fast, uses Row_Number, explicit values' [TestCase] 
		from dbo.vwMostRecentDate_ByAcct where entityid = 6 and Acct = 'aaa'
--Not using Row_Number
	select *, 'Fast, no Row_Number, explicit values' [TestCase] 
		from dbo.vwMostRecentDate_ByAcct_WorkAround where entityid = 6 and Acct = 'aaa'
--Not using Row_number, using sp_executesql
	exec sp_executesql N'
		SELECT *, ''Fast, no Row_number, sp_executesql'' [TestCase] 
			from dbo.vwMostRecentDate_ByAcct_WorkAround 
			where entityid = @EntityID and Acct = @Acct',
			N'@EntityID int, @Acct  nchar(3)',
			@EntityID = 6, @Acct=N'AAA'
go
--------------------------------			
-- TABLE SCAN, SORT = SLOW!!! --
--------------------------------
--Using Row_Number, with Parameters
	declare @EntityID int, @Acct char(3); select @EntityID = 6, @Acct = 'AAA'
		select *, 'Slow, uses Row_Number, w/variables' [TestCase]  
			from dbo.vwMostRecentDate_ByAcct
			where entityid = @EntityID and Acct = @Acct
--Using Row_Number, using sp_executesql
	exec sp_executesql N'
		SELECT *, ''Slow, uses Row_number, sp_executesql'' [TestCase] 
			from dbo.vwMostRecentDate_ByAcct
			where entityid = @EntityID and Acct = @Acct',
			N'@EntityID int, @Acct  nchar(3)',
			@EntityID = 6, @Acct=N'AAA'


  
Tags: sql-server-2008, ranking, optimizer

Usman Butt answered

Personally, I do not like dynamic queries, as there are so many factors involved. But in your case, the problem is not with the **dynamic query**; it pertains to the nature of the **window functions** (ranking functions are a subset of the window functions).

Window functions operate on a result set that is logically already assembled. First the intermediate set is produced (all unions, WHERE conditions, GROUP BY, and HAVING are applied, everything except the final ordering of the data), and only then is the window function evaluated over that set. This is also why these functions may appear only in the SELECT list or the ORDER BY clause. So, once you created the VIEW, the optimizer always had to build the full windowed data set for the view first and then move on to the rest of the query. In effect, the query becomes:

      SELECT    *
      ,         'Slow, uses Row_Number, w/variables' [TestCase]
      FROM      ( SELECT    *
                  FROM      ( SELECT    p.RecID,s.AdminDate, p.Acct, p.EntityID
                              ,         ROW_NUMBER() OVER ( PARTITION BY EntityID,
                                                            Acct ORDER BY AdminDate DESC ) AS RowNum
                              FROM      dbo.tblPrimary p WITH ( NOLOCK )
                                        INNER JOIN dbo.tblSecondary s WITH ( NOLOCK )
                                        ON s.RecID = p.RecID
                            ) A
                  WHERE     RowNum = 1
                ) B
      WHERE     entityid = @EntityID
                AND Acct = @Acct 

Hence the plan is: find all the rows via index scans (on the RecID column), which is far more costly, and only then apply the filter.

Now, how to overcome this? We have to let the optimizer know that there are still filters to apply before the window is computed, and an explicit recompile does that. (This is not possible for all window functions: ranking functions can take this benefit, but aggregate window functions cannot.) With the recompile, the optimizer applies the WHERE filters, GROUP BY, etc. before building the window, which results in the same execution plan as the fast-running scripts you mentioned above:

--Using Row_Number, with Parameters
declare @EntityID int, @Acct char(3); select @EntityID = 6, @Acct = 'AAA'
        select *, 'Slow, uses Row_Number, w/variables' [TestCase]  
            from dbo.vwMostRecentDate_ByAcct
            where entityid = @EntityID 
            and Acct = @Acct OPTION (RECOMPILE)
--Using Row_Number, using sp_executesql
    exec sp_executesql N'
        SELECT *, ''Slow, uses Row_number, sp_executesql'' [TestCase] 
            from dbo.vwMostRecentDate_ByAcct
            where entityid = @EntityID 
            and Acct = @Acct OPTION (RECOMPILE)',
            N'@EntityID int, @Acct  nchar(3)',
            @EntityID = 6, @Acct=N'AAA'

Now, to this part of your question:

Why is a ranking function rendered non-functional with variables/parameterized SQL,

As explained above, it has nothing to do with variables/parameterized SQL. It is the nature of the window functions.

And for the remaining part:

EXCEPT if used in a stored procedure or with explicit values?

That is not the case either. You are not using the VIEW inside the stored procedure; you are using a plain query, so the WHERE clause sits inside the derived table and the optimizer applies the filters first and only then builds the window. You can test this by altering the stored procedure to point at the view vwMostRecentDate_ByAcct instead, i.e.

ALTER proc [dbo].[spMostRecentDate_ByAcct] (@EntityID int, @Acct char(3))
as
    select 
        RecID, 
        AdminDate,
        EntityID,
        Acct
    FROM [dbo].[vwMostRecentDate_ByAcct]
        where EntityID = @EntityID and Acct = @Acct

You will see the same slow behavior as with the variables/parameterized queries.
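For example, re-running the proc with the same values as the earlier fast test should now produce the scan-based plan:

    exec dbo.spMostRecentDate_ByAcct @EntityID = 6, @Acct = 'AAA'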

So, using window functions in VIEWs can result in bad performance.


WilliamD answered
@Usman Butt - I would be careful with a blanket statement like "do not use window functions in views". I have very successfully implemented window functions in views in the past. As you said yourself, the view is evaluated as a "normal" select statement by the query optimiser.

What needs considering here is what the OP has already mentioned as a possible reason for poor performance - parameter sniffing. This can and will rear its ugly head when using parameterised sql. I reckon parameter sniffing and stale stats are probably the two most common causes of bad query performance.

I would suggest that the OP take a careful look at the plan cache and determine whether there is plan re-use, and whether the cached plan differs from the fast-running singleton query (a sketch of such a check follows below). I suspect they will find that the plans are different and the cached plan is just plain bad for the values being passed in.

Something else to consider: the query will probably run blazingly fast for a certain set of values (maybe only one set, but maybe more). The question is whether the query plan is a good one for the majority of calls with differing values, or bad for the majority. You then need to decide what to do. If the majority of values are served well, is it worth worrying about the handful of poor performers? (Could be a valid argument, not saying it is right or wrong.) If it is mainly poor, then the plan needs to be purged from cache and a better one generated. Could that be achieved through different indexing, table structure, query design, or missing/out-of-date statistics? Could a plan guide or query options help?

What I am basically trying to say is that the root of the problem is probably not one single thing, rather a collection of reasons. Take a good look at the points I mentioned and I'm certain you'll find something that makes the query work better.
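As a minimal sketch of that plan-cache check (this uses the standard sys.dm_exec_* DMVs; the LIKE filter on the view name is just illustrative):

    -- Compare use counts and plan shape of cached plans that touch the view
    SELECT TOP (20)
        cp.usecounts,
        cp.objtype,
        st.[text],
        qp.query_plan
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
    WHERE st.[text] LIKE N'%vwMostRecentDate_ByAcct%'
    ORDER BY cp.usecounts DESC;

Opening the query_plan XML for the parameterized statement and for the fast ad-hoc statement side by side should show whether they differ.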

SQL Kiwi commented:
Good general advice, but parameter-sniffing is not the root cause in this particular case.
Usman Butt commented:
@WilliamD Yes, I agree that my statement was a bit rigid, especially for a follower of "it always depends". I did say it **could** lead to a bad situation, but I guess that is not enough, so I will change it to a more responsible statement.

As far as stale stats, parameter sniffing, and plan re-use are concerned: in the test environment I updated the statistics WITH FULLSCAN, freed all the cached plans, supplied different values, and tried to avoid parameter sniffing by assigning the parameter values to local variables in the stored procedure - all in vain. Only when I make sure the statement is recompiled is an efficient plan generated. I call it efficient because it uses index seeks to apply the filters, so only 6 rows are returned from the seeks and the rest of the execution follows from there. Without a RECOMPILE, the optimizer starts with a HASH JOIN (a MERGE JOIN if MAXDOP 1 is used), which clearly illustrates that the optimizer is not treating the VIEW as a normal statement but as a subquery (just for brevity). So you can imagine how harmful the index scans on 100000 rows in both tables are. I will dig into more detail once I have more time to spend.
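For reference, a sketch of the re-test commands described above (test environment only; note that DBCC FREEPROCCACHE flushes the entire plan cache):

    UPDATE STATISTICS dbo.tblPrimary WITH FULLSCAN;
    UPDATE STATISTICS dbo.tblSecondary WITH FULLSCAN;
    DBCC FREEPROCCACHE; -- clears all cached plans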
WilliamD commented:
@Usman - I understand that you were testing and seeing that in your tests. However, a big issue I see here is the difference between your system and the OP's. The OP talked about a table with **billions** of rows. I can almost guarantee that the stats on that table will be pretty poor, unless the data is in a nice spread that fits how stats are collected in SQL Server. I am not saying your approach is wrong, just pointing out that the OP can take a look at the problem from another angle (see the sketch below). My use of "you" may have been misleading in my answer; I was aiming it at the OP/no-one in particular rather than at you directly.
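One quick way to eyeball the density and histogram on the repro's index (the production table and index names would of course differ):

    -- Inspect the statistics behind the EntityID filter
    DBCC SHOW_STATISTICS ('dbo.tblPrimary', IDX_tblPrimary_EntityID);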
Usman Butt commented:
@WilliamD I did not take any of it personally, and never will :) You have contributed immensely, and I respect everyone who has always been there to help the community. I was just trying to make clear that the problem lies somewhere else; this issue is different from the general case, and so it turned out to be. But can you please share the details of your window-function views? I ask only to find out whether there are situations where window functions in a VIEW could help (other than the "no further filters to be applied" case, which I see as unsafe in the long run).
dmeenan answered
Once again I've been gobsmacked by the sheer power of an answer from the "land of the long white cloud" (NZ); SQL Kiwi nailed it. This explanation was both accurate (verified) and devoid of extraneous rhetorical flourish (unlike this post). Well done! Have an 8 Wired Batch 18 on me. Cheers.

SQL Kiwi answered
*[Deleted my previous answer, because I have changed my mind about some of the details following deeper investigation (blog post to come).]*

When `OPTION (RECOMPILE)` is specified (on suitably recent SQL Server builds) there is no problem: the [Parameter Embedding Optimization][1] means the optimizer sees the actual value of the variable at the time the execution plan is produced, so the resulting plan (which will not be cached) is the same as if a constant had been specified in the query instead of a variable. It is safe to push the constant filter down when the `ROW_NUMBER`'s `PARTITION BY` clause references the filtered column(s), because pushing the filter then cannot affect the numbering of rows.

Where `OPTION (RECOMPILE)` is not used, things are a little more complex. The intent of the query writer is that the predicates involving the variables should be pushed all the way down the plan by the optimizer to the leaf level, eventually resulting in a seek operation. The optimizer does contain logic to do this in general, because pushing predicates as close as possible to the leaves of the plan ensures rows are eliminated as early as possible; this is a heuristic that normally pays off extremely well. Unfortunately, there is an optimizer limitation that prevents a variable predicate being pushed down past a Sequence Project/Segment combination (the iterators that implement ranking functions).

My strong preference is not to use ranking functions in this way within views or subqueries. There is considerable scope to produce plans that will not perform well in practice, as this example demonstrates. More to the point, the intended usage in this question is that of a parameterized view - otherwise known as an in-line table-valued function. I appreciate that the view may be generated by code, but that doesn't mean the code can't be improved. Using a parameterized view allows us to specify explicitly where we want the filtering to occur:

    CREATE FUNCTION dbo.MostRecentByEntityAndAccount
    (
        @EntityID integer,
        @Account char(3)
    )
    RETURNS TABLE
    WITH SCHEMABINDING
    AS RETURN
        WITH Numbered AS
        (
            SELECT
                tp.RecID,
                ts.AdminDate,
                tp.Acct,
                tp.EntityID,
                RowNum = ROW_NUMBER() OVER (
                    PARTITION BY tp.EntityID, tp.Acct
                    ORDER BY ts.AdminDate DESC)
            FROM dbo.tblPrimary AS tp
            JOIN dbo.tblSecondary AS ts
                ON ts.RecID = tp.RecID
            WHERE
                -- This is where we want to filter!
                tp.EntityID = @EntityID
                AND tp.Acct = @Account
        )
        SELECT
            Numbered.RecID,
            Numbered.AdminDate,
            Numbered.EntityID,
            Numbered.Acct
        FROM Numbered
        WHERE Numbered.RowNum = 1;
    GO

    DECLARE
        @EntityID integer = 6,
        @Acct char(3) = 'AAA';

    SELECT *
    FROM dbo.MostRecentByEntityAndAccount(@EntityID, @Acct) AS mrbeaa
    WHERE mrbeaa.EntityID = @EntityID
        AND mrbeaa.Acct = @Acct;

[1]: http://blogs.msdn.com/b/grahamk/archive/2009/11/18/changed-behaviour-of-option-recompile-syntax-in-sql-server-2008-sp1-cumulative-update-5.aspx
