jelongpark asked

Performance issue on SQL 2019 Standard edition

We have recently migrated from SQL Server 2012 Enterprise Edition to SQL Server 2019 Standard Edition. We don't use any Enterprise-only features, so restoring the database backup to the 2019 server was no problem.

The new and old servers have almost identical hardware: 128 GB RAM and 32 CPU cores. Only the Windows and SQL Server versions differ.

Right after restoring the 2012 database backup on the 2019 server, we changed the database compatibility level to 150. Then things started happening.

We found significant performance degradation in several views, stored procedures, and Agent job scripts.
In particular, queries with several UNIONs or CTEs run something like 100-200 times slower than before.

But if I set the database compatibility level back to 110, performance returns to normal!


I am quite confused about how this happens. Any opinions are welcome.


Tags: performance, migration, upgrade

Jeff Moden answered

I only have one honest opinion on this, but it's not fit for print. :D 2019 is yet another proof of the old adage that "Change is inevitable... change for the better is NOT!" :( 2019 has me scared to death because we're talking about "upgrading" from 2016 in the near future. After the train wreck known as 2017 RTM (which I installed on my laptop), I'm not looking forward to any more changes. Like I tell people, I love the now out-of-support versions because MS isn't screwing them up with more changes. :D

Since you're going from 2012 to 2019, you might want to change the setting for the "Legacy Cardinality Estimator". That setting wasn't available in 2016 when we migrated from 2012, so we had to use the trace flag to force the old estimator instead, but it fixed all the problems the migration caused without relegating us to an older compatibility level.
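For anyone who wants the concrete commands, here is a minimal sketch of both routes (the database name is a placeholder; both options are documented SQL Server features):

    -- Per database (SQL Server 2016 and later); run in the database itself:
    USE [YourDb];
    ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;

    -- Or instance-wide via the trace flag Jeff mentions
    -- (global until restart; add -T9481 as a startup parameter to persist):
    DBCC TRACEON (9481, -1);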



Magnus Ahlkvist answered

My routine for doing upgrades to SQL Server 2016 or later is:

- Upgrade the instance. Keep databases on their old compatibility level.

- Enable Query Store for the databases and make sure it keeps enough data. "Enough" means roughly three times a full period of activity, including periodic batch jobs like month-end closings. That way you get a recorded baseline plus twice that much again for recording changes in behaviour (see the first sketch after this list).

- Change the compatibility level to match the version of the instance.

- Use Query Store data to find statements with regressions. The built-in regressed-queries report usually isn't good enough, so I write my own scripts for querying Query Store data (see the second sketch after this list).

- Fix queries with regression (this is the tough part...)
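A minimal sketch of the Query Store and compatibility steps above (the database name, storage size and retention numbers are placeholders you should size against your own batch cycles):

    -- Enable Query Store and keep enough history for ~3 business cycles:
    ALTER DATABASE [YourDb] SET QUERY_STORE = ON;
    ALTER DATABASE [YourDb] SET QUERY_STORE (
        OPERATION_MODE = READ_WRITE,
        MAX_STORAGE_SIZE_MB = 2048,
        CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 90),
        QUERY_CAPTURE_MODE = AUTO
    );

    -- Once a baseline is recorded, raise the compatibility level:
    ALTER DATABASE [YourDb] SET COMPATIBILITY_LEVEL = 150;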
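And as a rough illustration of the kind of home-grown regression script I mean (not my exact script; the 10x threshold is an arbitrary assumption, and avg_duration is in microseconds), this flags statements whose average duration differs wildly between recorded plans:

    SELECT TOP (20)
           q.query_id,
           qt.query_sql_text,
           x.best_avg_duration_us,
           x.worst_avg_duration_us
    FROM (
        -- Per query: the best and worst average duration across its plans
        SELECT p.query_id,
               MIN(rs.avg_duration) AS best_avg_duration_us,
               MAX(rs.avg_duration) AS worst_avg_duration_us
        FROM sys.query_store_plan AS p
        JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
        GROUP BY p.query_id
        HAVING MAX(rs.avg_duration) > 10 * MIN(rs.avg_duration)
    ) AS x
    JOIN sys.query_store_query AS q ON q.query_id = x.query_id
    JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
    ORDER BY x.worst_avg_duration_us DESC;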

If there are too many queries with regressions, you might want to consider permanently following Jeff Moden's advice and using legacy cardinality estimation for the database in question. If only a few queries regressed, work on getting those specific queries to perform better; even there, the fast fix is usually to enable legacy cardinality estimation.
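If you go the per-query route (SQL Server 2016 SP1 and later), a hint can pin just the regressed statement to the old estimator while the rest of the database stays on the new CE. A sketch with a made-up table and predicate:

    SELECT OrderId, OrderDate
    FROM dbo.Orders               -- hypothetical table
    WHERE CustomerId = 42         -- hypothetical predicate
    OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));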

The new cardinality estimation formula is sometimes absurd. An example: when a value is higher than the highest value recorded in a statistics histogram, the old behaviour was to estimate one row returned. With trace flag 2389, the estimate would instead come from the density vector. With the new CE, the estimate is the lower of SQRT(total row count of the table) and the density-vector value. In many cases, SQRT(total row count) is muuuuuuuuch worse than using the density vector. I've yet to find a case where the density vector is a worse call than the SQRT calculation.
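To put rough numbers on that (all figures made up for illustration): take a 10,000,000-row table whose column has 100 distinct values.

    Density vector estimate: 10,000,000 * (1/100) = 100,000 rows
    SQRT estimate:           SQRT(10,000,000)     = ~3,162 rows
    New CE picks the lower:  ~3,162 rows

If the out-of-histogram value really returns around 100,000 rows, that's roughly a 30x underestimate.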

There are more oddities.

With that said: it's hard to estimate cardinality. I use a Cars table as an example. The table has columns BrandName, ModelName and Color. BrandName and ModelName are clearly hierarchical - ModelName is functionally dependent on BrandName - while Color is independent of the other two columns. When all three appear as predicates in a query, how should we estimate? In the old CE, it's a simple

selectivity-of-first-predicate x selectivity-of-second-predicate x selectivity-of-third-predicate etc

In the new CE, SQL Server expects a hierarchical dependency, and will therefore do:

selectivity-of-first-predicate x SQRT(selectivity-of-second-predicate) x SQRT(SQRT(selectivity-of-third-predicate)) etc

Sometimes this makes a huge difference, and usually the new CE gives a worse estimate than the old CE.
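A worked illustration with assumed numbers: say the Cars table has 1,000,000 rows and each of the three predicates has a selectivity of 0.1.

    Old CE: 0.1 * 0.1 * 0.1                    = 0.001   -> ~1,000 rows
    New CE: 0.1 * SQRT(0.1) * SQRT(SQRT(0.1))
          = 0.1 * 0.3162 * 0.5623              = ~0.0178 -> ~17,800 rows

When the predicates really are independent, as Color is from the other two, the old estimate is the right one and the new CE overshoots by almost 18x.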

