
Leo asked

Deadlock issue, which option shall I use?

Hi, I am having a performance issue on an OLTP database: I get a DEADLOCK when I try to insert records into 2 different tables. My query is something like: Insert Into TableA (ColumnA) Select Column1 From TableB WHERE .... The query inserts nearly half a million records every day into the 2 tables while users are accessing those tables online. Shall I use an **Isolation Level** in my insert query, or shall I use the **WITH (NOLOCK)** hint in the SELECT query? And which isolation level shall I use? Serializable? Thanks.
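For context, a rough sketch of the two options being weighed; the table, column, and filter names below are placeholders standing in for the elided parts of the original query, not a recommendation of either approach:

```sql
-- Option 1: an explicit isolation level on the inserting session
-- (the question mentions SERIALIZABLE)
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
INSERT INTO dbo.TableA (ColumnA)
SELECT Column1
FROM dbo.TableB
WHERE Column1 IS NOT NULL;   -- placeholder for the elided WHERE clause

-- Option 2: a dirty read on the source table via the NOLOCK hint
INSERT INTO dbo.TableA (ColumnA)
SELECT Column1
FROM dbo.TableB WITH (NOLOCK)
WHERE Column1 IS NOT NULL;   -- placeholder for the elided WHERE clause
```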
sql-server-2008, deadlock, locking

Kev Riley answered
Neither. Serializable isolation will hurt your concurrency. You could use NOLOCK on your other selects, but is that acceptable? Reading dirty data? What resource is actually being deadlocked? Is it page resources on the table, or (key) indexes? Another option is to build deadlock-aware application code, so that if the online reads get chosen as the victim, they can re-issue the read and not 'fall over' (see the sketch below). Don't try to fix the symptoms, fix the underlying problem.
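A minimal sketch of the deadlock-aware retry pattern mentioned above, assuming the online read can simply be re-issued; the table, column, and filter are placeholders, not from the question:

```sql
-- Retry a read a few times if it is chosen as the deadlock victim (error 1205).
DECLARE @retries int = 3;
DECLARE @msg nvarchar(2048);

WHILE @retries > 0
BEGIN
    BEGIN TRY
        SELECT Column1
        FROM dbo.TableB
        WHERE Column1 IS NOT NULL;        -- placeholder for the real online read

        BREAK;                            -- read succeeded, leave the loop
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() = 1205 AND @retries > 1
        BEGIN
            SET @retries = @retries - 1;
            WAITFOR DELAY '00:00:00.100'; -- brief back-off before re-issuing the read
        END
        ELSE
        BEGIN
            SET @msg = ERROR_MESSAGE();
            RAISERROR(@msg, 16, 1);       -- anything else (or retries exhausted) is re-raised
            BREAK;
        END
    END CATCH
END
```

The same retry can equally live in the application layer, e.g. catching a SqlException with error number 1205 and re-issuing the query.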
2 comments

Leo commented:
NOLOCK is dangerous for me because our customers are looking at their call data (mobile/landline) and I don't want dirty reads. **How shall I know which resource is deadlocked?** I used Profiler but it can't tell me where it happens. Neither can the Red-Gate monitor; it only gives the stored procedure name.
Kev Riley ♦♦ commented:
As well as the trace flags mentioned by @SirSQL, you'll need to do some reading about interpreting the output for diagnosing and troubleshooting deadlocks. I'll find some links and post them a bit later.
SirSQL answered
Enable the following two trace flags (either by adding them to the startup options or with a DBCC TRACEON command) to have deadlocks written to the SQL error log, where you'll be able to see everything involved in the deadlocking transactions:

-T1204 (resources & locks involved in deadlocks)
-T1222 (resources & locks in xml format)
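For reference, a quick sketch of enabling the flags globally with DBCC (the startup-option route means adding -T1204 and -T1222 to the SQL Server service start parameters instead):

```sql
DBCC TRACEON (1204, -1);   -- deadlock participants and locks, written to the SQL error log
DBCC TRACEON (1222, -1);   -- the same deadlock information in an XML-style format
DBCC TRACESTATUS (-1);     -- confirm which trace flags are currently enabled globally
```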
4 comments

Leo commented:
Thanks to both of you. +1 for you both. Just restarted SQL Server with TRACEON. I'll let you know the result. Thanks.
Leo commented:
@Kev and @SirSQL.... the query completed but I didn't get a deadlock today... maybe not many customers were looking at the table I am inserting into. The trace is ON, but it takes server space and only 700MB is left on the system disk. Is it possible to use another disk instead, or shall I stop the trace flag? Also, I can't find an .xml file under \Microsoft SQL Server\MSSQL.n\MSSQL\LOG\..., only ERRORLOG.txt.
Leo commented:
![alt text][1]

That is what I got in the SQL log... but it didn't give me enough information, even though the trace is ON. Any idea? Thanks

[1]: /upfiles/Log_File_Viewer.jpg
DaniSQL commented:
Go to the LOG folder in the SQL installation folder (usually something like C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG), open the current error log file in Notepad, and refer to these posts to interpret the deadlock graph: http://blogs.msdn.com/b/bartd/archive/2006/09/09/deadlock-troubleshooting_2c00_-part-1.aspx , http://blogs.msdn.com/b/sqlserverstorageengine/archive/2006/06/05/troubleshooting-deadlocks-in-sql2005.aspx
AaronBertrand answered
Why don't you remove the "users online" part? Make two extra schemas. Instead of updating these tables heavily while they're under concurrent use, make "shadow" tables in another schema. (And have a third schema for temporary holding.) This allows you to work on tables the users can't see, then switch them in using simply a metadata operation. Then you can do this (see the sketch after this answer):

1. truncate/re-populate the shadow tables (2 minutes, or maybe less with no contention)
2. start a transaction (sub-millisecond)
3. move the primary table to the holding schema, using ALTER SCHEMA ... TRANSFER (sub-millisecond)
4. move the shadow table to the dbo schema (sub-millisecond)
5. move the primary table to the shadow schema (sub-millisecond)
6. commit the transaction (sub-millisecond)
7. (optional) truncate the shadow table to recover some space (sub-second)

At step 6 the commit may have to wait for current readers. One downside of this solution is that you will have two sets of stats, indexes etc. to maintain. For the stats you should be ok if the data is simply increasing and not changing substantially otherwise.

Adam Haines has a really thorough write-up about this method (which I showed him a few years ago) here: http://jahaines.blogspot.com/2009/10/locking-table-while-it-is-being-loaded.html

(I borrowed my own answer from this recent [StackOverflow question][1] but have used this type of solution in many scenarios over the past few years.)

[1]: http://stackoverflow.com/questions/6549155/sql-server-concurrency-issue/6549396#6549396
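A sketch of steps 2-6 above, assuming the shadow and holding schemas already exist and using a hypothetical table name CallData (the real object names are not given in the question):

```sql
BEGIN TRANSACTION;

    ALTER SCHEMA holding TRANSFER dbo.CallData;      -- step 3: move the live table out of the way
    ALTER SCHEMA dbo     TRANSFER shadow.CallData;   -- step 4: expose the freshly loaded copy
    ALTER SCHEMA shadow  TRANSFER holding.CallData;  -- step 5: the old copy becomes the next shadow

COMMIT TRANSACTION;                                  -- step 6: may briefly wait for current readers

-- step 7 (optional): reclaim space in the now out-of-date shadow copy
TRUNCATE TABLE shadow.CallData;
```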