Hi, I am having a performance issue on an OLTP database: I get deadlocks when I try to insert records into 2 different tables. My query is something like - Insert Into TableA (ColumnA) Select Column1 From TableB WHERE .... My query inserts nearly half a million records every day into 2 tables while users are accessing those tables online. Should I use an **Isolation Level** on my insert query? Or should I use the **WITH (NOLOCK)** hint in the SELECT query? What kind of isolation level should I use? Serializable? Thanks.
Neither. Serializable isolation will hurt your concurrency. You could use NOLOCK on your other selects, but is that acceptable? Reading dirty data? What resource is actually being deadlocked? Is it page resources on the table, or (key) indexes? Another option is to build deadlock-aware application code, so that if the online reads get chosen as the victim, they can re-issue the read and not 'fall over' (a sketch of that follows). Don't try to fix the symptoms, fix the underlying problem.
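A minimal sketch of that retry pattern, assuming a dbo.TableB source and a simple predicate (both placeholders, not the poster's real objects): catch error 1205 (deadlock victim), back off briefly, and re-issue the read.

```sql
-- Deadlock-aware read: retry up to 3 times if chosen as the deadlock victim.
DECLARE @retries int = 3,
        @done    bit = 0;

WHILE @done = 0
BEGIN
    BEGIN TRY
        SELECT Column1
        FROM dbo.TableB
        WHERE Column1 IS NOT NULL;      -- stand-in for the real predicate

        SET @done = 1;                  -- read succeeded, leave the loop
    END TRY
    BEGIN CATCH
        SET @retries -= 1;

        IF ERROR_NUMBER() = 1205 AND @retries > 0
            WAITFOR DELAY '00:00:00.250';  -- deadlock victim: back off, then retry
        ELSE
            THROW;  -- not a deadlock, or retries exhausted (THROW needs SQL 2012+;
                    -- re-raise with RAISERROR on older builds)
    END CATCH
END;
```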
Enable the following two trace flags (either by adding them to the startup options or with a DBCC TRACEON command) to have the deadlocks written to the SQL Server error log, where you'll be able to see everything involved in the deadlocking transactions:

- -T1204 (sources & locks in deadlocks)
- -T1222 (resources & locks in XML format)
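For example, to turn both flags on server-wide from a query window (the -1 argument makes them global rather than session-scoped):

```sql
-- Log deadlock details to the SQL Server error log
DBCC TRACEON (1204, -1);
DBCC TRACEON (1222, -1);

-- Confirm they are active
DBCC TRACESTATUS (1204, 1222);
```

Adding -T1204 and -T1222 to the startup parameters instead keeps them enabled across service restarts.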
Why don't you remove the "users online" part? Make two extra schemas. Instead of updating these tables heavily while they're under concurrent use, make "shadow" tables in another schema (and have a third schema for temporary holding). This allows you to work on tables the users can't see, then switch them in using a simple metadata operation. Then you can do this:

1. Truncate/re-populate the shadow tables (2 minutes, or maybe less with no contention)
2. Start a transaction (sub-millisecond)
3. Move the primary table to the holding schema, using ALTER SCHEMA ... TRANSFER (sub-millisecond)
4. Move the shadow table to the dbo schema (sub-millisecond)
5. Move the primary table to the shadow schema (sub-millisecond)
6. Commit the transaction (sub-millisecond)
7. (Optional) Truncate the shadow table to recover some space (sub-second)

At step 6 the commit may have to wait for current readers. One downside of this solution is that you will have two sets of stats, indexes etc. to maintain. For the stats you should be OK if the data is simply increasing and not changing substantially otherwise. A T-SQL sketch of the swap follows the link below. Adam Haines has a really thorough write-up about this method (which I showed him a few years ago) here:
http://jahaines.blogspot.com/2009/10/locking-table-while-it-is-being-loaded.html (I borrowed my own answer from a recent [StackOverflow question], but I have used this type of solution in many scenarios over the past few years.)
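Something like this, assuming the live table is dbo.TableA, the source is dbo.TableB, and the shadow and holding schemas already exist (all names are placeholders for the poster's real objects):

```sql
-- One-time setup (schemas only need creating once):
--   CREATE SCHEMA shadow;  CREATE SCHEMA holding;

-- Step 1: repopulate the shadow copy with no readers in the way
TRUNCATE TABLE shadow.TableA;

INSERT INTO shadow.TableA (ColumnA)
SELECT Column1
FROM dbo.TableB
WHERE Column1 IS NOT NULL;          -- stand-in for the real predicate

-- Steps 2-6: swap the tables with metadata-only operations
BEGIN TRANSACTION;
    ALTER SCHEMA holding TRANSFER dbo.TableA;      -- live table out of the way
    ALTER SCHEMA dbo     TRANSFER shadow.TableA;   -- freshly loaded table goes live
    ALTER SCHEMA shadow  TRANSFER holding.TableA;  -- old table becomes next shadow
COMMIT TRANSACTION;

-- Step 7 (optional): reclaim space from the now-stale copy
TRUNCATE TABLE shadow.TableA;
```

The readers only ever block for the duration of the three TRANSFER statements inside the transaction, which touch metadata rather than data pages.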