sindhu asked

Is it safe to use a global temporary table in triggers?

I am sending the result set to a global temp table in a trigger like below (the code is in dynamic SQL). My question is: since the global temp table is accessible to all sessions, if the same trigger is fired again, will the old values be deleted from the global temp table and the new values inserted? Or, if the same trigger gets called more than once at the same time (i.e. two simultaneous inserts), is it OK that the same global temp table name is used?

```sql
'select *
into ##pivot
from
(
    select rcet.type, rce.value
    from tablet rcet with (nolock)
        inner join inserted rce
            on rce.reward_cart_extra_type_id = rcet.reward_cart_extra_type_id
    where rcet.company_id = 12
) p_rue_r_a
pivot
(
    max(value) for type in (' + @usercolumns + ')
) p_rue_r_b'
```


@Oleg It's because all your latest answers are too good! :) Good to see you back
@Kev Riley ♦♦ @ThomasRushton ♦♦ Looks like I am still having some trouble with some of my answers. Just answered this one and it went into moderation. Could you please approve it? Thank you.
@Kev Riley ♦♦ Thank you Kevan. It is good to be back.

1 Answer

Oleg answered
No, it is not safe unless the insert/update is wrapped into a transaction. The default isolation level is read committed, so when the second transaction attempts to, say, drop the global temp table while the first transaction is still running, the second transaction will be blocked until the first is committed or rolled back. For example, open 2 SSMS windows and type this code:

Window A:

```sql
begin tran;

if exists
(
    select 1 from tempdb.sys.objects
    where [object_id] = object_id(N'tempdb..##test') and [type] = 'U'
)
    drop table ##test;

create table ##test (ColumnA int not null);
insert into ##test values (1), (2), (3);

-- wait 20 seconds to demonstrate that the rows are still available
-- after that, even though another transaction could have
-- already started
waitfor delay '00:00:20';
select * from ##test; -- this will show 1, 2, 3
drop table ##test;

commit tran;
go
```

Window B:

```sql
begin tran;

-- because this transaction started after the one in window A,
-- it is blocked until the other transaction is completed. This
-- means that the table is already dropped by transaction A
-- by the time this code is allowed to proceed. Thus the table
-- will be created again
if not exists
(
    select 1 from tempdb.sys.objects
    where [object_id] = object_id(N'tempdb..##test') and [type] = 'U'
)
    create table ##test (ColumnA int not null);

-- and it will not have any rows yet
if exists (select 1 from ##test)
    select * from ##test;

-- now it will
insert into ##test values (4), (5), (6);

-- don't wait any time here to demonstrate that even though this transaction
-- is supposed to take a much shorter time to execute, it is going to have
-- to wait until the other transaction is completed (committed or rolled back)
select * from ##test; -- this will show 4, 5, 6
drop table ##test;

commit tran;
```

Start running the script in window A and then quickly start the one in window B.
Even though the window B transaction does not have any explicit delays, it will still take a long time to execute, because the first transaction blocks it and the first transaction has a 20-second delay. After 20 seconds you will see 1, 2, 3 in window A and then immediately 4, 5, 6 in window B. This shows that wrapping the scripts into transactions avoids the problems.

It is a different story if there are no transactions. For example, comment out the ***begin tran*** / ***commit tran*** lines in both windows and then do the same as last time, i.e. start the script in window A and then start the script in window B soon after. This time there are no transactions, so when the script in window A creates the temp table and inserts 1, 2, 3, it is then instructed to wait 20 seconds (just to imitate a long-running script). In the meantime, the script in window B kicks in and happily selects rows 1, 2, 3. After that, the script in window B inserts 3 more rows and then selects 1, 2, 3, 4, 5, 6. Note that this time the second script did not create the table, it just used it as it was created in window A. At this point, the script in window B is complete. The script in window A wakes up after its 20-second wait and then attempts to select rows from the temp table. The temp table is gone though, so you will get an error message reading:

    Msg 208, Level 16, State 0, Line 15
    Invalid object name '##test'.

The bottom line is that transactions exist specifically to address these kinds of problems. If wrapping the insert/update in a transaction is not an option, then you can make the name of the global temp table unique. The only reason you need it to be global is so that it can safely come out of the dynamic script. For example, the line below generates a random number left-padded with zeroes if necessary, to ensure that the result is always 6 digits long. You can use something like this to make the table name unique inside the trigger.
```sql
select replace(str(abs(checksum(newid())) % 100000, 6), ' ', '0');
```

Or even simpler:

```sql
select cast(newid() as varchar(36));
```

Use one of the above as a uniquifier for your table name. Hope this helps.

Oleg
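To show how the uniquifier could be stitched into the dynamic pivot from the question, here is a sketch only: the table and column names come from the question, ***@usercolumns*** is assumed to be built earlier in the trigger, and note one extra wrinkle — the ***inserted*** pseudo-table is not visible inside dynamic SQL, so it is first copied into a local temp table, which the dynamic (child) scope can see.

```sql
-- copy inserted into a local temp table, because the inserted
-- pseudo-table is not visible inside dynamic SQL; a local temp
-- table created here IS visible to the child (dynamic) scope
select * into #ins from inserted;

-- per-execution unique global temp table name, using the
-- 6-digit uniquifier shown above
declare @tableName varchar(50) =
    '##pivot_' + replace(str(abs(checksum(newid())) % 100000, 6), ' ', '0');

declare @sql nvarchar(max) = '
    select *
    into ' + @tableName + '
    from
    (
        select rcet.type, rce.value
        from tablet rcet
            inner join #ins rce
                on rce.reward_cart_extra_type_id = rcet.reward_cart_extra_type_id
        where rcet.company_id = 12
    ) p_rue_r_a
    pivot (max(value) for type in (' + @usercolumns + ')) p_rue_r_b;';

exec (@sql);

-- ... consume the rows from the uniquely named table, then clean up:
exec ('drop table ' + @tableName + ';');
```

Because each firing of the trigger builds its own ##pivot_NNNNNN name, two simultaneous inserts no longer contend for the same global temp table.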

