I made a tabulation software where a maximum of 20 judges will simultaneously access my SQL table.
INFO 1: In order to let them change records at the same time, I assigned each judge their own row.
INFO 2: ROWS = Max of 20, columns can be 20 to 300, depending on the event.
INFO 3: ONE ROW = ONE JUDGE
Question 1: Would it be possible to use the same row but save to different columns at the same time? Or was my approach already the best/easiest way?
Question 2: Would there be a difference in performance if I have say, 50 columns and 300 columns?
Question 3: Just around how much bandwidth am I using every time I update my table adapter with 20 rows and 100 columns? It's a total of 2000 cells, each an nvarchar(50)...
I'm using MSSQL Server 2014, BTW.
asked Dec 01, 2014 at 06:28 AM in Default
I would absolutely suggest tossing this single-table design and going with data normalization. You're attempting to control locking and blocking by building out a large table and isolating data access, but that's the wrong way to go about it. You're fighting the data management engine rather than understanding it and working with it.
answered Dec 01, 2014 at 10:23 AM
Grant Fritchey ♦♦
It sounds like the data model is not quite right.
What do the columns represent? Why/How can they vary from 20 to 300? It would be better if you could define a data model that doesn't need to be dynamic.
answered Dec 01, 2014 at 08:35 AM
Kev Riley ♦♦
Why not invert your model to use rows instead of columns? Add a new row each time there's a new judge. This translates into far fewer DDL queries; you're currently changing the table structure far too often for comfort, and if your program is scaled at all you will encounter a tremendous workload that would otherwise be avoidable.
If the number of columns in your model varies, then you are going to take a schema modification lock each time you change it: http://msdn.microsoft.com/en-us/library/ms190273.aspx
Help us understand WHY you wish to do it this way versus using row-level locking. SQL Server 2014 has greatly improved in-memory performance that you might be missing out on. If you honestly seek to avoid lock contention, then you might encounter the opposite of the desired result if you run many DDL queries against the table in any kind of OLTP environment.
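To make the contrast concrete, here is a sketch of the two approaches side by side (the table and column names here are invented for illustration, not taken from your design):

```sql
-- Column-per-score design: every new scoring criterion means DDL,
-- which takes a schema modification (Sch-M) lock on the whole table
-- and blocks all other access to it while it runs:
ALTER TABLE dbo.Tabulation
    ADD Criterion_21 nvarchar(50) NULL;

-- Row-per-score design: new data is just an INSERT, which takes
-- ordinary row-level locks and coexists with other judges' writes:
INSERT INTO dbo.Score (EventID, ContestantID, JudgeID, ScoreValue)
VALUES (1, 42, 7, 9.50);
```

The first statement is a structural change the engine must serialize; the second is the kind of concurrent workload SQL Server is built to handle.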
Let's presume you're talking about an Olympics-style event (judges) where there are many entries into each contestant's scores. If many updates occur in a short timeframe, all at the end of one contestant's performance, then yes, you might be dealing with lock contention on one row at a time. Per Grant's recommendation, I concur that you should normalize your data model further, so that you have one table (Judge) containing a new row for every judge with a unique identity, another table (Contestant) containing a new row for every participant in the events, yet another table (Event) containing a unique row for every event, and so on.
Thereafter, I would recommend having each judge update scores in this manner, where you spread your lock-contention risk over at least three tables. That only puts you at risk of contention when two judges attempt to change the same score row at the same time (unlikely, unless someone's cheating your system).
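As a rough sketch of that normalized model (the table names, columns, and data types below are assumptions to illustrate the shape, not a prescription):

```sql
CREATE TABLE dbo.Judge (
    JudgeID      int IDENTITY(1,1) PRIMARY KEY,
    JudgeName    nvarchar(100) NOT NULL
);

CREATE TABLE dbo.Contestant (
    ContestantID int IDENTITY(1,1) PRIMARY KEY,
    ContestantName nvarchar(100) NOT NULL
);

CREATE TABLE dbo.Event (
    EventID      int IDENTITY(1,1) PRIMARY KEY,
    EventName    nvarchar(100) NOT NULL
);

-- One row per (event, contestant, judge) score. Each judge's save
-- touches only that judge's own row, so row-level locking does the
-- concurrency work for you; no DDL is ever needed per event.
CREATE TABLE dbo.Score (
    EventID      int NOT NULL REFERENCES dbo.Event (EventID),
    ContestantID int NOT NULL REFERENCES dbo.Contestant (ContestantID),
    JudgeID      int NOT NULL REFERENCES dbo.Judge (JudgeID),
    ScoreValue   decimal(5, 2) NULL,
    PRIMARY KEY (EventID, ContestantID, JudgeID)
);

-- Each judge then saves a score with a single-row update:
UPDATE dbo.Score
SET ScoreValue = 9.50
WHERE EventID = 1 AND ContestantID = 42 AND JudgeID = 7;
```

With this shape, "20 to 300 columns depending on the event" simply becomes 20 to 300 rows in Score, and 20 judges writing simultaneously each hit different rows.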
I hope that expanded explanation meets your goals and further explains Grant's direct answer so you can understand why your design is not desirable. If we are misunderstanding your goal then please update so we can be more specific.
answered Dec 01, 2014 at 07:39 PM