Query to compare a row with its next row in a list of 10000 records using a database procedure

Hi All,

Please help me with the scenario below:

I want to use a while loop to fetch between 500 and 1000 records from among the 20000 records of a particular table.
After fetching these records, I want to compare each row to its next row and then calculate a row divider for every 5 rows.
I am puzzled whether I should run my SQL from the C# page by retrieving the data into a DataSet, or handle the same in a database procedure.

Please suggest which means of looping will give better performance: a loop handled in the page, or in a database procedure.

Thanks, Usman, for your comment!
To further explain my query, please find the data below:
I have a table A(id int,name varchar(20),sal int)

data is ->


Likewise I have 20000 records, and I want to fetch around 500 to 1000 records randomly. Then, out of these fetched records, I need to compare every row with its next row and return the row with the maximum salary.

For example:
Considering the sample data above, I want to compare the row with id '1' with the row with id '2' and return the one with the higher sal; likewise I need to compare all rows, pair by pair, up to the last row, the one with id '6'.
That means if I have 6 records, then 3 records will be returned, each holding the maximum salary of its pair.
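To make the pairing rule concrete, here is a small sketch in Python. The sample rows are hypothetical (the question's actual table data did not come through in the post); they only illustrate the "compare each pair, keep the higher salary" idea:

```python
# A sketch of the pairwise-max rule described above. The sample rows
# are made up for illustration (the question's data was not posted).
rows = [
    (1, "aa", 1000),
    (2, "abc", 1456),
    (3, "xx", 1234),
    (4, "yy", 900),
    (5, "oo", 1234),
    (6, "zz", 1100),
]

def pairwise_max(rows):
    """For each consecutive pair (1&2, 3&4, 5&6, ...), keep the row
    with the higher salary (the third field)."""
    winners = []
    for i in range(0, len(rows) - 1, 2):
        pair = rows[i:i + 2]
        winners.append(max(pair, key=lambda r: r[2]))
    return winners

print(pairwise_max(rows))
# -> [(2, 'abc', 1456), (3, 'xx', 1234), (5, 'oo', 1234)]
```

So 6 input rows yield 3 output rows, one per pair, exactly as described.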

o/p ->


For this comparison I was in a fix: should I fetch the records into a DataSet and handle the logic in the ASP.NET C# page, or should I handle it at the database level? Which one is better as far as performance gain is concerned?

It would be great if I could get the logic for the same.

Hope this time I have clarified my query.
Looking forward to your response.


asked Mar 19, 2012 at 10:49 AM in Default

innovator

Generally, looping or iterating row by row is much faster in a programming language than in T-SQL. But if you could provide more details about what you want to achieve, then maybe a T-SQL solution could be an option.
Mar 19, 2012 at 11:02 AM Usman Butt
Awaiting your replies, guys!
Mar 19, 2012 at 04:55 PM innovator

1 answer

It appears that all you need is a select statement which tiles the records you return into groups of 2, and then from every such group selects the one of the two which has the higher salary. In this case, no fancy looping is required: all you really need is to tile your records first, and then number the records partitioned by the tile and ordered by salary descending. You will always get one record from every pair. If there is a pair where both records have the same salary, then you will still get one record (probably the one which comes first per the table's clustered index). Here is the script:

declare @n int;

-- use whatever desired number of records which you fetch, for example,
-- if you select 500 records then your @n will end up equal to 250.
-- in this example, I will just return all rows in the table so the
-- number of the islands will be half of that count
select @n = count(1) / 2 from TableA;

;with islands as
(
    select *, ntile(@n) over (order by id) island
    from TableA
),
records as
(
    select *, row_number() over (partition by island order by sal desc) N
    from islands
)
select id, name, sal
from records
where N = 1;

Based on the sample data in the question, the query above returns the following results:

id    name   sal
----- ------ -----------
2     abc    1456
3     xx     1234
5     oo     1234
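If you would like to sanity-check the tiling logic outside SQL Server, the same windowed query runs in SQLite (3.25 or later) via Python's built-in sqlite3 module. The sample rows below are invented, since the question's data did not come through, but they are chosen so the query reproduces the result set above:

```python
import sqlite3

# Hypothetical sample rows (the question's data was not posted),
# chosen so that the query reproduces the result set shown above.
rows = [(1, "aa", 1000), (2, "abc", 1456), (3, "xx", 1234),
        (4, "yy", 900), (5, "oo", 1234), (6, "zz", 1100)]

con = sqlite3.connect(":memory:")
con.execute("create table TableA (id int, name text, sal int)")
con.executemany("insert into TableA values (?, ?, ?)", rows)

# @n = half the row count, as in the T-SQL script above
n = con.execute("select count(1) / 2 from TableA").fetchone()[0]

# NTILE splits the rows (ordered by id) into n groups of 2;
# ROW_NUMBER then ranks each pair by salary, descending.
query = f"""
with islands as (
    select *, ntile({n}) over (order by id) as island
    from TableA
),
records as (
    select *, row_number() over (partition by island order by sal desc) as N
    from islands
)
select id, name, sal from records where N = 1 order by id
"""
for row in con.execute(query):
    print(row)
# prints (2, 'abc', 1456), (3, 'xx', 1234), (5, 'oo', 1234)
```

SQLite's NTILE and ROW_NUMBER behave the same way as their T-SQL counterparts here, so this is a convenient way to experiment with the query without a SQL Server instance.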

Hope this helps,


answered Mar 19, 2012 at 05:24 PM

Oleg

Thanks a lot, Oleg! Can you throw some light on the other point: whether the same logic could be done in the ASP.NET page to enhance the performance of the application?
Mar 20, 2012 at 02:53 AM innovator

@Oleg +1.

@Innovator The best thing would be to test it. But I have a strong feeling that the T-SQL solution would be better. Instead of sending all rows of the table, you would only be sending half of the rows, with the logic already applied.
Mar 20, 2012 at 06:57 AM Usman Butt
@Usman Butt Thank you. @innovator There is no way that the front-end code can do it considerably faster. You mentioned that your table is very small, as it has only 20K records. I tested the script against one of the mid-size tables in my database (about 200K records, or 10 times as many as yours) and the script returns 100K records in just a little more than 1 second. With a 20K test the execution time is consistently less than 0.1 second, which should be fast enough. A human eye can barely notice a time difference of less than 0.1 second between 2 events anyway, so a query returning results within this time frame should be considered as offering acceptable performance. Additionally, as Usman already noted, it is typically a good idea to return only as many records to the front-end code as needed, and no more, in order to reduce needless traffic; network latency is important when considering overall performance. With front-end processing you either have to maintain 2 streams (one to read and one to write) or use a DataSet, which is inherently very heavy, so it would be hard to beat the 0.1 second performance offered by plain T-SQL.
Mar 20, 2012 at 02:54 PM Oleg
Thanks a ton, Oleg and Usman, for your precious time and effort!
Mar 20, 2012 at 03:31 PM innovator



