Phil Factor SQL Speed Phreak Competition: No 1
This competition is now over, but the winner, Peso, got an Amazon voucher for $60 and the privilege of being able to display the 'Phil Factor SQL Speed Phreak' award on his own site.
It was quite a struggle, with close competition from many of those who took part. However, Peso came up with a blindingly fast winner that produced an aggregation from a million rows in a third of a second. Now we're all scratching our heads trying to come up with the fastest way of solving....
(here is the original preamble.)
I really, genuinely don't know the answer to this question: what is the fastest way in SQL Server (any version) to produce the following subscription report?
It is a reasonable request. We have a subscription list with 10,000 subscribers and we need to do a management report that gives the monthly breakdown of the Date, the number of current subscribers at the end of the month, the number of resignations in the month ('Unsubscribes'), and the number of new subscribers. The list should be in date order, and the date should be just the date of the first day of the month.
The table is in this form (simplified from the way we'd do it in a real system of course)
CREATE TABLE [dbo].[Registrations]
  (
  [Registration_ID] [int] IDENTITY(1, 1) NOT NULL,
  [FirstName] [varchar](80) NOT NULL,
  [LastName] [varchar](80) NOT NULL,
  [DateJoined] [datetime] NOT NULL,
  [DateLeft] [datetime] NULL,
  CONSTRAINT [PK_Registrations]
    PRIMARY KEY CLUSTERED ([DateJoined], [LastName], [FirstName])
  )

CREATE INDEX idxDateJoined
  ON Registrations (DateJoined, DateLeft, Registration_ID)
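For readers who want to see the shape of the required output before attempting a fast version, here is a deliberately plain sketch of the report. It is not one of the competition entries, and it assumes SQL Server 2012 or later for the windowed SUM (the competition itself allowed any version); column aliases are my own choices:

```sql
-- Sketch only: monthly new subscribers, unsubscribes, and a running
-- total of current subscribers at the end of each month.
WITH Joins AS (
    SELECT DATEADD(month, DATEDIFF(month, 0, DateJoined), 0) AS TheMonth,
           COUNT(*) AS Subscribed
    FROM dbo.Registrations
    GROUP BY DATEADD(month, DATEDIFF(month, 0, DateJoined), 0)
),
Leaves AS (
    SELECT DATEADD(month, DATEDIFF(month, 0, DateLeft), 0) AS TheMonth,
           COUNT(*) AS Unsubscribed
    FROM dbo.Registrations
    WHERE DateLeft IS NOT NULL
    GROUP BY DATEADD(month, DATEDIFF(month, 0, DateLeft), 0)
)
SELECT j.TheMonth AS [Date],
       SUM(j.Subscribed - COALESCE(l.Unsubscribed, 0))
           OVER (ORDER BY j.TheMonth) AS Subscribers,
       COALESCE(l.Unsubscribed, 0) AS Unsubscribes,
       j.Subscribed AS NewSubscribers
FROM Joins AS j
LEFT JOIN Leaves AS l ON l.TheMonth = j.TheMonth
ORDER BY j.TheMonth;
```

The LEFT JOIN from Joins relies on the stated rule that every month has new subscribers in it; the DATEADD/DATEDIFF pair is the classic idiom for truncating a datetime to the first day of its month.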
You are welcome to change the two indexes to suit your solution. I'll give you a reasonable amount of data to try stuff out on. 10,000 faked data entries of subscribers is the most that I can reasonably ask you to download (the list is here), but I shall be taking each solution, putting it in a test harness consisting of a million subscribers (I may even make it 1,169,187, to celebrate SQLServerCentral's subscription list), and finding the fastest way of doing it. I have ideas of my own about the way to do this, but I suspect they're wrong.
Note that in the sample data, the subscriptions that extend to September 2010 are those people who've paid for a year's subscription only, rather than those who have ongoing renewals (e.g. Direct Debit). 'Now' is the end of September 2009.
I will allow you to use a numbers table, but you can assume that every month has new subscribers in it. You can use views or temporary tables, but the time taken for their creation will be included in the timings. You can use a cursor if you don't mind a sharp intake of breath from Jeff. You can use any version of SQL Server that you like.
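Since a numbers table is explicitly allowed, one hedged sketch of using it to enumerate the report months (the dbo.Numbers table and its Number column are assumptions of mine, not part of the supplied schema):

```sql
-- Generate one row per report month from the data's own date range.
DECLARE @first datetime, @months int;

SELECT @first  = DATEADD(month, DATEDIFF(month, 0, MIN(DateJoined)), 0),
       @months = DATEDIFF(month, MIN(DateJoined), MAX(DateJoined))
FROM dbo.Registrations;

SELECT DATEADD(month, n.Number, @first) AS MonthStart
FROM dbo.Numbers AS n          -- assumed: sequential integers from 0 upward
WHERE n.Number BETWEEN 0 AND @months
ORDER BY MonthStart;
```

With the competition's guarantee that every month contains new subscribers, the month list can equally well be derived from DateJoined alone, which is why most entries skipped the numbers table.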
The winner will be chosen from amongst the tied fastest entrants (generally there is a group of these): the one with the highest number of votes. We'll announce the winner in a week's time, on 19th October.
Who knows, if you are the only person who ever discovers this site and this competition, then the winner will be you!
OK. End of day 1, and we've had some extremely good results (in milliseconds) from the original 10,000-row data table:
Matt 16
Kev Riley 93
Graham 60
Peso 33
Gianluca 16
AndyM 170
Joe Harris 15533
William Brewer 406
I've had to temporarily disqualify Kev, AndyM and Joe Harris, as I couldn't get their results to match the correct values. All the other results agreed. The results for Matt, Peso and Gianluca are all very close indeed, too close to get a result from 10,000 rows, so I'll have to up the test table to a million rows. From experience, I know that a lot could change between now and next Monday. I feel sure that Graham's SQL can be tweaked, and I expect to see the slow ones back in the running with some fine-tuning. Anyone else?
OK. End of day 2, and Peso has streaked ahead with his third version. It is now too fast to measure on 10,000 rows, so I shall have to move to a larger database for the performance tests. Now that I've put result-checking into the test harness, Gianluca has had to be temporarily disqualified until he can cure his result, which is slightly out.
Matt 16 ms
Kev Riley 76 ms *
Graham 60 ms
Peso 30 ms
Gianluca 33 ms *
AndyM 170 ms *
William Brewer 406 ms
Peso 2 16 ms
Peso 3 0 ms
Currently re-creating the test harness! Timings on the 1-million-row table are as follows. Remember that these are preliminary timings, and Peso and I are scratching our heads to work out why we are getting such different results. In the test harness, the results are inserted into a table variable and subsequently checked for validity.

First run:
graham 1 6986 ms
Peso 1 890 ms
Peso 2 1173 ms
Peso 3 1170 ms
Matt 1 596 ms
Matt 2 873 ms
Gianluca 1 4546 ms
Peso 4B 940 ms
Andriy Z 1200 ms *
graham 2 1656 ms

Updated run (entry, elapsed time in milliseconds):
graham 1 6983 ms
Peso 1 890 ms
Peso 2 1186 ms
Peso 3 1173 ms
Matt 1 576 ms
Matt 2 860 ms
Gianluca 1 4550 ms
Peso 4B 936 ms
Peso 4d 830 ms
Peso 4e 313 ms
Andriy Z 1203 ms
Graham 2 1640 ms
Brewer 2 406 ms
Peso 1d 1076 ms
Gustavo 1 580 ms
Gianluca 4 2390 ms
...at the moment I write this, William Brewer's entry (incorporating ideas from Graham, Peso and Matt) seems to be in the lead! (Saturday: it is now Peso 4e.) However, I'll be holding the competition open until Monday evening GMT, to allow for people who didn't hear that the competition was on last Monday.
Just when you thought that some potential winners were emerging, look at the results with the original 10,000-row table. Yes, quite different. (We'd have to use a C# or VB test harness to sort out the speed differences!) One would never be able to pick the fastest from such close competition!
Entry, elapsed time in milliseconds:
graham 1 80
Peso 1 13
Peso 2 16
Peso 3 16
Matt 1 13
Matt 2 33
Gianluca 1 30
Peso 4B 16
Andriy Z 13
Graham 2 16
Brewer 2 30
Peso 1d 33
Gustavo 1 16
Gianluca 4 13
Peso 4d 16
The complete million-row test table is now at http://www.simple-talk.com/blogbits/philf/registrations.zip if you would like to fine-tune your winning entry.
Monday. Here are the final rankings. These are the simple 'elapsed time' measurements; I'll follow this up as soon as possible with the details. Matt has very kindly written a special C# test harness for accurate timings, which I'm dusting off at the moment. I'm planning to do a full write-up on my SSC blog, as there are some important lessons for anyone doing this sort of reporting task. After all, there is a world of difference between the time and CPU loading of the best entrants and the ho-hums. Even the ho-hums are a lot better than some of the production code I've seen.
These timings are very tight, so in all fairness I also have to award Gustavo, as runner-up, the right to proudly display the 'Phil Factor SQL Speed Phreak' award. After all, what is 30 milliseconds in processing a million rows? Peso has to get the title, not only for the best entry (Peso 4E), but for his energy (he was on paternity leave!) and for the way he helped and advised the other entrants. A true champion. Special thanks to Matt for all the advice, and for the test harness.
Peso 4E was such a good entry that it even struck terror into Celko-prizewinner Barry and persuaded him not to enter the competition. However, William and Matt deserve special commendation for their entries, which remain brilliantly fast over a million rows.
Peso 4E 313
Gustavo 3 343
Gustavo 4 346
Brewer 2 423
Peso 1e 470
Peso 1 500
Gustavo 1 563
Matt 1 580
Matt 1B 593
Peso 1d 856
GianLuca Final 856
Peso 4d 856
Peso 1D 860
Matt 2 873
Matt 3 923
Peso 4B 940
Graham 2 1106
Peso 3 1156
Peso 2 1170
Andriy Z 1233 *
Gustavo/Peso 2800
Gianluca 1 4500
graham 1 4656
Remember that the winning routines were calculating aggregate reports on a million-row table in between a third and half a second. These included calculations that you will sometimes read have to be done using a cursor!
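To illustrate the general shape of the cursor-free approach that several entrants converged on (this is a hedged sketch of the technique, not Peso's winning code): first collapse the million rows into one row per month, then compute the running total over that tiny intermediate set, where even a "triangular" correlated subquery is cheap. This works on any version of SQL Server:

```sql
-- Step 1: pre-aggregate joins and leaves to one row per month.
CREATE TABLE #Monthly (
    TheMonth datetime PRIMARY KEY,
    Joined   int NOT NULL,
    [Left]   int NOT NULL
);

INSERT #Monthly (TheMonth, Joined, [Left])
SELECT TheMonth, SUM(J), SUM(L)
FROM (SELECT DATEADD(month, DATEDIFF(month, 0, DateJoined), 0) AS TheMonth,
             1 AS J, 0 AS L
      FROM dbo.Registrations
      UNION ALL
      SELECT DATEADD(month, DATEDIFF(month, 0, DateLeft), 0),
             0, 1
      FROM dbo.Registrations
      WHERE DateLeft IS NOT NULL) AS x
GROUP BY TheMonth;

-- Step 2: running total over the handful of month rows.
-- The correlated subquery is O(n^2) in months, which is trivial here.
SELECT m.TheMonth AS [Date],
       (SELECT SUM(m2.Joined - m2.[Left])
        FROM #Monthly AS m2
        WHERE m2.TheMonth <= m.TheMonth) AS Subscribers,
       m.[Left] AS Unsubscribes,
       m.Joined AS NewSubscribers
FROM #Monthly AS m
ORDER BY m.TheMonth;

DROP TABLE #Monthly;
```

The point of the pattern is that whatever you do in step 2, however naive, hardly matters once step 1 has reduced a million rows to a few dozen; the whole cost is one scan of the base table. (Filtering the report to the period ending September 2009 is left out of this sketch.)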
(* in results means the result needs minor tweaking to make it conform.)
Let's see if we can establish some kind of baseline metrics. This is the sample data I have in my test lab on my laptop.
answered Oct 15, 2009 at 08:55 AM
Couldn't resist submitting a quirky update just to compare results.
answered Oct 16, 2009 at 01:07 PM
Another try, since pre-aggregation proved to be good:
answered Oct 16, 2009 at 08:05 PM
Well, I may be out of time, but it's a last try with just a small improvement... In my desktop tests, though, I couldn't beat Peso's last one.
Edit: included a nonclustered index on DateLeft which, as Peso suggested, helped a little bit. Thanks.
Edit 2: included an indirect pre-aggregation approach that could be faster depending on the machine it runs on.
Using Peso's indirect approach to the pre-aggregation, I got faster results on my notebook, but as he noticed, I might get slower results on some computers:
This is my last entry (I promise!). I can't measure whether it runs fast or slow, because on my laptop ALL the top fast queries take 3.2 seconds to run on the million-row table. It's strange, but it's so. It's a blind shot... I'm using a string to stuff the peopleLeft count without joining. Not very elegant, but it works. It relies on the assumption that nobody can unsubscribe before joining.
answered Oct 19, 2009 at 02:41 PM