The 'Subscription List' SQL Problem

Phil Factor SQL Speed Phreak Competition: No 1

This competition is now over, but the winner, Peso, got an Amazon voucher for $60, and the privilege of being able to display the 'Phil Factor SQL Speed Phreak' award on their own site.

It was quite a struggle with some close competition from many of those who participated in this competition. However, Peso came up with a blindingly fast winner that produced an aggregation from a million rows in a third of a second. Now, we're all scratching our heads trying to come up with the fastest way of solving....

The ‘FIFO Stock Inventory’ SQL Problem

...on this site


(Here is the original preamble.)

I really genuinely don't know the answer to this question: what is the fastest way, in SQL Server (any version), to produce the following subscription report?

It is a reasonable request. We have a subscription list with 10,000 subscribers, and we need to produce a management report giving a monthly breakdown: the date (just the first day of the month), the number of current subscribers at the end of the month, the number of resignations in the month ('unsubscribes'), and the number of new subscribers. The list should be in date order.

The table is in this form (simplified, of course, from the way we'd do it in a real system):

CREATE TABLE [dbo].[Registrations]
    (
     [Registration_ID] [int] IDENTITY(1, 1)
                             NOT NULL,
     [FirstName] [varchar](80) NOT NULL,
     [LastName] [varchar](80) NOT NULL,
     [DateJoined] [datetime] NOT NULL,
     [DateLeft] [datetime] NULL,
     CONSTRAINT [PK_Registrations] PRIMARY KEY CLUSTERED 
    	([DateJoined], [LastName], [FirstName])
    )
CREATE INDEX idxDateJoined 
    ON Registrations (DateJoined, DateLeft, Registration_ID)

You are welcome to change the two indexes to suit your solution. I'll give you a reasonable amount of data to try stuff out on: 10,000 faked subscriber entries is the most that I can reasonably ask you to download (the list is here), but I shall be taking each solution and putting it into a test harness consisting of a million subscribers (I may even make it 1,169,187 to celebrate SQLServerCentral's subscription list) to find the fastest way of doing it. I have ideas of my own on the way to do this, but I suspect they're wrong.

Note that in the sample data, the subscriptions that extend to September 2010 are those people who've paid for a year's subscription only, rather than those with ongoing renewals (e.g. Direct Debit). 'Now' is the end of September 2009.

I will allow you to use a number table, but you can assume that every month has new subscribers in it. You can use views or temporary tables, but the time taken for their creation will be included in the timings. You can use a cursor if you don't mind a sharp intake of breath from Jeff. You can use any version of SQL Server that you like.
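For anyone who wants a correctness reference before optimising, here is a minimal, deliberately naive baseline (my sketch, not a competition entry, assuming the Registrations table above). It produces the required report shape; its correlated subqueries do quadratic work, so the interest of the competition is in beating it.

-- Bucket joiners and leavers by month, then use correlated subqueries
-- for the running subscriber count. Months with leavers only (the future
-- expirations) drop out because we drive from the months people joined.
WITH Joined AS
(
    SELECT DATEDIFF(MONTH, 0, DateJoined) AS theMonth, COUNT(*) AS PeopleJoined
    FROM dbo.Registrations
    GROUP BY DATEDIFF(MONTH, 0, DateJoined)
),
Gone AS
(
    SELECT DATEDIFF(MONTH, 0, DateLeft) AS theMonth, COUNT(*) AS PeopleLeft
    FROM dbo.Registrations
    WHERE DateLeft IS NOT NULL
    GROUP BY DATEDIFF(MONTH, 0, DateLeft)
)
SELECT  DATEADD(MONTH, j.theMonth, 0) AS [Date],
        j.PeopleJoined AS NewSubscribers,
        COALESCE(g.PeopleLeft, 0) AS Unsubscribes,
        (SELECT SUM(j2.PeopleJoined) FROM Joined AS j2 WHERE j2.theMonth <= j.theMonth)
        - COALESCE((SELECT SUM(g2.PeopleLeft) FROM Gone AS g2 WHERE g2.theMonth <= j.theMonth), 0) AS Subscribers
FROM    Joined AS j
LEFT JOIN Gone AS g ON g.theMonth = j.theMonth
ORDER BY j.theMonth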

The winner will be chosen from amongst the tied fastest entrants (generally there is a group of these) as the one with the highest number of votes. We'll announce the winner in a week's time, on 19th October.

Who knows, if you are the only person who ever discovers this site and this competition, then the winner will be you!


OK. End of day 1, and we've had some extremely good results (in milliseconds) from the original 10,000-row data table:

Matt                 16 ms
Kev Riley            93 ms
Graham               60 ms
Peso                 33 ms
Gianluca             16 ms
AndyM               170 ms
Joe Harris        15533 ms
William Brewer      406 ms

I've had to temporarily disqualify Kev, AndyM and Joe Harris, as I couldn't get their results to match the correct values. All the other results agreed. The results for Matt, Peso and Gianluca are all very close indeed, too close to separate on 10,000 rows, so I'll have to up the test table to a million rows. From experience, I know that a lot could change between now and next Monday. I feel sure that Graham's SQL can be tweaked, and I expect to see the slow ones back in the running with some fine-tuning. Anyone else?


OK. End of Day 2, and Peso has streaked ahead with his third version. It is now too fast to measure on 10,000 rows, so I shall have to move to a larger database for the performance tests. Now that I've put result-checking into the test harness, Gianluca has had to be temporarily disqualified until he can cure his result, which is slightly out.
Matt                 16 ms          
Kev Riley            76 ms *         
Graham               60 ms         
Peso                 30 ms         
Gianluca             33 ms *         
AndyM                170 ms *       
William brewer       406 ms        
Peso 2               16 ms         
Peso 3               0 ms           
(* result of the SQL is incorrect)

Day 3. Due to a couple of glitches I haven't been able to complete all the timings on the million-row table. Here is what I've done so far!
graham 1        6986ms  
Peso 1           890ms  
Peso 2          1173ms  
Peso 3          1170ms  
Matt 1           596ms  
Matt 2           873ms  
Gianluca 1      4546ms  
Peso 4B          940ms  
Andriy Z        1200ms *
graham 2        1656ms  


Day 5

Currently re-creating the test harness! Timings on the 1-million-row table are as follows. Remember that these are preliminary timings, and Peso and I are scratching our heads trying to see why we are getting such different results. In the test harness, the results are inserted into a table variable and subsequently checked for validity.
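For context, the harness is essentially this shape (my reconstruction, not Phil's actual code; dbo.CandidateSolution and dbo.ReferenceResults are hypothetical names):

-- Capture an entry's output into a table variable, time it, and then
-- compare against a known-good reference result.
DECLARE @Results TABLE
    (theMonth DATETIME, PeopleJoined INT, PeopleLeft INT, Subscribers INT)

DECLARE @start DATETIME2 = SYSDATETIME()

INSERT @Results (theMonth, PeopleJoined, PeopleLeft, Subscribers)
EXEC dbo.CandidateSolution            -- hypothetical wrapper around an entry

SELECT DATEDIFF(MILLISECOND, @start, SYSDATETIME()) AS ElapsedMs

-- Any row differing from the reference disqualifies the entry
SELECT *
FROM @Results AS r
FULL JOIN dbo.ReferenceResults AS x   -- hypothetical known-good table
    ON x.theMonth = r.theMonth
WHERE r.theMonth IS NULL OR x.theMonth IS NULL
   OR r.PeopleJoined <> x.PeopleJoined
   OR r.PeopleLeft <> x.PeopleLeft
   OR r.Subscribers <> x.Subscribers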

Entry       Elapsed time in milliseconds
graham 1    6983 ms
Peso 1       890 ms
Peso 2      1186 ms
Peso 3      1173 ms
Matt 1       576 ms
Matt 2       860 ms
Gianluca 1  4550 ms
Peso 4B      936 ms
Peso 4d      830 ms
Peso 4e      313 ms
Andriy Z    1203 ms
Graham 2    1640 ms
Brewer 2     406 ms
Peso 1d     1076 ms
Gustavo 1    580 ms
Gianluca 4  2390 ms

At the moment I write this, William Brewer's entry (incorporating ideas from Graham, Peso and Matt) seems to be in the lead! (Saturday: it is now Peso 4e.) However, I'll be holding the competition open until Monday evening GMT, to allow for people who didn't hear last Monday that the competition was on.

Just when you thought that some potential winners were emerging, look at the results with the original 10,000-row table. Yes, quite different. (We'd have to use a C# or VB test harness to sort out the speed differences!) One would never be able to pick the fastest from such close competition!

Entry       Elapsed time in milliseconds
graham 1     80 ms
Peso 1       13 ms
Peso 2       16 ms
Peso 3       16 ms
Matt 1       13 ms
Matt 2       33 ms
Gianluca 1   30 ms
Peso 4B      16 ms
Andriy Z     13 ms
Graham 2     16 ms
Brewer 2     30 ms
Peso 1d      33 ms
Gustavo 1    16 ms
Gianluca 4   13 ms
Peso 4d      16 ms

The complete million-row test table is now at http://www.simple-talk.com/blogbits/philf/registrations.zip if you would like to fine-tune your winning entry.


Monday. Here are the final rankings. These are the simple 'elapsed time' measurements; I'll follow this up as soon as possible with the details. Matt has very kindly written a special C# test harness for accurate timings that I'm dusting off at the moment. I'm planning to do a full write-up on my SSC blog, as there are some important lessons for anyone doing this sort of reporting task. After all, there is a world of difference between the time and CPU loading of the best entrants and the ho-hums. Even the ho-hums are a lot better than some of the production code I've seen.

These timings are very tight, so in all fairness I also have to award Gustavo, as runner-up, the right to proudly display the 'Phil Factor SQL Speed Phreak' award. After all, what is 30 milliseconds in processing a million rows? Peso has to get the title, not only for the best entry (Peso 4E), but for his energy (he was on paternity leave!) and for the way he helped and advised the other entrants. A true champion. Special thanks to Matt for all the advice, and the test harness.

Peso 4E was such a good entry that it even struck terror into Celko-prizewinner Barry and persuaded him not to enter the competition. However, William and Matt deserve special commendation for their entries, which remain brilliantly fast over a million rows.

Peso 4E       313
Gustavo 3     343
Gustavo 4     346
Brewer 2      423
Peso 1e       470
Peso 1        500
Gustavo 1     563
Matt 1        580
Matt 1B       593
Peso 1d       856
GianLuca Final 856
Peso 4d       856
Peso 1D       860
Matt 2        873
Matt 3        923
Peso 4B       940
Graham 2      1106
Peso 3        1156
Peso 2        1170
Andriy Z      1233 *
Gustavo/peso  2800
Gianluca 1    4500
graham 1      4656

Remember that the winning routines were calculating aggregate reports on a million-row table in between a third and half a second. That includes calculations that you will sometimes read have to be done using a cursor!

(* in results means the result needs minor tweaking to make it conform.)

asked Oct 11 '09 at 06:51 PM
Phil Factor

Phil: Do we have to account for those expirations in the future or not? As you said that we could assume that every month has new subscribers, it seems to me that we do not (because those future months have no new subscribers yet)?
Oct 13 '09 at 07:58 PM RBarryYoung
Hmm, something odd here, I'm getting results of ~30ms for Peso #3.
Oct 13 '09 at 09:58 PM RBarryYoung
I can't really compare my results, because I didn't even run it with 10K rows - I went straight to a million... Are you doing the same, Phil?
Oct 14 '09 at 05:58 AM Matt Whitfield ♦♦

I especially liked this competition because there was no template for a solution! And I personally would like to keep these competitions this way: real-world problems that everyone might encounter one day.

The first competition (Celko's Prime number) already had existing algorithms so that competition was just about adapting them to T-SQL. Not so much fun even if it was educational. Congrats again Barry!

The second competition (Celko's Data Warehouse) was more fun because there was no given solution, and was a real world problem. Congrats again Gianluca!

What do you think?
Oct 21 '09 at 08:59 AM Peso
Yes. I've just spoken to Richard of Red-Gate Marketing, who are our sponsors, and he's agreed to keep the prize money going. I'm tied up for the next fortnight, but I'm happy to advise on tests. (I'm taking a week off next week to stare at seal colonies through telescopes), then I'll be at PASS staring at Brent, and various MVPs, through binoculars. The competition will be in very safe hands with Peso!
Oct 21 '09 at 10:11 AM Phil Factor

35 answers
/*******************************************************************************
    Peso 4d - 20091015
*******************************************************************************/

-- Step 1 - Create an intermediate staging table
-- Table variable will not do (even with no logging) because table variables
-- cannot benefit from parallelism. I am using a temp table for the 103 months
-- of records between Mar 2001 and Sep 2009, with each record being 14 bytes
CREATE TABLE    #Stage
    	(
    		theMonth SMALLINT NOT NULL,
    		PeopleJoined INT NOT NULL,
    		PeopleLeft INT NOT NULL,
    		Subscribers INT NOT NULL
    	)

-- Step 2 - Populate the staging table
INSERT  	#Stage
    	(
    		theMonth,
    		PeopleJoined,
    		PeopleLeft,
    		Subscribers
    	)
SELECT  	u.theMonth,
        	-- Old school pivoting is slightly more efficient than PIVOT
        	-- It also gives us the ability to return 0 instead of NULL
    	SUM(CASE WHEN u.theCol = 'DateJoined' THEN u.Registrations ELSE 0 END) AS PeopleJoined,
    	SUM(CASE WHEN u.theCol = 'DateLeft' THEN u.Registrations ELSE 0 END) AS PeopleLeft,
    	0 AS Subscribers
FROM    	(
    		-- Do the full aggregation with final key before PIVOT
    		SELECT		DATEDIFF(MONTH, 0, DateJoined) AS DateJoined,
    				DATEDIFF(MONTH, 0, DateLeft) AS DateLeft,
    				SUM(Registrations) AS Registrations
    		FROM		(
    		    			-- Do some heavy-lifting pre-aggregation
    		    			-- It is better to UNPIVOT about 3,450 records (number of days since March 2001, plus 10% drop-offs)
        					-- than 1,169,187 records directly (average of 373 registrations per day)
    					SELECT		DateJoined,
    							DateLeft,
    							COUNT(*) AS Registrations
    					FROM		dbo.Registrations
    					GROUP BY	DateJoined,
    							DateLeft
    				) AS d
    		GROUP BY	DATEDIFF(MONTH, 0, DateJoined),
    				DATEDIFF(MONTH, 0, DateLeft)
    	) AS d
UNPIVOT 	(
    		theMonth
    		FOR theCol IN (d.DateJoined, d.DateLeft)
    	) AS u
GROUP BY    u.theMonth
    	-- Exclude those records for months not having any new subscribers
HAVING  	SUM(CASE WHEN u.theCol = 'DateJoined' THEN u.Registrations ELSE 0 END) > 0

-- Prepare running total
DECLARE @Subscribers INT = 0

-- Set up and prepare an ordered update CTE
;WITH Yak (theMonth, PeopleJoined, PeopleLeft, Subscribers)
AS (
    SELECT		TOP 2147483647
    		DATEADD(MONTH, theMonth, 0) AS theMonth,
    		PeopleJoined,
    		PeopleLeft,
    		Subscribers
    FROM		#Stage
    ORDER BY	theMonth
)

-- Step 3 - Do both the running total and the result output
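-- The multiple assignment "SET @variable = column = expression" below is
-- the so-called "quirky update": it accumulates the running total in a
-- single pass, and the OUTPUT clause returns the updated rows so no
-- separate SELECT is needed. The TOP 2147483647 ... ORDER BY in the CTE
-- above is there to coax the rows into being processed in month order;
-- note that UPDATE order is not documented or guaranteed by SQL Server.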
UPDATE  Yak
SET @Subscribers = Subscribers = @Subscribers + PeopleJoined - PeopleLeft
OUTPUT  inserted.theMonth,
    inserted.PeopleJoined,
    inserted.PeopleLeft,
    inserted.Subscribers


/*******************************************************************************
    Peso 4e - 20091017
*******************************************************************************/

CREATE TABLE    #Stage
    (
        theMonth SMALLINT NOT NULL,
        PeopleJoined INT NOT NULL,
        PeopleLeft INT NOT NULL,
        Subscribers INT NOT NULL
    )

INSERT      #Stage
    (
        theMonth,
        PeopleJoined,
        PeopleLeft,
        Subscribers
    )
SELECT      u.theMonth,
            SUM(CASE WHEN u.theCol = 'DateJoined' THEN u.Registrations ELSE 0 END) AS PeopleJoined,
            SUM(CASE WHEN u.theCol = 'DateLeft' THEN u.Registrations ELSE 0 END) AS PeopleLeft,
            0 AS Subscribers
FROM        (
                SELECT      DATEDIFF(MONTH, 0, DateJoined) AS DateJoined,
                            DATEDIFF(MONTH, 0, DateLeft) AS DateLeft,
                            COUNT(*) AS Registrations
                FROM        dbo.Registrations2
                GROUP BY    DATEDIFF(MONTH, 0, DateJoined),
                            DATEDIFF(MONTH, 0, DateLeft)
            ) AS d
UNPIVOT     (
                theMonth
                FOR theCol IN (d.DateJoined, d.DateLeft)
            ) AS u
GROUP BY    u.theMonth
HAVING      SUM(CASE WHEN u.theCol = 'DateJoined' THEN u.Registrations ELSE 0 END) > 0

DECLARE @Subscribers INT = 0

;WITH Yak (theMonth, PeopleJoined, PeopleLeft, Subscribers)
AS (
    SELECT      TOP 2147483647
                DATEADD(MONTH, theMonth, 0) AS theMonth,
                PeopleJoined,
                PeopleLeft,
                Subscribers
    FROM        #Stage
    ORDER BY    theMonth
)
UPDATE  Yak
SET     @Subscribers = Subscribers = @Subscribers + PeopleJoined - PeopleLeft
OUTPUT  inserted.theMonth,
        inserted.PeopleJoined,
        inserted.PeopleLeft,
        inserted.Subscribers
answered Oct 13 '09 at 05:52 PM
Peso

Edit the CTE and add this WHERE clause

WHERE PeopleJoined > 0

To only get those months where people have joined.
Oct 14 '09 at 10:46 AM Peso
Peso, it is great to have the explanation in the comments. I loved the nice example of a use for the OUTPUT Clause
Oct 14 '09 at 10:49 AM Phil Factor
It is giving months in the future with me
Oct 14 '09 at 10:50 AM Phil Factor
Actually, with the peopleJoined>0 in place it would work because I've already said that there is no month without anyone joining.
Oct 14 '09 at 10:53 AM Phil Factor
Un-freakin-believable Peso! This proc is incredibly fast.
Oct 14 '09 at 11:24 AM RBarryYoung

My solution ran in 3 ms vs 17 ms. I noticed that the data was missing after 2009-09-01 as well. (Note that this uses a windowed SUM() OVER (ORDER BY ...) running aggregate, which requires SQL Server 2012 or later.)

declare @t1 datetime2
declare @t2 datetime2
set @t1 = getdate()

select *, sum(joined_t - left_t) over (order by yr, mth rows between unbounded preceding and current row) as tot_subscribers
from
(
 select coalesce(j.yr, l.yr) as yr, coalesce(j.mth, l.mth) as mth, coalesce(j.joined_total, 0) as joined_t, coalesce(l.left_total, 0) as left_t
 from
 (
 select year(datejoined) as yr, month(datejoined) as mth, count(1) as joined_total from
 registrations where datejoined is not null
 group by year(datejoined), month(datejoined)
 ) AS j
 full outer join 
 (
 select year(dateleft) as yr, month(dateleft) as mth, count(1) as left_total from
 registrations where dateleft is not null
 group by year(dateleft), month(dateleft)
 ) AS L
 on j.yr = l.yr and j.mth = l.mth
) AS A
order by yr, mth

set @t2 = getdate()
select DATEDIFF(MILLISECOND, @t1, @t2)
answered Jan 26 '13 at 02:36 PM
yswai1986

Posted on Kathi's blog, but worth contributing here...

Using a bit of abstraction, by adding two computed columns (you'll want to index those two columns if testing the DDL/DML below), and without the 'Quirky Update', this runs about 2.5x faster on my workstation than Peso's solution (75-80 ms vs 180 ms). There is also a DML option using the Quirky Update, which improves the speed to about 65-70 ms (roughly 3x faster). While I don't know whether adding the computed columns to abstract the YrMo was allowed in the competition, it's difficult to argue with the results, not to mention the simpler code. :)

Les Cardwell


Add computed columns...

[YrMoJoined]  AS (CONVERT([int],CONVERT([varchar](4),datepart(year,[DateJoined]),(0))+case when datepart(month,[DateJoined])<(10) then '0'+CONVERT([char](2),datepart(month,[DateJoined]),(0)) else CONVERT([char](2),datepart(month,[DateJoined]),(0)) end,(0)))

[YrMoLeft]  AS (CONVERT([int],CONVERT([varchar](4),datepart(year,[DateLeft]),(0))+case when datepart(month,[DateLeft])<(10) then '0'+CONVERT([char](2),datepart(month,[DateLeft]),(0)) else CONVERT([char](2),datepart(month,[DateLeft]),(0)) end,(0)))

Add an index to each of those two columns.
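As complete statements, the DDL implied above looks something like this (my wrapping, not Les's exact script; YEAR(d) * 100 + MONTH(d) yields the same yyyymm integer as the string-concatenation expressions quoted above):

ALTER TABLE dbo.Registrations ADD
    YrMoJoined AS (YEAR(DateJoined) * 100 + MONTH(DateJoined)) PERSISTED,
    YrMoLeft   AS (YEAR(DateLeft) * 100 + MONTH(DateLeft)) PERSISTED   -- NULL while still subscribed

CREATE INDEX idxYrMoJoined ON dbo.Registrations (YrMoJoined)
CREATE INDEX idxYrMoLeft ON dbo.Registrations (YrMoLeft)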


------Standard Solution (75-80ms for 1mill rows)-------

DECLARE @begTime DATETIME

SELECT @begTime = GETDATE()

CREATE TABLE #subscriptions
    (YrMo int,
     Subscribed int,
     UnSubscribed int,
     Subscribers int
    )

INSERT INTO #subscriptions
SELECT TOP (100) PERCENT YrMoJoined AS YrMo, 
        COUNT(*), 
        (SELECT COUNT(*) FROM dbo.Registrations AS R2 WHERE YrMoLeft = R1.YrMoJoined),
        0
FROM  dbo.Registrations AS R1
GROUP BY R1.YrMoJoined
ORDER BY R1.YrMoJoined


SELECT  YrMo,
        Subscribed,
        UnSubscribed,
        (   SELECT SUM(S2.Subscribed)- SUM(S2.UnSubscribed) 
            FROM #subscriptions S2 
            WHERE S2.YrMo <= #subscriptions.YrMo
        ) AS Subscribers        
FROM #subscriptions
ORDER BY YrMo

DROP TABLE #subscriptions

SELECT @begTime, GETDATE(), DATEDIFF(ms,@begTime,GETDATE())

----------Quirky Update Option (65-70ms for 1mill rows)---------

DECLARE @begTime DATETIME

SELECT @begTime = GETDATE()

CREATE TABLE #subscriptions
    (YrMo int,
     Subscribed int,
     UnSubscribed int,
     Subscribers int
    )

INSERT INTO #subscriptions
SELECT TOP (100) PERCENT YrMoJoined AS YrMo,
        COUNT(*),
        (SELECT COUNT(*) FROM dbo.Registrations AS R2 WHERE YrMoLeft = R1.YrMoJoined),
        0
FROM  dbo.Registrations AS R1
GROUP BY R1.YrMoJoined
ORDER BY R1.YrMoJoined

DECLARE @subscribers INT
SET @subscribers = 0

UPDATE #subscriptions
SET @subscribers = Subscribers = @subscribers + Subscribed - UnSubscribed
FROM #subscriptions S2

SELECT YrMo, Subscribed, UnSubscribed, Subscribers
FROM #subscriptions
ORDER BY YrMo

DROP TABLE #subscriptions

SELECT @begTime, GETDATE(), DATEDIFF(ms,@begTime,GETDATE())
answered Feb 24 '10 at 06:45 PM
Les Cardwell

Heh... like I said, Peter... try running the Triangular Join on the million rows instead of just the final result set. The Triangular Join in conjunction with aggregations can be so crippling to a server that I won't even suggest it as a coded answer anymore, for fear that someone might try it on a large rowset. Of course, you already knew that. The real magic is your original aggregation method... doing a running balance on 69 to 200-odd rows won't show the danger of any method... not even the Triangular Join.
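For anyone unfamiliar with the term: a 'triangular join' is the self-join pattern behind subquery-style running totals; each row's correlated subquery re-scans every earlier row, so n rows cost roughly n*(n+1)/2 row visits. A minimal sketch of the pattern (my illustration, not Jeff's code), harmless on a 103-row staging table but crippling against the million-row base table:

SELECT  s1.theMonth,
        (SELECT SUM(s2.PeopleJoined - s2.PeopleLeft)
         FROM   #Stage AS s2
         WHERE  s2.theMonth <= s1.theMonth) AS Subscribers
FROM    #Stage AS s1
ORDER BY s1.theMonth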

answered Feb 20 '10 at 12:55 AM
Jeff Moden

Jeff and all, here is a CURSOR-based solution which is all 100% "approved". It runs in about 550 ms on the million record sample set.

CREATE TABLE    #Stage
    	(
    		theMonth DATETIME NOT NULL,
    		PeopleJoined INT NOT NULL,
    		PeopleLeft INT NOT NULL,
    		Subscribers INT NOT NULL
    	)

DECLARE curYak CURSOR LOCAL FORWARD_ONLY READ_ONLY FOR
    	SELECT		u.theMonth,
    			SUM(CASE WHEN u.theCol = 'DateJoined' THEN u.Registrations ELSE 0 END) AS PeopleJoined,
    			SUM(CASE WHEN u.theCol = 'DateLeft' THEN u.Registrations ELSE 0 END) AS PeopleLeft
    	FROM		(               
    				SELECT		DATEDIFF(MONTH, 0, DateJoined) AS DateJoined,
    						DATEDIFF(MONTH, 0, DateLeft) AS DateLeft,
    						COUNT(*) AS Registrations
    				FROM		dbo.Registrations
    				GROUP BY	DATEDIFF(MONTH, 0, DateJoined),
    						DATEDIFF(MONTH, 0, DateLeft)
    			) AS d
    	UNPIVOT		(
    				theMonth
    				FOR theCol IN (d.DateJoined, d.DateLeft)
    			) AS u
    	GROUP BY	u.theMonth
    	HAVING		SUM(CASE WHEN u.theCol = 'DateJoined' THEN u.Registrations ELSE 0 END) > 0
    	ORDER BY	u.theMonth

DECLARE @Month INT,
    @Joined INT,
    @Left INT,
    @Subscribers INT = 0

OPEN    curYak

FETCH   NEXT
FROM    curYak
INTO    @Month,
    @Joined,
    @Left

WHILE @@FETCH_STATUS = 0
    BEGIN
    	SET	@Subscribers += @Joined - @Left

    	INSERT	#Stage
    		(
    			theMonth,
    			PeopleJoined,
    			PeopleLeft,
    			Subscribers
    		)
    	VALUES	(
    			DATEADD(MONTH, @Month, 0),
    			@Joined,
    			@Left,
    			@Subscribers
    		)

    	FETCH	NEXT
    	FROM	curYak
    	INTO	@Month,
    		@Joined,
    		@Left
    END

CLOSE   	curYak
DEALLOCATE  curYak

SELECT  	theMonth,
    	PeopleJoined,
    	PeopleLeft,
    	Subscribers
FROM    	#Stage
ORDER BY    theMonth

DROP TABLE  #Stage

And here is a "triangular join" solution which also is 100% "approved". It runs in about 450 ms on the million record sample set.

CREATE TABLE    #Stage
    (
        theMonth INT NOT NULL,
        PeopleJoined INT NOT NULL,
        PeopleLeft INT NOT NULL,
        Subscribers INT NOT NULL
    )

INSERT      #Stage
    (
        theMonth,
        PeopleJoined,
        PeopleLeft,
        Subscribers
    )
SELECT      u.theMonth,
            SUM(CASE WHEN u.theCol = 'DateJoined' THEN u.Registrations ELSE 0 END) AS PeopleJoined,
            SUM(CASE WHEN u.theCol = 'DateLeft' THEN u.Registrations ELSE 0 END) AS PeopleLeft,
            0 AS Subscribers
FROM        (
                SELECT      DATEDIFF(MONTH, 0, DateJoined) AS DateJoined,
                            DATEDIFF(MONTH, 0, DateLeft) AS DateLeft,
                            COUNT(*) AS Registrations
                FROM        dbo.Registrations
                GROUP BY    DATEDIFF(MONTH, 0, DateJoined),
                            DATEDIFF(MONTH, 0, DateLeft)
            ) AS d
UNPIVOT     (
                theMonth
                FOR theCol IN (d.DateJoined, d.DateLeft)
            ) AS u
GROUP BY    u.theMonth
HAVING      SUM(CASE WHEN u.theCol = 'DateJoined' THEN u.Registrations ELSE 0 END) > 0

UPDATE  tgt
SET     tgt.Subscribers = (
            SELECT  SUM(src.PeopleJoined - src.PeopleLeft)
            FROM    #Stage AS src
            WHERE   src.theMonth <= tgt.theMonth
        )
FROM    #Stage AS tgt

SELECT      DATEADD(MONTH, theMonth, 0) AS theMonth,
            PeopleJoined,
            PeopleLeft,
            Subscribers
FROM        #Stage
ORDER BY    theMonth

DROP TABLE  #Stage
answered Feb 08 '10 at 10:34 AM
Peso

By 'approved' I mean it is OK with TheSQLGuru and others.
Feb 08 '10 at 10:35 AM Peso