
Am I to assume there is a performance hit using UDFs compared to nested functions?

At the moment I have the following code that converts the number of seconds into hh:mm:ss.

CONVERT(varchar(10), DATEADD(ss, DurationSeconds, '00:00:00'), 108)

Messy but it works!

If I converted that to the following, which is neater, but then ran it over thousands of rows, would I see a performance hit?

dbo.ConvertSecondsToHMS(DurationSeconds)

The code I would use to create my function would be something like this:

CREATE FUNCTION ConvertSecondsToHMS (@Seconds int)
RETURNS varchar(10)
AS
BEGIN
    RETURN CONVERT(varchar(10), DATEADD(ss, @Seconds, '00:00:00'), 108)
END
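
For what it's worth, the sort of side-by-side test I had in mind would look something like this (the #Durations table and the row count are just throwaway examples, not my real schema):

-- build a throwaway table of sample durations
SELECT TOP (500000)
       ABS(CHECKSUM(NEWID())) % 86400 AS DurationSeconds
INTO   #Durations
FROM   sys.all_columns AS a
CROSS JOIN sys.all_columns AS b;

SET STATISTICS TIME ON;

-- inline expression
SELECT CONVERT(varchar(10), DATEADD(ss, DurationSeconds, '00:00:00'), 108)
FROM   #Durations;

-- scalar UDF
SELECT dbo.ConvertSecondsToHMS(DurationSeconds)
FROM   #Durations;

SET STATISTICS TIME OFF;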

asked May 14, 2010 at 04:56 AM

Ian Roke

Why did you ask this question? You know the answer is yes, and in the time it took you to type up the question you could have tested it yourself, since you already have the function at hand. Sorry, just curious...
May 14, 2010 at 05:15 AM Scot Hauder
I don't have as good an understanding of SQL Server internals as some of the guys on here, so I was wondering whether the additional hit was worth it or not.
May 14, 2010 at 06:25 AM Ian Roke

2 answers

Similar to what Scot said... The answer is yes.

However, you have to weigh that against the positive aspect of encapsulating code so that you do not have to duplicate it all over your codebase. If you are dealing with a low volume of data, the encapsulation is worth it, because future changes only need to be made in one place (if you find a bug in your code, with it encapsulated in your function, you only have to fix it once).

The negative aspect, which usually is not noticed until you have an extremely large volume of data, is the hit on performance. This is what happened at my current employer, and we have slowly started moving away from functions for some of the simpler tasks (like date/time conversions similar to yours) while keeping functions for the more complicated ones.

We feel the benefit of keeping complicated code encapsulated, instead of attempting to duplicate it in many areas of our codebase, is worthwhile. (And yes, copy and paste works until you fat-finger it one day, or until a developer attempts to improve performance and introduces a bug in 1 of the 20 locations where the complicated functionality is used, and you spend hours trying to track down what went wrong and where.)


answered May 14, 2010 at 09:11 AM

John Franklin


+1 to John - good answer.

Personally, I wouldn't sit on the side of encapsulation though - I would usually go for the more performant option on anything that had to deal with more than a few rows. Bear in mind I come from a large database background (think 6 billion rows and 750GB).

One thing though: there is a 'best of both worlds' solution - it just doesn't happen to exist yet. See here on Connect.

The reason it is slow is that there is nothing in the metadata telling the query optimizer 'this scalar function just does a simple calculation', so it effectively turns the call into hidden RBAR, in case you are doing data lookups inside your scalar function (which is a hugely bad practice anyway). I did blog about it on my old blog here; I will be moving that article to my new blog sometime in the future.

Dave B did point out to me that in that blog article I was experiencing the observer effect: by measuring the performance difference, I was actually making it worse. However, the performance penalty is still definitely present.
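
In the meantime, one workaround that gets you part of the way there is to wrap the expression in an inline table-valued function and call it with CROSS APPLY - the optimizer can expand an inline TVF into the outer query instead of invoking it once per row. A rough sketch (the function and table names below are just for illustration):

-- an inline TVF the optimizer can expand into the calling plan
CREATE FUNCTION dbo.ConvertSecondsToHMS_TVF (@Seconds int)
RETURNS TABLE
AS
RETURN
    SELECT CONVERT(varchar(10), DATEADD(ss, @Seconds, '00:00:00'), 108) AS HMS;
GO

-- usage: one set-based expansion rather than a scalar call per row
SELECT d.DurationSeconds, hms.HMS
FROM   dbo.YourTable AS d
CROSS APPLY dbo.ConvertSecondsToHMS_TVF(d.DurationSeconds) AS hms;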


answered May 14, 2010 at 09:40 AM

Matt Whitfield

It would be nice if the optimizer could recognize and inline those functions... As per your comment, what is the biggest challenge/mission-critical problem you have had to deal with on a 750GB database? How do you create a test DB? Script out a subset of the data, restore a full copy to the SAN or another server, etc.?
May 14, 2010 at 12:54 PM Scot Hauder
Our test DB was just a data-cleansed copy of the big database, but it only had about 300GB of data in it. The biggest challenge was making reports run quickly, so I devised a method whereby a background process went through the data pre-aggregating it, and the reports then ran off the pre-aggregated data. It worked pretty well, because the clients were always impressed with reports chucking out data in 2 seconds from such a big data set...
May 15, 2010 at 05:21 AM Matt Whitfield ♦♦
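
A bare-bones sketch of that kind of pre-aggregation (the table and column names below are invented for illustration, not the actual schema): a scheduled background job summarises the detail rows into a small reporting table, and the reports query only the summary.

-- summary table the reports read from
CREATE TABLE dbo.CallSummaryDaily
(
    CallDate     date   NOT NULL PRIMARY KEY,
    CallCount    int    NOT NULL,
    TotalSeconds bigint NOT NULL
);

-- run by the background job: rebuild yesterday's bucket from the detail table
DELETE FROM dbo.CallSummaryDaily
WHERE  CallDate = CAST(DATEADD(day, -1, GETDATE()) AS date);

INSERT INTO dbo.CallSummaryDaily (CallDate, CallCount, TotalSeconds)
SELECT CAST(c.StartTime AS date), COUNT(*), SUM(c.DurationSeconds)
FROM   dbo.Calls AS c
WHERE  c.StartTime >= CAST(DATEADD(day, -1, GETDATE()) AS date)
  AND  c.StartTime <  CAST(GETDATE() AS date)
GROUP BY CAST(c.StartTime AS date);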