This is a follow-on from this question about handling large decimals in SQL 2000. I'm not looking for solutions - what I want to know is WHY it behaves this way.
So to recap, the original question was how to perform this calculation in SQL 2000
as this results in
Arithmetic overflow error converting expression to data type numeric
Fatherjack came up with a solution using a lot of custom functions, and Oleg suggested a trick whereby he moved the decimal point around.
The calculation works fine in SQL 2005, producing the result
Some background... the decimal datatype can store a maximum of 38 digits, and is defined as
Looking at the values in our calculation, we have
all of which fit the rules!
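Just to make the check concrete - this is an illustrative sketch in Python, not anything from SQL Server itself (`fits_decimal` and `MAX_PRECISION` are names I've made up), and it assumes the operands are roughly decimal(33,0) and decimal(10,0) as used in the formulas below:

```python
# Sketch of the documented limits on SQL Server's decimal(p, s) type.
MAX_PRECISION = 38  # decimal can hold at most 38 digits in total

def fits_decimal(precision: int, scale: int) -> bool:
    """A (precision, scale) pair is valid when 1 <= p <= 38 and 0 <= s <= p."""
    return 1 <= precision <= MAX_PRECISION and 0 <= scale <= precision

# Each operand fits the rules individually:
print(fits_decimal(33, 0))   # True
print(fits_decimal(10, 0))   # True

# ...but a nominal decimal(44, 11) does not:
print(fits_decimal(44, 11))  # False
```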
and use the rule
precision = 33 - 0 + 0 + max(6, 0 + 10 + 1) = 44
scale = max(6, 0 + 10 + 1) = 11
oh no, we have a precision of 44 and a scale of 11 - this breaks the rules
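Applying the documented division rule (precision = p1 - s1 + s2 + max(6, s1 + p2 + 1), scale = max(6, s1 + p2 + 1)) mechanically - the helper name here is mine, purely for illustration:

```python
# Nominal result type of e1 / e2 per the documented SQL Server rules
# for division. divide_result_type is an illustrative helper, not any
# SQL Server API.
def divide_result_type(p1: int, s1: int, p2: int, s2: int) -> tuple[int, int]:
    scale = max(6, s1 + p2 + 1)
    precision = p1 - s1 + s2 + scale
    return precision, scale

# decimal(33,0) / decimal(10,0) gives a nominal decimal(44,11),
# which breaks the 38-digit limit:
print(divide_result_type(33, 0, 10, 0))  # (44, 11)
```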
suggests that the result will be forced into a decimal(38,5)
So now the questions:
Why does this work in SQL 2005 but not in SQL 2000, when the documentation for both SQL 2000 and SQL 2005 contains the same info?
It feels like it is in the intermediate steps of the calculation that SQL 2005 allows the precision to exceed the limit - the final result is fine, but SQL 2000 has problems along the way.
SQL Server 2005 never reduces the scale to less than 6 when it encounters a result that requires precision > 38. You might notice that the 'magic number' there (6) is the same as in the documented max expressions.
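A sketch of that truncation behaviour as I read it (my own code, not Microsoft's): when the nominal precision exceeds 38, the precision is capped at 38 and the scale is reduced by the same amount, but never below 6:

```python
# Illustrative model of SQL 2005's handling of a nominal result type
# whose precision exceeds the 38-digit limit.
MAX_PRECISION = 38
MIN_SCALE = 6  # the 'magic number' from the max(6, ...) expressions

def truncate_overflow(precision: int, scale: int) -> tuple[int, int]:
    """Clamp a nominal result type to 38 digits, keeping scale >= 6."""
    if precision > MAX_PRECISION:
        overflow = precision - MAX_PRECISION
        scale = max(MIN_SCALE, scale - overflow)
        precision = MAX_PRECISION
    return precision, scale

# For the nominal decimal(44,11): naive reduction would leave a scale
# of 5, but the minimum of 6 kicks in, giving decimal(38,6):
print(truncate_overflow(44, 11))  # (38, 6)
```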
There have been so many little bugs in this area that it's hard to remember whether this particular issue was fixed in 2005 RTM or later... I can't find the reference for the moment.
The explicit minimum of 6 doesn't appear to be properly documented either, though it is alluded to in many places, including this KB article