This is a follow-on from this question about handling large decimals in SQL 2000. I'm not looking for solutions; what I want to know is WHY it behaves this way. To recap, the original question was how to perform this calculation in SQL 2000,
as it results in "Arithmetic overflow error converting expression to data type numeric". Fatherjack came up with a solution using a number of custom functions, and Oleg suggested a trick whereby he moved the decimal point around. The calculation works fine in SQL 2005, producing the result
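The original calculation isn't reproduced here, but a minimal repro with the same declared types as the precision arithmetic discussed below (decimal(33,0) divided by decimal(10,0); the actual digits are made up) might look like:

```sql
-- Hypothetical repro: operand types chosen to match the
-- precision/scale arithmetic discussed below; digits are made up.
DECLARE @big   decimal(33,0)
DECLARE @small decimal(10,0)

SET @big   = 123456789012345678901234567890123  -- 33 digits
SET @small = 1234567890                         -- 10 digits

-- Works on SQL 2005; on SQL 2000 this shape reportedly raises
-- "Arithmetic overflow error converting expression to data type numeric"
SELECT @big / @small
```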
Some background... the decimal datatype can store a maximum of 38 digits, and is defined as
Looking at the values in our calculation, we have
all of which fit the rules! However, there are rules (SQL 2000, SQL 2005) about the resulting datatype when decimal datatypes are combined, so if we take
and use the rule
we get
precision = 33 - 0 + 0 + max(6, 0 + 10 + 1) = 44

scale = max(6, 0 + 10 + 1) = 11

Oh no, we have a precision of 44 and a scale of 11; this breaks the rules. Ahah! BUT!
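One way to see what type the engine actually assigns to the quotient (on versions where the query runs at all) is SQL_VARIANT_PROPERTY. For the decimal(33,0) / decimal(10,0) shape above:

```sql
-- Inspect the precision and scale assigned to the quotient.
-- On versions where this runs, it shows the capped type
-- rather than the uncapped (44,11) computed by the raw rule.
SELECT
    SQL_VARIANT_PROPERTY(CAST(1 AS decimal(33,0)) / CAST(1 AS decimal(10,0)), 'Precision') AS prec,
    SQL_VARIANT_PROPERTY(CAST(1 AS decimal(33,0)) / CAST(1 AS decimal(10,0)), 'Scale')     AS scale
```

Whether the reported scale is 5 or 6 is exactly the point in dispute below, so I won't claim an output here.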
which suggests that the result will be forced into a decimal(38,5). So now the questions:
Why does SQL 2000 fail when BOL for SQL 2000 and SQL 2005 contain the same info?
It feels like it is in the intermediate steps of the calculation that SQL 2005 allows the precision to exceed the limit, since the final result is fine, whereas SQL 2000 has problems. Any ideas?

SQL Server 2005 never reduces the scale to less than 6 when it encounters a result that requires precision > 38. You might notice that the 'magic number' there (6) is the same one that appears in the documented max expressions. There have been so many little bugs in this area that it's hard to remember whether this particular issue was fixed in 2005 RTM or later; I can't find the reference for the moment. The explicit minimum of 6 doesn't appear to be properly documented either, though it is alluded to in many places, including this KB article.

Paul

Kev, I thought that you and I had figured it out, at least to some degree. The only problem with a 33-digit input is that, because the scale wants to stay at 6, the 2000 engine does not see that the number of significant digits can actually be smaller after the division, and that it would therefore be safe to apply the documented rule and reduce the scale accordingly. This is why we were able to shift the 33 digits to the left, leaving 32 integral-part digits: 32 + 6 is no longer greater than 38, so it works. In the same fashion we would have to shift a 34-digit input 2 places, etc.
Apr 29, 2010 at 01:12 PM
Oleg
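A rough sketch of the shifting idea described above, assuming Oleg's reasoning and made-up digits (the exact casts in his original answer may differ): present the dividend with only 32 integral digits so that 32 plus the minimum scale of 6 fits in 38, then shift the quotient back.

```sql
-- Hypothetical sketch of the decimal-point trick (digits made up).
DECLARE @big     decimal(33,0)
DECLARE @small   decimal(10,0)
DECLARE @shifted decimal(33,1)

SET @big     = 123456789012345678901234567890123  -- 33 digits
SET @small   = 1234567890                         -- 10 digits
-- Shift one place right via multiplication (the multiplication type
-- rule stays within 38 digits, unlike dividing by 10 would).
SET @shifted = @big * 0.1  -- now 32 integral digits + 1 decimal place

-- Divide, then undo the shift; the narrow decimal(2,0) cast keeps
-- the final multiplication's result type within 38 digits too.
SELECT (@shifted / @small) * CAST(10 AS decimal(2,0))
```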

Haven't a clue, but I'm contacting someone who might.
This might help: http://www.sqlservercentral.com/Forums/FindPost870258.aspx