This is a follow-on from this question about handling large decimals in SQL 2000. I'm not looking for solutions - what I want to know is WHY it behaves this way.

So to recap, the original question was how to perform this calculation in SQL 2000:

```
select 100333601600192534100000300182947.0 / 97
```

as this results in

```
Arithmetic overflow error converting expression to data type numeric
```

Fatherjack came up with a solution using a lot of custom functions, and Oleg suggested a trick whereby he moved the decimal point around.

The calculation works fine in SQL 2005, producing the result

```
1034367026806108598969075259618.010309
```

Some background... the decimal datatype can store a maximum of 38 digits, and is defined as `decimal(precision, scale)`,

where precision is the total number of digits and scale is the number of digits to the right of the decimal point. The decimal datatype is limited by the rule (SQL 2000 | SQL 2005)

0 <= s <= p <= 38

Looking at the values in our calculation, we have

```
value                                    precision  scale
100333601600192534100000300182947.0     34         1
97                                      10         0
1034367026806108598969075259618.010309  37         6
```

all of which fit the rules!
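Those precision and scale figures can be double-checked mechanically. A throwaway Python helper (the function is mine, not anything SQL Server exposes) that counts digits in a decimal literal:

```python
def precision_scale(literal: str) -> tuple[int, int]:
    """Precision = total digit count; scale = digits right of the point."""
    digits = literal.replace("-", "").replace(".", "")
    scale = len(literal.split(".")[1]) if "." in literal else 0
    return len(digits), scale

print(precision_scale("100333601600192534100000300182947.0"))     # (34, 1)
print(precision_scale("1034367026806108598969075259618.010309"))  # (37, 6)
```

Note that `97` is the odd one out: it is an int literal, and SQL Server treats an int as `decimal(10, 0)` when it meets a decimal operand, which is why the table shows precision 10 rather than 2.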

However, there are rules (SQL 2000 | SQL 2005) about the resulting datatype when decimal datatypes are combined in an expression. So if we take

e1 = 100333601600192534100000300182947.0

e2 = 97

and use the rule for e1 / e2:

result precision = p1 - s1 + s2 + max(6, s1 + p2 + 1)

result scale = max(6, s1 + p2 + 1)

we get

precision = 34 - 1 + 0 + max(6, 1 + 10 + 1)

= 33 + max(6, 12)

= 33 + 12

= 45

scale = max(6, 1 + 10 + 1)

= max(6, 12)

= 12

oh no, we have a precision of 45 and a scale of 12 - this breaks the rules!
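For what it's worth, BOL's division rule is easy to mechanise. A small Python sketch (the function name is mine), fed with the operand types from the table above - `decimal(34,1)` divided by `decimal(10,0)`, since the int operand is treated as `decimal(10,0)`:

```python
def divide_result_type(p1: int, s1: int, p2: int, s2: int) -> tuple[int, int]:
    """BOL's rule for e1 / e2, before any 38-digit cap is applied."""
    scale = max(6, s1 + p2 + 1)
    precision = p1 - s1 + s2 + scale
    return precision, scale

print(divide_result_type(34, 1, 10, 0))  # (45, 12)
```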

Ahah! BUT!

> The result precision and scale have an absolute maximum of 38. When a result precision is greater than 38, the corresponding scale is reduced to prevent the integral part of a result from being truncated. (BOL)

suggests that the result will be forced into a `decimal(38,5)`
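BOL doesn't spell out exactly how the scale is reduced. A naive reading - drop as many fractional digits as the precision overshoots 38 - can be sketched in Python (the function name and the clamping logic are my assumptions, not documented behaviour):

```python
def cap_to_38(precision: int, scale: int) -> tuple[int, int]:
    """Naive reading of BOL's cap: clamp precision to 38 and drop the same
    number of digits from the scale so the integral part survives."""
    if precision > 38:
        scale = max(scale - (precision - 38), 0)
        precision = 38
    return precision, scale

print(cap_to_38(45, 12))  # (38, 5)
```

On this reading the division lands on `decimal(38,5)` - yet SQL 2005 plainly returns a scale of 6, which is the heart of the third question.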

So now the questions:

- why does this work in SQL 2005 and not SQL 2000, especially when the BOL pages for SQL 2000 and SQL 2005 contain the same info?
- what changed between the two versions, and where can I find the info?
- if the result is forced into a `decimal(38,5)`, why does the SQL 2005 result have a scale of 6?

It feels like SQL 2005 allows the precision to blow past 38 in the intermediate steps of the calculation, since the final result is fine, whereas SQL 2000 has problems.

Any ideas?