## rounding

I want to round a number to the nearest integer.

The calculation is 41/104 * 52.

This results in 20.5.

If I do round(41/104) * 52, the result is 20.

If I divide 41/104 and then take the result
0.3942307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692307692230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769

and put this into TTCalc and multiply by 52, I get 20.499999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999963999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999988

So why does (41/104)*52 display 20.5 and not 20.499999...?

Hi, I just reread my post and it's a bit confusing.

1. If I calculate (41/104)*52, the calculator displays 20.5.

2. If I calculate 41/104, paste that result into the calc box, and multiply it by 52, I get 20.4999999...

Why is there a difference between the results of calculations 1 and 2?

thanks, Paul
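As a quick illustration, the difference can be reproduced with exact rational arithmetic using Python's `fractions` module (a sketch of the effect, not of how TTCalc computes internally):

```python
from fractions import Fraction

# With exact rational arithmetic, (41/104) * 52 really is 20.5.
assert Fraction(41, 104) * 52 == Fraction("20.5")

# Mimic pasting a truncated decimal printout back into the calculator:
# the pasted string is slightly below the true quotient, so the product
# falls just short of 20.5 and displays as 20.4999...
pasted = Fraction("0.3942307692307692307692307692")
assert pasted * 52 < Fraction("20.5")
```

The pasted decimal string is a different number from the calculator's internal binary value, which is the whole story in miniature.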

- decimal 41 is converted to binary 101001
- decimal 104 is converted to binary 1101000
- now the division is performed; the result (96-bit mantissa) is 0.0110010011101100010011101100010011101100010011101100010011101100010011101100010011101100010011101 (the result is an approximation of the real value; we keep only the first 96 bits)
- the binary result is converted back to decimal: 0.3942307692307692307692307692

0.394230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769230769

So if you enter 0.3942307692307692307... instead of 41/104, the result is slightly different.
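The truncation step can be reproduced in Python by keeping only the first 96 significant bits of 41/104 (a sketch; since 41/104 ≈ 0.394 lies in [0.25, 0.5), its 96 significant bits run from 2^-2 down to 2^-97):

```python
from fractions import Fraction

# Truncate 41/104 to 97 fractional bits, i.e. 96 significant bits
mantissa = (41 << 97) // 104
approx = Fraction(mantissa, 1 << 97)

assert approx < Fraction(41, 104)     # truncation drops a positive tail
assert approx * 52 < Fraction(41, 2)  # so the product is 20.4999..., not 20.5
```

The truncated value times 52 lands just below 20.5, which is why re-entering the printed quotient produces the long string of nines.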

Here is some more information.

Rational number arithmetic is not being used, so most quotients produce infinite recurring digit strings, as with 1/3 in decimal. In base 2, 1/10 is also recurring: 0.000110011001100110011001100... Binary floating-point arithmetic is thus unsuitable for numbers representing dollars and cents, and different precisions (single or double) can round unexpectedly. Decimal $10.15 is 1010.0010011001100110011 in 32-bit binary in the IBM PC style (twenty-three-bit mantissa), whose value in decimal is exactly 10.1499996185302734375. With double precision (53-bit mantissa), the nearest binary representation of $10.15 is 1010.0010011001100110011001100110011001100110011001101, whereas the proper value is 1010.001001100110011001100110011001100110011001100110011... recurring; this time the last bit has been rounded up, so the exact decimal value is 10.1500000000000003552713678800500929355621337890625.
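Both exact stored values can be inspected in Python, where `Decimal(float)` shows the exact binary value and a `struct` round-trip stands in for single precision (a sketch):

```python
import struct
from decimal import Decimal

# Round-trip 10.15 through a 32-bit float; unpack returns the exact
# single-precision value widened back to a Python float
single = struct.unpack('f', struct.pack('f', 10.15))[0]

print(Decimal(single))  # 10.1499996185302734375 (just below 10.15)
print(Decimal(10.15))   # 10.1500000000000003552713678800500929355621337890625 (just above)
```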

Both values, if printed to two decimal digits, will round to give 10.15, but neither value in binary equals it, as it is unattainable in any finite-length binary string, just as 2/3 is in decimal. Tests such as a < b can be puzzling if a and b were computed at different precisions.
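A quick Python illustration of that trap, again using a 32-bit round-trip to stand in for a single-precision variable:

```python
import struct

a = struct.unpack('f', struct.pack('f', 10.15))[0]  # "10.15" at single precision
b = 10.15                                            # "10.15" at double precision

# The same written constant compares unequal across precisions
print(a < b)   # True
print(a == b)  # False
```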

If rational number arithmetic were used, then 2/3, 1/10 and so forth would be represented exactly, and a calculation such as (2/3)*3 would come out as exactly 2. But addition and subtraction cause problems for the p/q representation, as the sizes of p and q increase very rapidly. Just try 99/100 + 98/101.
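Python's `fractions` module implements exactly this kind of rational arithmetic, and makes both points easy to check:

```python
from fractions import Fraction

# Exact rationals: (2/3) * 3 comes out as exactly 2
assert Fraction(2, 3) * 3 == 2

# But numerator and denominator grow quickly under addition
s = Fraction(99, 100) + Fraction(98, 101)
print(s)  # 19799/10100
```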

I omitted to mention that if a value such as 10.15 is printed with ONE decimal digit, then the single-precision representation will yield 10.1 whereas the double-precision value will yield 10.2, because the double's representation exceeds 10.15 and so rounds up, whereas the single-precision form is below 10.15 and so does not. That is how I noticed this: a certain result was being printed with one decimal digit, and I was comparing the output from a single-precision run with that from a double-precision run.
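That one-digit difference is easy to reproduce in Python (the `struct` round-trip again stands in for a single-precision run):

```python
import struct

single = struct.unpack('f', struct.pack('f', 10.15))[0]  # exactly 10.1499996185302734375
double = 10.15                                            # exactly 10.15000000000000035...

print(f"{single:.1f}")  # 10.1 -- below 10.15, rounds down
print(f"{double:.1f}")  # 10.2 -- above 10.15, rounds up
```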