### osdir.com

On 2018-08-28, Frank Millman <frank at chagford.com> wrote:

>>>> x = 1.1 + 2.2
>>>> x
> 3.3000000000000003
>
> According to the docs, the reason is that "numbers like 1.1 and 2.2 do not
> have exact representations in binary floating point."

Right.

> So when I do this -
>
>>>> y = 3.3
>>>> y
> 3.3
>
> what exactly is happening?

By default, Python prints the shortest decimal string that converts
back to exactly the same float, and for y that shortest string is
"3.3".
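To see this shortest-round-trip behaviour from a script rather than the REPL (a small illustration, not from the original post):

```python
# Python 3's repr() picks the shortest decimal string that converts
# back to exactly the same float -- for the float 3.3 that is "3.3".
y = 3.3
print(repr(y))               # the shortest round-tripping string: '3.3'
print(float(repr(y)) == y)   # converting it back recovers the same float
print(format(y, ".20g"))     # asking for more digits exposes the stored value
```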

> What is 'y' at this point?

If you want to see the exact value:

>>> y = 3.3
>>> y.hex()
'0x1.a666666666666p+1'

Or in decimal, you can request more significant digits than the default:

>>> '%0.60f' % y
'3.299999999999999822364316059974953532218933105468750000000000'
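The decimal module can print the exact stored value without guessing a digit count, since converting a float to Decimal is lossless (a small illustration added here, not part of the original reply):

```python
from decimal import Decimal

# Decimal(float) converts the binary value exactly, with no rounding,
# so this prints every digit of the number actually stored for 3.3.
y = 3.3
print(Decimal(y))
```

This prints the same digits as the `%0.60f` string above, minus the trailing zeros.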

> Or if I do this -
>
>>>> z = (1.1 + 2.2) * 10 / 10
>>>> z
> 3.3

>>> z = (1.1 + 2.2) * 10 / 10
>>> '%0.60f' % z
'3.299999999999999822364316059974953532218933105468750000000000'
>>> z.hex()
'0x1.a666666666666p+1'

>>> y = 1.1 + 2.2
>>> '%0.60f' % y
'3.300000000000000266453525910037569701671600341796875000000000'
>>> y.hex()
'0x1.a666666666667p+1'

As you can see from the hex values, they differ by one (least
significant) bit in the significand.
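You can check that one-bit gap directly with math.ulp and math.nextafter (both available since Python 3.9; a sketch added here, not from the original post):

```python
import math

y = 1.1 + 2.2              # the larger of the two neighbouring floats
z = (1.1 + 2.2) * 10 / 10  # the smaller one, equal to the float 3.3

# The two values are adjacent doubles: their difference is exactly one
# unit in the last place (ulp) of z, and z's next float upward is y.
print(y - z == math.ulp(z))
print(math.nextafter(z, math.inf) == y)
```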

> What makes it different from the first example?

The additional multiplication and division operations are not exact,
so "* 10 / 10" produces a result that's slightly different than the
one you started with.
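You can watch the rounding happen step by step, using the same hex views as above (a small illustration added here):

```python
# Each arithmetic step rounds to the nearest representable double,
# so the chain of operations need not land back on the value it
# started from.
a = 1.1 + 2.2   # 0x1.a666666666667p+1
b = a * 10      # rounds to exactly 33.0
c = b / 10      # rounds to the float 3.3
print(a.hex())  # '0x1.a666666666667p+1'
print(c.hex())  # '0x1.a666666666666p+1'
```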

Two things to remember:

1. The actual numbers in the computer are in base-2.  The mapping
between what you see in base-10 and the real values in base-2
_is_not_always_exact_.
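To see that base-2 mapping concretely, fractions.Fraction exposes the exact binary fraction a decimal literal actually becomes (a small illustration, not part of the original reply):

```python
from fractions import Fraction

# The literal 1.1 is stored as the nearest base-2 fraction -- a huge
# power-of-two denominator -- not as the exact ratio 11/10.
print(Fraction(1.1))
print(Fraction(1.1) == Fraction(11, 10))  # the mapping was not exact
print(Fraction(0.5) == Fraction(1, 2))    # but 0.5 maps exactly
```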