
Hi Frank,

One difference is that, in order to carry out the instructions in the first example, the computer has to perform arithmetic. It adds the binary approximation of 1.1 (which is very slightly wrong) to the binary approximation of 2.2 (which, again, is very slightly wrong). It then prints the denary equivalent of the sum of those two, which happens to be very slightly more than 3.3.

When you type 3.3, it stores that as the binary approximation of 3.3 (which is what y is at that stage, to answer your question, and which is, no surprise, also very slightly wrong), and then prints the denary equivalent of that binary approximation, which happens to end up sufficiently close to 3.3 that it prints 3.3.

I hope that helps. The experts around here (of which I am not one) may well point you to the decimal package, which allows better handling of such arithmetic, if you want the computer to behave more like you would expect it to.

Regards,

Stephen Tucker.

On Tue, Aug 28, 2018 at 3:11 PM, Frank Millman <frank at chagford.com> wrote:
> Hi all
>
> I know about this gotcha -
>
> >>> x = 1.1 + 2.2
> >>> x
> 3.3000000000000003
>
> According to the docs, the reason is that "numbers like 1.1 and 2.2 do not
> have exact representations in binary floating point."
>
> So when I do this -
>
> >>> y = 3.3
> >>> y
> 3.3
>
> what exactly is happening? What is 'y' at this point?
>
> Or if I do this -
>
> >>> z = (1.1 + 2.2) * 10 / 10
> >>> z
> 3.3
>
> What makes it different from the first example?
>
> Thanks
>
> Frank Millman
>
> --
> https://mail.python.org/mailman/listinfo/python-list
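Stephen's explanation can be checked directly in Python. A quick sketch: constructing a `decimal.Decimal` from a float reveals the exact binary value actually being stored (the digit strings in the comments assume standard IEEE-754 doubles, which is what CPython uses on essentially all platforms):

```python
from decimal import Decimal

# The literals 1.1 and 2.2 are stored as the nearest binary doubles.
# Decimal(float) shows the exact value actually stored:
print(Decimal(1.1))  # 1.100000000000000088817841970012523233890533447265625
print(Decimal(2.2))  # 2.20000000000000017763568394002504646778106689453125

# Their sum is very slightly more than 3.3, and repr() shows enough
# digits to distinguish it from the double nearest to 3.3:
x = 1.1 + 2.2
print(x)             # 3.3000000000000003

# Typing 3.3 stores the nearest double to 3.3 -- also very slightly
# wrong -- but repr() prints the shortest string that round-trips
# back to that same double, which is simply "3.3":
y = 3.3
print(y)             # 3.3
print(x == y)        # False
```

So y is a perfectly ordinary float; it just happens that the shortest decimal string identifying its stored value is "3.3", so that is what gets printed.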

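As Stephen mentions, the decimal package in the standard library does the arithmetic in base ten, so the original surprise disappears; and Frank's third example works because the intermediate rounding happens to land back on the same double as the literal 3.3. A brief sketch of both points:

```python
from decimal import Decimal

# Constructing Decimal from *strings* keeps the values exact in base
# ten, so the addition behaves like pencil-and-paper arithmetic:
print(Decimal('1.1') + Decimal('2.2'))                    # 3.3
print(Decimal('1.1') + Decimal('2.2') == Decimal('3.3'))  # True

# Frank's third example works by a happy accident of rounding: the
# multiply by 10 rounds to exactly 33.0, and 33.0 / 10 then rounds to
# the very same double that the literal 3.3 produces:
z = (1.1 + 2.2) * 10 / 10
print((1.1 + 2.2) * 10)  # 33.0
print(z)                 # 3.3
print(z == 3.3)          # True
```

Note that string construction is the key: `Decimal(1.1)` would faithfully preserve the already-inexact binary value, whereas `Decimal('1.1')` is exactly eleven tenths.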
**Question about floating point**