Question about Decimal and rounding
On 27/04/18 10:21, Frank Millman wrote:
> Hi all
> I have an object which represents a Decimal type.
> It can receive input from various sources. It has to round the value to
> a particular scale factor before storing it. The scale factor can vary,
> so it has to be looked up every time, which is a slight overhead. I
> thought I could speed it up a bit by checking first to see if the value
> has any decimal places. If not, I can skip the scaling routine.
> This is how I do it -
>    s = str(value)
>    if '.' in s:
>        int_portion, dec_portion = s.split('.')
>        is_integer = (int(int_portion) == value)
>    else:
>        is_integer = True
> It assumes that the value is in the form iii.ddd or just iii. Today I
> found the following value -
>    -1.4210854715202004e-14
> which does not match my assumption.
> It happens to work, but I don't understand enough about the notation to
> know if this is reliable, or if there are any corner cases where my test
> would fail.
This is not reliable. Decimal is happy to give you integers in
scientific notation if this is needed to keep track of the number of
significant digits:
>>> Decimal('13.89e3') == 13890
True
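A short sketch of why that matters: the string-based test from the
original post misclassifies such a value, because its string form
contains a '.' even though the value is an integer:

```python
from decimal import Decimal

# An integer-valued Decimal that prints in scientific notation
value = Decimal('13.89e3')
print(str(value))  # '1.389E+4'

# The string-based test from the original post
s = str(value)
if '.' in s:
    int_portion, dec_portion = s.split('.')
    is_integer = (int(int_portion) == value)
else:
    is_integer = True

# is_integer is False here, even though value == 13890
print(is_integer)
```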
It appears to me that the "obvious" way to check whether a Decimal
number is an integer is simply:
>>> d1 = Decimal('1.1')
>>> d2 = Decimal('3')
>>> int(d1) == d1
False
>>> int(d2) == d2
True
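One self-contained way to package that check (the helper name is mine,
not from the thread): comparing against `to_integral_value()` avoids
materialising a potentially huge Python int the way `int(d)` does, and
leaves values that are already integral unchanged.

```python
from decimal import Decimal

def is_integral(d):
    """Return True if the Decimal d has no fractional part."""
    # to_integral_value() rounds to an integer; an integral value
    # is returned unchanged, so the equality holds exactly for it.
    return d == d.to_integral_value()

print(is_integral(Decimal('1.1')))                      # False
print(is_integral(Decimal('3')))                        # True
print(is_integral(Decimal('13.89e3')))                  # True
print(is_integral(Decimal('-1.4210854715202004e-14')))  # False
```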
* Beware of spending too much time on premature optimisations that you
might not need.
* Are you *absolutely* sure that you can *always* skip your rounding
step for all integers? There's *no* risk of this optimisation ruining
some future dataset?