Entering a very large number
On 27/03/2018 04:49, Richard Damon wrote:
> On 3/26/18 8:46 AM, bartc wrote:
>> Hence my testing with CPython 3.6, rather than on something like PyPy
>> which can give results that are meaningless. Because, for example,
>> real code doesn't repeatedly execute the same pointless fragment
>> millions of times. But a real context is too complicated to set up.
> The bigger issue is that these sorts of micro-measurements aren't
> actually that good at measuring real quantitative performance costs.
> They can often give qualitative indications, but the way modern
> computers work, the processing environment is extremely important to
> performance, so these sorts of isolated measures can often be misleading.
> The problem is that if you measure operation a and then measure
> operation b, and assume that doing a then b in a loop will take a time
> of a+b, you will quite often be significantly wrong, as cache
> performance can drastically affect things. Thus you really need to do
> performance testing as part of a practical-sized exercise, not a micro
> one, in order to get a real measurement.
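The a+b claim above is easy to check with the standard library's timeit module. This is a minimal sketch, not part of the original thread; the two operations and the iteration count are arbitrary placeholders chosen for illustration:

```python
import timeit

N = 1_000_000

# Time operation "a" and operation "b" in isolation, then together.
t_a = timeit.timeit("x * 3", setup="x = 12345", number=N)
t_b = timeit.timeit("x + 7", setup="x = 12345", number=N)
t_ab = timeit.timeit("x * 3; x + 7", setup="x = 12345", number=N)

# t_ab is rarely exactly t_a + t_b: cache state, branch prediction
# and the interpreter's dispatch loop all interact.
print(f"a alone:   {t_a:.4f}s")
print(f"b alone:   {t_b:.4f}s")
print(f"a then b:  {t_ab:.4f}s")
print(f"sum a+b:   {t_a + t_b:.4f}s")
```

How far t_ab diverges from t_a + t_b will vary from machine to machine, which is precisely the point being argued.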
That might apply to native code, where the timing behaviour of a
complicated chip like the x86 can be unintuitive.
But my comments were specifically about byte-code executed with CPython.
There the behaviour is a level or two removed from the hardware, with
slightly different characteristics.
(Since the program you are actually executing is the interpreter, not
the Python program, which is merely data. And whatever aggressive
optimisations are done to the interpreter code, they are unaffected by
which Python program is being run.)
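The "program as data" point can be made concrete with the standard dis module, which shows the byte-code that CPython's evaluation loop walks through. The function here is just a placeholder for illustration:

```python
import dis

def add(a, b):
    return a + b

# The interpreter's dispatch loop fetches these opcodes one by one;
# the machine code actually running on the CPU is the (already
# compiled) interpreter itself, regardless of what add() contains.
dis.dis(add)
```

The exact opcode names vary between CPython versions (e.g. BINARY_ADD before 3.11, BINARY_OP after), but in every case they are data consumed by the same interpreter loop.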