[Python-Dev] Micro-benchmarks for function calls (PEP 576/579/580)
Maybe this is obvious, but I find myself often forgetting that most
modern CPUs can change their speed (and energy consumption) depending
on a moving average of CPU load.
If you don't disable this "green" feature and the benchmarks are quick,
the results can vary hugely depending on exactly when, and whether, the
CPU switches to fast mode.
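As a quick sanity check before benchmarking, you can inspect the CPU
frequency governors. This is a minimal sketch assuming Linux and the
standard kernel cpufreq sysfs paths; on other systems (or in some VMs)
no information is exposed and the function simply returns an empty dict.
Tools such as Victor's perf/pyperf package also offer a "system tune"
command that configures this for you.

```python
# Sketch: report the cpufreq scaling governor per CPU (Linux sysfs only;
# paths may be absent on other platforms or inside containers/VMs).
from pathlib import Path

def scaling_governors():
    """Return {cpu_name: governor} for each CPU exposing cpufreq info."""
    governors = {}
    pattern = "cpu[0-9]*/cpufreq/scaling_governor"
    for path in sorted(Path("/sys/devices/system/cpu").glob(pattern)):
        governors[path.parent.parent.name] = path.read_text().strip()
    return governors

if __name__ == "__main__":
    govs = scaling_governors()
    if not govs:
        print("no cpufreq information exposed")
    elif any(g != "performance" for g in govs.values()):
        print("warning: not all CPUs pinned to 'performance':", govs)
    else:
        print("all CPUs use the 'performance' governor")
```

A governor other than "performance" (typically "powersave" or
"ondemand") means the clock speed can ramp mid-benchmark.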
On Wed, Jul 11, 2018 at 12:53 AM Victor Stinner <vstinner at redhat.com> wrote:
> The pyperformance benchmark suite had micro benchmarks on function
> calls, but I removed them because they were sending the wrong signal.
> A function call by itself doesn't matter to compare two versions of
> CPython, or CPython to PyPy. It's also very hard to measure the cost
> of a function call when you are using a JIT compiler which is able to
> inline the code into the caller... So I moved all these stupid
> "micro benchmarks" to a dedicated Git repository:
> Sometimes, I add new micro benchmarks when I work on one specific
> micro optimization.
> But more generally, I suggest that you not run micro benchmarks and
> that you avoid micro optimizations :-)
> 2018-07-10 0:20 GMT+02:00 Jeroen Demeyer <J.Demeyer at ugent.be>:
> > Here is an initial version of a micro-benchmark for C function calling:
> > https://github.com/jdemeyer/callbench
> > I don't have results yet, since I'm struggling to find the right
> > options for "perf timeit" to get a stable result. If somebody knows
> > how to do this, help is welcome.
> > Jeroen.
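For what it's worth, even without the perf module you can reduce
run-to-run noise with the stdlib: repeat the measurement several times
and report the minimum, which is least affected by CPU frequency
ramp-up and background load. This is only an illustrative sketch (the
helper name and parameters are made up here, not part of callbench or
perf):

```python
# Sketch: estimate per-call cost with stdlib timeit, taking the minimum
# over several timed batches to suppress transient slowdowns.
import timeit

def best_of(stmt, setup="pass", repeat=7, number=100_000):
    """Return the per-call cost in seconds (minimum over `repeat` batches)."""
    timings = timeit.repeat(stmt, setup=setup, repeat=repeat, number=number)
    return min(timings) / number

if __name__ == "__main__":
    # Compare a C-level call (len) against a trivial pure-Python call.
    c_call = best_of("len(s)", setup="s = 'x' * 100")
    py_call = best_of("f()", setup="def f(): pass")
    print(f"len(s): {c_call * 1e9:.1f} ns/call")
    print(f"f():    {py_call * 1e9:.1f} ns/call")
```

perf's timeit goes further by spawning multiple worker processes and
discarding warmup runs, which is usually what you want for publishable
numbers.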
> > _______________________________________________
> > Python-Dev mailing list
> > Python-Dev at python.org
> > https://mail.python.org/mailman/listinfo/python-dev
> > Unsubscribe:
> > https://mail.python.org/mailman/options/python-dev/vstinner%40redhat.com