The pyperformance benchmark suite had micro benchmarks on function
calls, but I removed them because they were sending the wrong signal.
The cost of a single function call by itself is not meaningful for
comparing two versions of CPython, or CPython to PyPy. It's also very
hard to measure the cost of a function call when you are using a JIT
compiler which is able to inline the code into the caller... So I
moved all these "micro benchmarks" to a dedicated Git repository:
https://github.com/vstinner/pymicrobench

Sometimes, I add new micro benchmarks when I work on one specific
micro optimization.
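
For example, a minimal function call micro benchmark can be written
with the pyperf module (spelled "perf" at the time of this thread);
the benchmark name and the empty function below are just illustrative:

    import pyperf

    def func():
        # Empty function: we only measure the cost of the call itself.
        pass

    runner = pyperf.Runner()
    # bench_func() calls func() in a loop, spawns multiple worker
    # processes and reports the mean time per call.
    runner.bench_func('call_empty_func', func)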

But more generally, I suggest not running micro benchmarks and
avoiding micro optimizations :-)
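
For the "perf timeit" stability question below: in my experience,
tuning the system and collecting more values helps. A sketch, assuming
a recent pyperf; the timed statement is only a placeholder:

    # Tune the system for benchmarking (CPU frequency, turbo mode,
    # etc.):
    python3 -m pyperf system tune

    # --rigorous spawns more worker processes and collects more
    # values, which usually gives a more stable result:
    python3 -m pyperf timeit --rigorous -s "f = lambda: None" "f()"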

Victor

2018-07-10 0:20 GMT+02:00 Jeroen Demeyer <j.deme...@ugent.be>:
> Here is an initial version of a micro-benchmark for C function calling:
>
> https://github.com/jdemeyer/callbench
>
> I don't have results yet, since I'm struggling to find the right options
> for "perf timeit" to get a stable result. If somebody knows how to do
> this, help is welcome.
>
>
> Jeroen.