Steven D'Aprano wrote on 22.02.2018 at 11:59:

Thanks for sharing, Steven.

While it was already hinted at between the lines in some of the replies,
I'd like to emphasise that the combination of timeit and result caching
(memoization) in this post is misleading and the results are not meaningful.
It actually demonstrates a common mistake that easily happens when
benchmarking code.

Since timeit repeats the benchmark runs and takes the minimum, it will
*always* return the time it takes to look up the final result in the cache,
and never report the actual performance of the code that is meant to be
benchmarked here. From what I read, this was probably not intended by the
author of the post.
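To make the pitfall concrete, here is a minimal sketch (the memoized
Fibonacci function and the workload are illustrative, not taken from the
post). Because timeit repeats the statement and we take the minimum, the
"cached" measurement only times a dictionary lookup, while clearing the
cache before each call measures the actual computation:

```python
import timeit
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Deliberately naive recursive Fibonacci, memoized via lru_cache.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# After the very first call, every subsequent call is just a cache
# lookup, so the minimum over the repeats measures the cache hit,
# not the computation we meant to benchmark.
cached = min(timeit.repeat("fib(30)", globals=globals(),
                           number=1000, repeat=3)) / 1000

# To time the real work, the cache must be invalidated before each
# call -- otherwise only the first call does any computing at all.
uncached = min(timeit.repeat("fib.cache_clear(); fib(30)",
                             globals=globals(),
                             number=10, repeat=3)) / 10

print(cached < uncached)  # the cache hit is far cheaper per call
```

The same effect appears with any memoizing wrapper, not just lru_cache:
whatever the first iteration computes, all later iterations merely look up.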

I'm guilty of running into this myself more than once, and I've seen it
on many occasions. I've even seen proper Cython benchmark code that a C
compiler could fully analyse as static and replace by a constant, yielding
wondrous speedups that would never translate into any real-world gains.
These things happen, but they are mistakes. All they tell us is that
we must always be careful when evaluating benchmarks and their results, and
to take good care that the benchmarks match the intended need, which is to
help us understand and solve a real-world performance problem [1].


[1] That's also the problem in the post above and in the original
benchmarks it refers to: there is no real-world problem to be solved here.
Someone wrote slow code and then showed that one language can evaluate that
slow code faster than another. Nothing to see here, keep walking...
