On Fri, 23 Feb 2018 11:00:28 +0100, Stefan Behnel wrote:

> I've even seen proper Cython benchmark code that a C compiler can fully
> analyse as static and replaces by a constant, and then get wonder
> speedups from it that would never translate to any real-world gains.

This is one of the reasons why typical benchmarks are junk. If you're not 
benchmarking the actual, real-world code you care about, you have no idea 
how the language will perform on the code that matters to you.

There is a popular meme that "Python is slow" based on the results of 
poorly-written code. And yet, in *real world code* that people actually 
use, Python's performance is not that bad -- especially when you play to 
its strengths, rather than its weaknesses, and treat it as a glue 
language for stitching together high-performance libraries written in 
high-performance languages like C, Fortran, Rust and Java.
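To make the glue-language point concrete, here is a minimal sketch 
(assuming NumPy is installed; the exact numbers will of course vary from 
machine to machine) comparing a pure-Python loop with the same work 
pushed down into a compiled library:

import timeit

import numpy as np

data = list(range(1000000))
arr = np.array(data, dtype=np.float64)

# Pure-Python sum of squares: every multiplication and addition goes
# through the interpreter.
pure = timeit.timeit(lambda: sum(x * x for x in data), number=10)

# Same sum of squares, but the inner loop runs in NumPy's compiled C code.
vectorised = timeit.timeit(lambda: np.dot(arr, arr), number=10)

print("pure Python: %.3fs   NumPy: %.3fs" % (pure, vectorised))

Played this way, the Python part is just orchestration, and the heavy 
lifting happens at compiled-code speed.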

Speaking of bad benchmarks, I recall an anecdote about a programmer who 
ran some Fortran benchmarks on various mainframes and found that one 
particular machine was millions of times faster than the others.

It turns out that the code being tested was something like this 
(translated into Python):

for i in range(1, 1000000):
    pass


and the Fortran compiler on that one machine was clever enough to treat it 
as dead code and remove it, turning the entire benchmark into a single 
no-op.
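
The point generalises: any benchmark whose body has no observable effect 
is at the mercy of the optimiser. As a rough sketch of the same trap in 
Python terms (just the loop above wrapped in timeit; timings will vary by 
machine and interpreter):

import timeit

def dead_loop():
    # The loop body does no observable work, so an optimising compiler
    # or JIT is entitled to throw the whole thing away.  CPython will
    # actually spin through it; a smarter implementation may not.
    for i in range(1, 1000000):
        pass

print(timeit.timeit(dead_loop, number=10))

Either way, the number you get tells you nothing about how real code 
will run.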

Your point that we must read benchmarks carefully is a good one.


-- 
Steve
