On 26/03/2018 16:31, Chris Angelico wrote:
On Mon, Mar 26, 2018 at 11:46 PM, bartc <b...@freeuk.com> wrote:
On 26/03/2018 13:30, Richard Damon wrote:

On 3/26/18 6:31 AM, bartc wrote:


The purpose was to establish how the overhead of such int("...")
conversions compares with that of actual arithmetic on the resulting numbers.
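A minimal sketch (my own, not from the original post) of one way to make that
comparison with the stdlib timeit module, timing the string-to-int conversion
against an addition on the resulting bignum; the 50-digit value is an
arbitrary example:

```python
# Compare the cost of int("...") parsing with arithmetic on the result.
import timeit

digits = "9" * 50           # a 50-digit string, so int() yields a bignum
n = int(digits)             # pre-converted value for the arithmetic test

parse = timeit.timeit(lambda: int(digits), number=100_000)
add = timeit.timeit(lambda: n + n, number=100_000)

print(f"int(str): {parse:.4f}s  add: {add:.4f}s")
```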

Of course if this were done in C with a version that had built-in bignum
ints and an aggressive enough optimizer (or a Python that did a similar level
of optimization), this function would just test the speed of starting the
program, as it actually does nothing and can be optimized away.


Which is a nuisance. /You/ are trying to measure how long it takes to
perform a task; the compiler is demonstrating how long it takes to /not/
perform it! So it can be very unhelpful.

Yeah. It's so annoying that compilers work so hard to make your code
fast, when all you want to do is measure exactly how slow it is.
Compiler authors are stupid.

In some ways, yes they are. If they were in charge of Formula 1 pre-race speed trials, all cars would complete the circuit in 0.00 seconds with an average speed of infinity mph.

Because they can see that they all start and end at the same point so there is no reason to actually go around the track.

And in actual computer benchmarks, the compilers are too stupid to realise that this is benchmark code rather than real code doing a useful task, and the whole thing is optimised away as pointless.

Optimisation is a very bad idea on microbenchmarks when it makes the results misleading.
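One common guard against this (my own sketch, not something proposed in the
thread): make the benchmarked work observable, so an optimizing
implementation cannot legally discard it. Here each result is accumulated
into a checksum that is returned and printed:

```python
# Keep the work's result live so dead-code elimination cannot remove it.
import timeit

def bench():
    total = 0
    for i in range(1000):
        total += int(str(i))  # the work under test: round-trip via str
    return total              # returning the sum keeps the work observable

t = timeit.timeit(bench, number=100)
sink = bench()                # consume the value once more and print it
print(f"{t:.4f}s, checksum={sink}")
```

CPython itself does not perform this kind of elimination, but JIT-compiled
implementations (and C compilers, for the equivalent C benchmark) can, which
is why consuming the result is a standard microbenchmarking habit.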


--
bartc
--
https://mail.python.org/mailman/listinfo/python-list
