On 3/26/18 8:46 AM, bartc wrote:
On 26/03/2018 13:30, Richard Damon wrote:
On 3/26/18 6:31 AM, bartc wrote:

The purpose was to establish how the overhead of such int("...") conversions compares with the cost of actual arithmetic on the resulting numbers.
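Roughly the sort of thing I mean, as a minimal sketch using timeit (the string length and repeat counts here are only illustrative, not the actual test):

from timeit import timeit

digits = "123456789" * 10          # a 90-digit decimal string, purely illustrative
n = int(digits)

# Cost of converting the string to an int, versus arithmetic on the result.
conv  = timeit('int(digits)', globals=globals(), number=100_000)
arith = timeit('n * n',       globals=globals(), number=100_000)

print("int(str) conversion:", conv)
print("bignum multiply:    ", arith)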

Of course, if this was done in C with a version that had built-in bignum ints, or with an aggressive enough optimizer (or a Python that did a similar level of optimization), this function would just test the speed of starting the program, as it actually does nothing and can be optimized away.

Which is a nuisance. /You/ are trying to measure how long it takes to perform a task; the compiler is demonstrating how long it takes to /not/ perform it! So it can be very unhelpful.

Hence my testing with CPython 3.6 rather than with something like PyPy, which can give meaningless results, because real code doesn't repeatedly execute the same pointless fragment millions of times. But a real context is too complicated to set up.
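One partial workaround, though it's by no means foolproof, is to keep a data dependency on the results and use them afterwards, so the work can't simply be proven dead. A sketch:

import time

def bench(count=1_000_000):
    total = 0
    start = time.perf_counter()
    for i in range(count):
        # Accumulating and later printing the result keeps it "live",
        # so the work is harder (not impossible) to optimise away.
        total += int(str(i))
    elapsed = time.perf_counter() - start
    print(total, elapsed)

bench()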
The bigger issue is that these sorts of micro-measurements aren't actually that good at measuring real quantitative performance costs. They can often give qualitative indications, but with the way modern computers work, the processing environment is extremely important to performance, so these isolated measurements can be misleading. The problem is that if you measure operation a, then measure operation b, and assume that doing a then b in a loop will take a time of a+b, you will quite often be significantly wrong, as cache behaviour can drastically affect things. So you really need to do performance testing as part of a practical-sized exercise, not a micro one, in order to get a real measurement.
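As a rough illustration of that point (the operations here are arbitrary, and the exact numbers will vary from machine to machine):

from timeit import timeit

setup = "data = list(range(10_000))"

t_a  = timeit("sum(data)",               setup=setup, number=10_000)
t_b  = timeit("sorted(data)",            setup=setup, number=10_000)
t_ab = timeit("sum(data); sorted(data)", setup=setup, number=10_000)

# t_ab will not, in general, equal t_a + t_b: cache contents, branch
# prediction and allocator state all differ when the work is combined.
print(t_a, t_b, t_ab, t_a + t_b)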

Yes, something like this can be used to measure the base time to do something, but the real question should be whether that time is significant compared to the other things the program is doing. Making a 200x improvement on code that takes 1% of the execution time saves you 0.995%, which is not normally worth it unless your program is currently running at 100.004% of the allowed (or acceptable) timing, if acceptable timing can even be defined that precisely.
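The arithmetic there is just Amdahl's law; a quick back-of-the-envelope check:

# Amdahl's law: overall speedup when a fraction p of the runtime
# is made s times faster.
def overall_speedup(p, s):
    return 1.0 / ((1.0 - p) + p / s)

p, s = 0.01, 200                  # 1% of the time, improved 200x
speedup = overall_speedup(p, s)
saved = 1.0 - 1.0 / speedup
print(speedup, saved)             # ~1.01005x overall, ~0.995% of the time saved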

I'm usually concerned with optimisation in a more general sense than a specific program.

Such as with a library function (where you don't know how it's going to be used); or with a particular byte-code in an interpreter (where you don't know how often it will be encountered); or with a generated code sequence in a compiler.

But even a 200x improvement on something that takes 1% of the time can be worthwhile if it is just one of dozens of such improvements. Sometimes these small, incremental changes in performance can add up.

And even if it were just 1%, the aggregate savings across one million users of the program can be substantial, even if the individuals won't appreciate it. 1% extra battery life might be a handy five minutes, for example.

Yes, but if you find where you are really spending your time, a similar effort may give significantly larger improvements.
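The usual way to find that in Python is to profile first, for example with the cProfile module from the standard library; a sketch (the profiled function here is just a stand-in):

import cProfile
import pstats

def work():
    # Stand-in for the real program; profile whatever you actually run.
    return sum(int(str(i)) for i in range(200_000))

cProfile.run("work()", "profile.out")
# Show the ten entries that dominate the cumulative time, so effort
# goes where the time is actually being spent.
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(10)

Or simply run the whole script under "python -m cProfile -s cumulative yourscript.py".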

--
Richard Damon

