On 09/23/2014 07:48 AM, Steven D'Aprano wrote:
> ... alas, the CUTOVER point is likely to be machine-dependent. Take it as
> a given that inserting a fixed CUTOVER point into the source code (say,
> ``CUTOVER = 123456``) is not likely to be very effective, and dynamically
> calculating it at import time is probably too expensive to be practical.
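
For concreteness, the fixed-cutover dispatch (and the import-time
calibration being dismissed above) might look roughly like the sketch
below. The names ``algorithm_short()`` and ``algorithm_large()``, the
CUTOVER value, and the trial sizes are all invented placeholders, not
code from the thread:

    import timeit

    CUTOVER = 123456   # machine-dependent guess; the objection above

    def calculate(n):
        """Dispatch to whichever algorithm is expected to be faster."""
        if n < CUTOVER:
            return algorithm_short(n)
        return algorithm_large(n)

    def calibrate(sizes=(10**3, 10**4, 10**5, 10**6)):
        """The 'calculate it at import time' option: time both algorithms
        at a few trial sizes and return the first size where LARGE wins."""
        for n in sizes:
            t_short = timeit.timeit(lambda: algorithm_short(n), number=3)
            t_large = timeit.timeit(lambda: algorithm_large(n), number=3)
            if t_large < t_short:
                return n
        return sizes[-1]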
Steven D'Aprano writes:
> ...
> *If* Python was a different language, I would spawn two threads, one using
> SHORT and the other using LARGE, then whichever completes first, I'd just
> kill the other. Alas, this won't work because (1) the GIL ...

The GIL does not prevent this scenario. The two threads still make progress
concurrently (the interpreter keeps switching between them), so whichever
algorithm finishes first can simply tell the other one to stop. The cost is
roughly twice the runtime of the faster algorithm, which is still far better
than paying the full price of picking the wrong one.
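
A rough sketch of that race, assuming each algorithm could be rewritten to
poll a shared stop flag (Python threads cannot be killed from outside, so
cooperative cancellation stands in for "kill the other"). The function
names and the ``stop`` argument are invented for illustration:

    import threading

    def race(short_fn, large_fn, arg):
        """Run both algorithms in threads; the first one to finish wins.
        Each function is expected to check stop.is_set() inside its main
        loop and bail out early when the flag is set."""
        stop = threading.Event()
        results = {}

        def worker(name, fn):
            try:
                results[name] = fn(arg, stop)
            finally:
                stop.set()          # tell the other worker to give up

        threads = [threading.Thread(target=worker, args=("SHORT", short_fn)),
                   threading.Thread(target=worker, args=("LARGE", large_fn))]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return results              # contains at least the winner's result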
What about switching once the runtime of SHORT starts to increase by a
certain threshold, such as 2x, 4x, or 16x its last runtime? The other ideas
already proposed sound better, but I am wondering whether it would work.
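
In code, that growth-threshold idea might look something like this sketch;
``short_fn`` and ``large_fn`` are hypothetical stand-ins for the two
algorithms, and the 4x factor is only an example:

    import time

    def adaptive(values, short_fn, large_fn, factor=4.0):
        """Use SHORT until one call takes more than `factor` times the
        previous call, then switch to LARGE for the rest of the run."""
        use_short = True
        last = None
        for v in values:
            if not use_short:
                yield large_fn(v)
                continue
            start = time.perf_counter()
            result = short_fn(v)
            elapsed = time.perf_counter() - start
            yield result
            if last is not None and elapsed > factor * last:
                use_short = False   # SHORT has started to blow up
            last = elapsed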
On 2014-09-23 15:48, Steven D'Aprano wrote:
> I have a certain calculation which can be performed two radically
> different ways. With the first algorithm, let's call it SHORT, performance
> is very fast for small values of the argument, but terrible for large
> values. For the second algorithm, LARGE, performance is quite poor for
> small values, but excellent for large values.

Add a timing harness and use a test interval (N): call LARGE every Nth loop
until LARGE's timing is better than the prior SHORT run's.

Emile
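
A sketch of that harness, again with invented names (``short_fn``,
``large_fn``) and an arbitrary test interval N: every Nth iteration the
work is done by LARGE and timed, and once a LARGE call beats the most
recent SHORT timing the loop switches over for good:

    import time

    def timed(fn, arg):
        start = time.perf_counter()
        result = fn(arg)
        return result, time.perf_counter() - start

    def run(values, short_fn, large_fn, N=1000):
        switched = False
        last_short = float("inf")
        for i, v in enumerate(values):
            if switched:
                yield large_fn(v)
            elif i and i % N == 0:
                result, t_large = timed(large_fn, v)   # trial run of LARGE
                yield result
                if t_large < last_short:
                    switched = True                    # LARGE now wins; stay with it
            else:
                result, last_short = timed(short_fn, v)
                yield result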
On Wed, Sep 24, 2014 at 12:48 AM, Steven D'Aprano wrote:
> (3) SHORT starts off relatively speedy, significantly faster than LARGE
> for the first few tens of thousands of loops. I'm not talking about
> trivial micro-optimizations here, I'm talking about the difference between
> 0.1 second for SHORT ...