On Mon, May 4, 2020 at 11:30 AM Eric Snow wrote:
> Further feedback is welcome, though I feel like the PR is ready (or
> very close to ready) for pronouncement. Thanks again to all.
FYI, after consulting with the steering council I've decided to change
the target release to 3.10, when we expect
Hi,
I wrote a "per-interpreter GIL" proof-of-concept: each interpreter
gets its own GIL. I chose to benchmark a factorial function in pure
Python to simulate a CPU-bound workload. I wrote the simplest possible
function just to be able to run a benchmark, to check whether PEP 554
would be relevant.
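The benchmark function itself wasn't posted in this excerpt; a minimal sketch of what such a pure-Python, CPU-bound workload could look like (the function body and the timing harness here are my assumptions, not the actual PoC code):

```python
import time

def factorial(n):
    # Pure-Python loop: CPU-bound and holds the GIL for the whole run,
    # which is exactly what makes it a good stress test for a
    # per-interpreter GIL.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

if __name__ == "__main__":
    start = time.perf_counter()
    factorial(50_000)  # workload size mentioned later in the thread
    elapsed = time.perf_counter() - start
    print(f"one run of factorial(50000): {elapsed:.3f}s")
```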
Just to be clear, this is executing the **same** workload in parallel, **not**
trying to parallelize factorial. E.g., the 8-CPU run calculates 50,000!
8 separate times, rather than calculating 50,000! once by spreading the work
across 8 CPUs. The measurement still shows parallel work.
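For comparison, the subprocess baseline discussed here amounts to launching N identical jobs at once. A hedged sketch with the stdlib `multiprocessing` module (the pool shape and workload size are my assumptions; the thread's actual harness wasn't posted):

```python
import time
from multiprocessing import Pool

def factorial(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def run(n_procs, workload=50_000):
    # Every worker computes the *same* factorial; nothing is split up.
    # With true parallelism, wall time should stay roughly flat as
    # n_procs grows (up to the number of available CPUs).
    start = time.perf_counter()
    with Pool(n_procs) as pool:
        pool.map(factorial, [workload] * n_procs)
    return time.perf_counter() - start

if __name__ == "__main__":
    for n in (1, 2, 4, 8):
        print(f"{n} procs: {run(n):.3f}s")
```

A per-interpreter GIL would aim for the same scaling curve inside one process, using subinterpreters instead of worker processes.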
This sounds like a significant milestone!
Is there some kind of optimized communication possible yet between
subinterpreters? (Otherwise I still worry that it's no better than
subprocesses -- and it could be worse because when one subinterpreter
experiences a hard crash or runs out of memory, all
I'm seeing a drop in performance of both multiprocess- and subinterpreter-
based runs in the 8-CPU case, where performance drops by about half
despite there being enough logical CPUs, while the other cases scale quite
well. Is there some issue with Python multiprocessing/subinterpreters on
the same lo
On Tue, May 5, 2020 at 3:47 PM Guido van Rossum wrote:
>
> This sounds like a significant milestone!
>
> Is there some kind of optimized communication possible yet between
> subinterpreters? (Otherwise I still worry that it's no better than
> subprocesses -- and it could be worse because when on