There are the usual concurrency problems of "read a value, change it, store it
back without checking whether it already changed". The only thing special
about lifecycle happens at refcount 0, which should not happen when more than
one interpreter has a reference.
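That read-modify-write hazard can be shown deterministically in plain Python (no C refcounting involved; the names here are illustrative, not CPython internals):

```python
# Deterministic illustration of the "read a value, change it, store it
# back" hazard: two logical actors each read the same starting value,
# increment their private copy, then store it back, so one update is lost.
shared = {"refcount": 1}

# Both actors read before either writes back.
a_copy = shared["refcount"]        # A reads 1
b_copy = shared["refcount"]        # B reads 1
shared["refcount"] = a_copy + 1    # A stores 2
shared["refcount"] = b_copy + 1    # B also stores 2 -- A's update is lost

# Two increments happened, but the count only advanced by one.
print(shared["refcount"])  # 2, not 3
```

With real threads the interleaving is merely possible rather than guaranteed, which is what makes such bugs hard to reproduce.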
Similarly, C code can mess
On 6/17/2020 6:03 PM, Jeff Allen wrote:
On 17/06/2020 19:28, Eric V. Smith wrote:
On 6/17/2020 12:07 PM, Jeff Allen wrote:
If (1) interpreters manage the life-cycle of objects, and (2) a race
condition arises when the life-cycle or state of an object is
accessed by the interpreter that did not create it, and (3) an object
will
On 6/17/2020 12:07 PM, Jeff Allen wrote:
On 12/06/2020 12:55, Eric V. Smith wrote:
On 6/11/2020 6:59 AM, Mark Shannon wrote:
Different interpreters need to operate in their own isolated address
space, or there will be horrible race conditions.
Regardless of whether that separation is done in software or hardware,
it has to be done.
On Sat, Jun 13, 2020 at 3:50 AM Edwin Zimmerman wrote:
>
> My previous timings were slightly inaccurate, as they compared spawning
> processes on Windows to forking on Linux. Also, I changed my timing code to
> run all processes synchronously, to avoid hitting resource limits.
>
> Updated Windows (Windows 7 this time, on a four core processor):
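A minimal harness along those lines might look like this (a sketch, not Edwin's actual code: it uses `subprocess` so spawn-vs-fork asymmetry doesn't apply, runs each process synchronously, and the choice of `-c pass` and the count are illustrative):

```python
import subprocess
import sys
import time

def time_process_creation(n: int) -> float:
    """Launch n Python processes one at a time and return the
    average wall-clock cost per process, in milliseconds."""
    start = time.perf_counter()
    for _ in range(n):
        # Run synchronously: each process must exit before the next
        # starts, so we never hit OS resource limits on live processes.
        subprocess.run([sys.executable, "-c", "pass"], check=True)
    elapsed = time.perf_counter() - start
    return elapsed * 1000 / n

per_process_ms = time_process_creation(5)
print(f"{per_process_ms:.1f} ms per process")
```

Running it at several values of n would show whether per-process cost stays flat or grows, which bears on Mark's question below.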
On Fri, Jun 12, 2020 at 7:19 AM Mark Shannon wrote:
> Hi Edwin,
>
> Thanks for providing some concrete numbers.
> Is it expected that creating 100 processes takes 6.3ms per process, but
> that creating 1000 processes takes 40ms per process? That's over 6 times
> as long in the latter case.
Hi Eric,
On 12/06/2020 4:17 pm, Eric Snow wrote:
On Fri, Jun 12, 2020 at 2:49 AM Mark Shannon wrote:
> The overhead largely comes from what you do with the process. The
> additional cost of starting a new interpreter is the same regardless of
> whether it is in the same process or not.
FWIW, there's more to it than that:
* there is some
Hi Steve,
On 12/06/2020 12:43 pm, Steve Dower wrote:
On 12Jun2020 1008, Paul Moore wrote:
On Fri, 12 Jun 2020 at 09:47, Mark Shannon wrote:
Starting a new process is cheap. On my machine, starting a new Python
process takes under 1ms and uses a few Mbytes.
Is that on Windows or Unix?
On 12/06/2020 10:45, Mark Shannon wrote:
On 11/06/2020 2:50 pm, Riccardo Ghetta wrote:
On 11/06/2020 12:59, Mark Shannon wrote:
If the additional resource consumption is irrelevant, what's the
objection to spinning up a new processes?
The additional resource consumption of a new python
On 6/11/2020 6:59 AM, Mark Shannon wrote:
Hi Riccardo,
On 10/06/2020 5:51 pm, Riccardo Ghetta wrote:
Hi,
as a user, the "lua use case" is just what I need at work.
I realize that for Python this is a niche case, and most users don't
need any of this, but I hope it will be useful to understand why having
multiple independent interpreters in a single process can be an
essential feature.
On 12Jun2020 1008, Paul Moore wrote:
On Fri, 12 Jun 2020 at 09:47, Mark Shannon wrote:
Starting a new process is cheap. On my machine, starting a new Python
process takes under 1ms and uses a few Mbytes.
Is that on Windows or Unix? Traditionally, process creation has been
costly on Windows, which is why threads, and in-process
In fairness, if the process is really exiting, the OS should clear that out.
Even if it is embedded, the embedding process could just release (or zero out)
the entire memory allocation. I personally like plugging those leaks, but it
does feel like putting purity over practicality.
Hello Mark,
and thanks for your suggestions. However, I'm afraid I haven't explained
our use of python well enough.
On 11/06/2020 12:59, Mark Shannon wrote:
If you need to share objects across threads, then there will be
contention, regardless of how many interpreters there are, or which
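The point about contention can be made concrete with ordinary threads (a sketch; the shared dict and counts are illustrative): every execution unit that touches a shared object must synchronize on it, and that synchronization serializes them no matter how they are organized.

```python
import threading

counter = {"value": 0}
lock = threading.Lock()

def add(n: int) -> None:
    # Every thread must take the same lock to touch the shared object,
    # so the increments are serialized -- that serialization *is* the
    # contention, however the threads are scheduled.
    for _ in range(n):
        with lock:
            counter["value"] += 1

threads = [threading.Thread(target=add, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter["value"])  # 40000: correct, but only because access was serialized
```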
On 6/10/2020 8:33 AM, Mark Shannon wrote:
Hi Petr,
On 09/06/2020 2:24 pm, Petr Viktorin wrote:
On 2020-06-05 16:32, Mark Shannon wrote:
Whether Python interpreters run sequentially or in parallel, having
them work will enable a use case I would like to see: allowing me to
call Python code
On Wed, Jun 10, 2020 at 5:37 AM Mark Shannon wrote:
> By sharing an address space the separation is maintained by trust and
> hoping that third party modules don't have too many bugs.
By definition, the use of any third-party module (or even the standard
library itself) is by trust and the hope
Hi Petr,
On 09/06/2020 2:24 pm, Petr Viktorin wrote:
On 2020-06-05 16:32, Mark Shannon wrote:
Hi,
There have been a lot of changes both to the C API and to internal
implementations to allow multiple interpreters in a single O/S process.
These changes cause backwards compatibility changes,
Hi,
I agree that embedding Python is an important use case and that we
should try to leak less memory and better isolate multiple
interpreters for this use case.
There are multiple projects to enhance code to make it work better
with multiple interpreters:
* convert C extension modules to
On Tue, Jun 9, 2020 at 10:28 PM Petr Viktorin wrote:
>
> Relatively recently, there is an effort to expose interpreter creation &
> finalization from Python code, and also to allow communication between
> them (starting with something rudimentary, sharing buffers). There is
> also a push to
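The rudimentary Python-level surface being exposed at the time was the private `_xxsubinterpreters` module (CPython 3.8-3.12; the name and API are explicitly unstable and have since changed), so a sketch has to guard the import:

```python
# Hedged sketch of creating and running code in a subinterpreter via the
# private _xxsubinterpreters module shipped with CPython 3.8-3.12.
# The module is an implementation detail, so treat its absence as normal.
try:
    import _xxsubinterpreters as subinterpreters
except ImportError:
    subinterpreters = None  # not available in this CPython build

def run_in_subinterpreter(source: str) -> bool:
    """Run source in a fresh subinterpreter; return True if it ran."""
    if subinterpreters is None:
        return False
    interp_id = subinterpreters.create()
    try:
        subinterpreters.run_string(interp_id, source)
    finally:
        subinterpreters.destroy(interp_id)
    return True

ran = run_in_subinterpreter("x = 6 * 7")
```

Note the subinterpreter's `x` is not visible to the caller; communication was limited to the rudimentary sharing mechanisms mentioned above.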
Petr, thanks for clearly stating your interests and goals for
subinterpreters. This lays to rest some of my own fears. I am still
skeptical that (even after the GIL is separated) they will enable
multi-core in ways that multiple processes couldn't handle just as well or
better, but your clear