Oscar Esteban <oeste...@stanford.edu> added the comment:

Thanks for your response.

The idea would be to enable ``subprocess.Popen`` to fork from an existing fork 
server in its fork_exec step.

The rationale: I can start a pool of n workers very early in the execution 
flow. They will have a ~350MB memory footprint at the start, and they are reset 
to that baseline after every ``maxtasksperchild`` tasks. So that is roughly the 
amount of virtual memory allocated (briefly doubled) on each fork. Pretty 
small.

Currently, since the fork is done from a process with the application's full 
Python stack loaded in memory (1.7GB in our case), an additional ~1.7GB of 
virtual memory is allocated on each fork. This could be avoided if the fork 
were done from the forkserver pool.
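For reference, the workaround that exists today with ``multiprocessing`` (as 
opposed to ``subprocess``) is to create the forkserver context before the 
application loads its large data structures. This is only a minimal sketch; 
the worker function, pool size, and ``maxtasksperchild`` value are illustrative 
placeholders, not part of the original report:

```python
import multiprocessing as mp

def task(x):
    # Placeholder worker; runs in a child forked from the fork server.
    return x * x

if __name__ == "__main__":
    # Start the fork server early, while the process footprint is still
    # small: children fork from the server, not from this (possibly
    # large) main process, so each fork only duplicates the server's VM.
    ctx = mp.get_context("forkserver")

    # maxtasksperchild is illustrative: each worker is replaced after
    # that many tasks, resetting it to the server's small footprint.
    with ctx.Pool(processes=4, maxtasksperchild=10) as pool:
        results = pool.map(task, range(8))
    print(results)
```

The limitation motivating this issue is that ``subprocess.Popen`` has no 
equivalent hook, so spawning external commands still forks from the full-sized 
parent.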

As you mention, we have been considering such a "shell" server on top of 
asyncio, so your response confirms our intuition.

I'll close this issue for now, since I agree that any investment in this 
problem should be directed toward the asyncio solution.

Please note that the proposed idea would also work for Python < 3 (as opposed 
to anything based on asyncio, which requires Python 3).

----------
resolution:  -> not a bug
stage:  -> resolved
status: open -> closed

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue35238>
_______________________________________