Roundup Robot added the comment:
New changeset 72a5ac909c7a by Richard Oudkerk in branch 'default':
Issue #18999: Make multiprocessing use context objects.
http://hg.python.org/cpython/rev/72a5ac909c7a
--
nosy: +python-dev
___
Python tracker
Lars Buitinck added the comment:
> BTW, the context objects are singletons.
I haven't read all of your patch yet, but does this mean a forkserver will be started regardless of whether it is later used?
That would be a good thing, since starting the fork server after reading in large data sets
Richard Oudkerk added the comment:
> I haven't read all of your patch yet, but does this mean a forkserver
> will be started regardless of whether it is later used?
No, it is started on demand. But since it is started using
_posixsubprocess.fork_exec(), nothing is inherited from the main process.
Lars Buitinck added the comment:
Ok, great.
http://bugs.python.org/issue18999
Richard Oudkerk added the comment:
BTW, the context objects are singletons.
I could not see a sensible way to make ctx.Process be a picklable class (rather
than a method) if there can be multiple instances of a context type. This
means that the helper processes survive until the program
Richard Oudkerk added the comment:
Attached is a patch which allows the use of separate contexts. For example:

    try:
        ctx = multiprocessing.get_context('forkserver')
    except ValueError:
        ctx = multiprocessing.get_context('spawn')

    q = ctx.Queue()
    p = ctx.Process(...)
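The fallback chain in the patch example can be rounded out into a runnable sketch. The `get_context` fallback and the context attributes follow the patch discussion; the final print is purely illustrative:

```python
import multiprocessing

# Prefer the forkserver start method (POSIX-only, so it may be
# unavailable and raise ValueError), falling back to spawn.
try:
    ctx = multiprocessing.get_context('forkserver')
except ValueError:
    ctx = multiprocessing.get_context('spawn')

# Objects created from the context are bound to its start method,
# independently of the process-global default.
q = ctx.Queue()
print(ctx.get_start_method())
```

On Windows this prints `spawn`; on most POSIX systems it prints `forkserver`.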
Lars Buitinck added the comment:
Ok. Do you (or jnoller?) have time to review my proposed patch, at least before
3.4 is released? I didn't see it in the release schedule, so it's probably not
planned soon, but I wouldn't want the API to change *again* in 3.5.
--
Richard Oudkerk added the comment:
I'll review the patch. (According to http://www.python.org/dev/peps/pep-0429/
feature freeze is expected in late November, so there is not too much of a rush.)
Changes by Lars Buitinck larsm...@gmail.com:
--
nosy: +jnoller
Lars Buitinck added the comment:
I don't really see the benefit of a context manager over an argument. It's a
power user feature anyway, and context managers (at least to me) signal cleanup
actions, rather than construction options.
Richard Oudkerk added the comment:
By context I did not really mean a context manager. I just meant an object
(possibly a singleton or module) which implements the same interface as
multiprocessing.
(However, it may be a good idea to also make it a context manager whose
__enter__() method
Olivier Grisel added the comment:
The process pool executor [1] from the concurrent futures API would be suitable
to explicitly start and stop the helper process for the `forkserver` mode.
[1]
http://docs.python.org/3.4/library/concurrent.futures.html#concurrent.futures.ProcessPoolExecutor
Olivier Grisel added the comment:
Richard Oudkerk: thanks for the clarification, that makes sense. I don't have
the time either in the coming month, maybe later.
Richard Oudkerk added the comment:
There are lots of things that behave differently depending on the currently set
start method: Lock(), Semaphore(), Queue(), Value(), ... It is not just when
creating a Process or Pool that you need to know the start method.
Passing a context or start_method
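The point generalizes beyond Process and Pool. As a hedged sketch (using the context API as it landed in 3.4), the same constructors hang off a context object, so each primitive is tied to an explicit start method:

```python
import multiprocessing

# Lock(), Semaphore(), Queue(), Value(), ... are all methods of a
# context object rather than only module-level functions, so they
# pick up that context's start method, not the global one.
ctx = multiprocessing.get_context('spawn')
lock = ctx.Lock()
sem = ctx.Semaphore(2)
val = ctx.Value('i', 41)
with lock:
    val.value += 1
print(val.value)  # prints 42
```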
Olivier Grisel added the comment:
Maybe it would be better to have separate contexts for each start method.
That way joblib could use the forkserver context without interfering with the
rest of the user's program.
Yes in general it would be great if libraries could customize the
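A minimal check of that isolation property, written against the `get_start_method(allow_none=...)` signature that eventually shipped in 3.4 (an assumption relative to the patch under discussion):

```python
import multiprocessing

# Fetching a library-local context must not fix or change the
# process-global default start method.
before = multiprocessing.get_start_method(allow_none=True)
ctx = multiprocessing.get_context('spawn')
after = multiprocessing.get_start_method(allow_none=True)
assert before == after          # global default untouched
print(ctx.get_start_method())   # prints: spawn
```

This is exactly what lets a library such as joblib use a forkserver or spawn context internally without affecting the rest of the user's program.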
Changes by Lars Buitinck larsm...@gmail.com:
--
title: Allow multiple calls to multiprocessing.set_start_method -> Robustness issues in multiprocessing.{get,set}_start_method
Olivier Grisel added the comment:
Related question: is there any good reason that would prevent passing a custom
`start_method` kwarg to the `Pool` constructor to make it use an alternative
`Popen` instance (that is, an instance different from the
`multiprocessing._Popen` singleton)?
This
Richard Oudkerk added the comment:
With your patch, I think if you call get_start_method() without later calling
set_start_method() then the helper process(es) will never be started.
With the current code, popen.Popen() automatically starts the helper processes
if they have not already been started.
Changes by Lars Buitinck larsm...@gmail.com:
Removed file: http://bugs.python.org/file31721/mp_getset_start_method.patch
Lars Buitinck added the comment:
Cleaned up the patch.
--
Added file: http://bugs.python.org/file31722/mp_getset_start_method.patch
Lars Buitinck added the comment:
In my patched version, the private popen.get_start_method gets a kwarg
set_if_needed=True. popen.Popen calls that as before, so its behavior should
not change, while the public get_start_method sets the kwarg to False.
I realise now that this has the side
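For reference, the side effect under discussion survives into the API that shipped in 3.4, where the switch is spelled `allow_none` rather than the patch's `set_if_needed` (so the names below are the released ones, not the patch's):

```python
import multiprocessing

# With allow_none=True the query is side-effect free and may return
# None; a plain get_start_method() call fixes the platform default
# as a side effect, which is the behavior debated above.
method = multiprocessing.get_start_method()  # implicitly sets default
assert method == multiprocessing.get_start_method(allow_none=True)
print(method)  # e.g. 'fork', 'spawn' or 'forkserver'
```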
Richard Oudkerk added the comment:
> In my patched version, the private popen.get_start_method gets a kwarg
> set_if_needed=True. popen.Popen calls that as before, so its behavior
> should not change, while the public get_start_method sets the kwarg to
> False.
My mistake.