[issue18174] Make regrtest with --huntrleaks check for fd leaks
Richard Oudkerk added the comment: I can't remember why I did not use fstat() -- probably it did not occur to me. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue18174 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
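The fd-leak check under discussion can be sketched by probing descriptors with os.fstat(); this is an illustrative sketch (the helper name open_fds is made up here), not regrtest's actual --huntrleaks implementation:

```python
import os

def open_fds(max_fd=256):
    """Return the set of file descriptors currently open, probed with os.fstat()."""
    fds = set()
    for fd in range(max_fd):
        try:
            os.fstat(fd)   # raises OSError for descriptors that are not open
        except OSError:
            continue
        fds.add(fd)
    return fds

before = open_fds()
f = open(os.devnull)       # deliberately "leak" an fd for the demo
after = open_fds()
print(sorted(after - before))   # the fd of the file we just opened
f.close()
```

A leak detector would snapshot the set before and after each test run and report any descriptors that appear without being closed.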
[issue14953] Reimplement subset of multiprocessing.sharedctypes using memoryview
Richard Oudkerk added the comment: Updated version of the patch. Still needs docs. -- Added file: http://bugs.python.org/file35902/memoryview-array-value.patch
[issue10850] inconsistent behavior concerning multiprocessing.manager.BaseManager._Server
Changes by Richard Oudkerk shibt...@gmail.com: assignee: -> sbt
[issue9248] multiprocessing.pool: Proposal: waitforslot
Richard Oudkerk added the comment: Since no new features are being added to Python 2, this would be a Python 3 only feature. I think for Python 3 it is better to concentrate on developing concurrent.futures rather than multiprocessing.Pool.
[issue21779] test_multiprocessing_spawn fails when ran with -Werror
Changes by Richard Oudkerk shibt...@gmail.com: assignee: -> sbt
[issue21664] multiprocessing leaks temporary directories pymp-xxx
Changes by Richard Oudkerk shibt...@gmail.com: assignee: -> sbt
[issue20147] multiprocessing.Queue.get() raises queue.Empty exception even if an item is available
Changes by Richard Oudkerk shibt...@gmail.com: assignee: -> sbt
[issue21372] multiprocessing.util.register_after_fork inconsistency
Richard Oudkerk added the comment: register_after_fork() is intentionally undocumented and for internal use. It is only run when starting a new process using the fork start method, whether on Windows or not -- the "fork" in its name is a hint. resolution: -> not a bug; stage: -> resolved; status: open -> closed
[issue1191964] asynchronous Subprocess
Richard Oudkerk added the comment: If you use the short timeouts to make the wait interruptible, then you can use WaitForMultipleObjects() (which automatically waits on an extra event object) instead of WaitForSingleObject().
[issue1191964] asynchronous Subprocess
Richard Oudkerk added the comment: I added some comments. Your problem with lost data may be caused by the fact that you call ov.cancel() and expect ov.pending to tell you whether the write has succeeded or will succeed. Instead you should use ov.getresult() and expect either success or an aborted error.
[issue1191964] asynchronous Subprocess
Richard Oudkerk added the comment: Can you explain why you write in 512 byte chunks? Writing in one chunk should not cause a deadlock.
[issue21162] code in multiprocessing.pool freeze if inside some code from scikit-learn (and probably liblinear) executed on ubuntu 12.04 64 Bit
Richard Oudkerk added the comment: I would guess that the problem is simply that LogisticRegression objects are not picklable. Does the problem still occur if you do not use freeze?
[issue21162] code in multiprocessing.pool freeze if inside some code from scikit-learn (and probably liblinear) executed on ubuntu 12.04 64 Bit
Richard Oudkerk added the comment: Ah, I misunderstood: you meant that it freezes/hangs, not that you used a freeze tool.
[issue21162] code in multiprocessing.pool freeze if inside some code from scikit-learn (and probably liblinear) executed on ubuntu 12.04 64 Bit
Richard Oudkerk added the comment: Could you try pickling and unpickling the result of func():

    import cPickle
    data = cPickle.dumps(func([1, 2, 3]), -1)
    print cPickle.loads(data)
[issue1191964] asynchronous Subprocess
Richard Oudkerk added the comment: I would recommend using _overlapped instead of _winapi. I intend to move multiprocessing over in future. Also note that you can do nonblocking reads by starting an overlapped read and then cancelling it immediately if it is still incomplete. You will need to recheck whether it completed anyway, because of a race.
[issue21116] Failure to create multiprocessing shared arrays larger than 50% of memory size under linux
Richard Oudkerk added the comment: Using truncate() to zero extend is not really portable: it is only guaranteed on XSI-compliant POSIX systems. Also, the FreeBSD man page for mmap() has the following warning:

    WARNING! Extending a file with ftruncate(2), thus creating a big hole, and
    then filling the hole by modifying a shared mmap() can lead to severe file
    fragmentation. In order to avoid such fragmentation you should always
    pre-allocate the file's backing store by write()ing zero's into the newly
    extended area prior to modifying the area via your mmap(). The
    fragmentation problem is especially sensitive to MAP_NOSYNC pages, because
    pages may be flushed to disk in a totally random order.
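Following the man page's advice, a portable way to pre-allocate a file's backing store before mmap()ing it is to write zeros explicitly instead of relying on truncate()-extension. A minimal sketch (sizes chosen arbitrarily for the demo):

```python
import mmap
import os
import tempfile

SIZE = 1 << 20  # 1 MiB

fd, path = tempfile.mkstemp()
try:
    # Pre-allocate by writing zeros in chunks rather than ftruncate()-extending,
    # so the backing store really exists before the file is mapped.
    with os.fdopen(fd, "wb") as f:
        chunk = b"\0" * 65536
        remaining = SIZE
        while remaining:
            n = min(remaining, len(chunk))
            f.write(chunk[:n])
            remaining -= n
    # Now map the fully-allocated file and modify it through the mapping.
    with open(path, "r+b") as f:
        with mmap.mmap(f.fileno(), SIZE) as m:
            m[0:5] = b"hello"
            print(m[0:5])   # b'hello'
finally:
    os.unlink(path)
```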
[issue1191964] asynchronous Subprocess
Richard Oudkerk added the comment: Using asyncio and the IOCP event loop, it is not necessary to use threads. (Windows may use worker threads for overlapped IO, but that is hidden from Python.) See https://code.google.com/p/tulip/source/browse/examples/child_process.py for vaguely expect-like interaction with a child python process which works on Windows. It writes commands to stdin, and reads results/tracebacks from stdout/stderr. Of course, it is also possible to use overlapped IO directly.
[issue21078] multiprocessing.managers.BaseManager.__init__'s serializer argument is not documented
Richard Oudkerk added the comment: No, the argument will not go away now. However, I don't much like the API, which is perhaps why I did not get round to documenting it. It does have tests. Currently 'xmlrpclib' is the only supported alternative, but JSON support could be added quite easily.
[issue20990] pyflakes: undefined names, get_context() and main(), in multiprocessing
Richard Oudkerk added the comment: Testing is_forking() requires cx_freeze or something similar, so it really cannot go in the test suite. I have tested it manually (after spending too long trying to get cx_freeze to work with a source build). It should be noted that on Unix, freezing is currently only compatible with the default 'fork' start method. resolution: -> fixed; stage: -> committed/rejected; status: open -> closed; type: -> behavior
[issue7503] multiprocessing AuthenticationError digest sent was rejected when pickling proxy
Richard Oudkerk added the comment: For reasons we all know, unpickling unauthenticated data received over TCP is very risky. Sending an unencrypted authentication key (as part of a pickle) over TCP would make the authentication useless. When a proxy is pickled, the authkey is deliberately dropped. When the proxy is unpickled, the authkey used for the reconstructed proxy is current_process().authkey. So you can fix the example by setting current_process().authkey to match the one used by the manager:

    import multiprocessing
    from multiprocessing import managers
    import pickle

    class MyManager(managers.SyncManager):
        pass

    def client():
        mgr = MyManager(address=('localhost', 2288), authkey='12345')
        mgr.connect()
        l = mgr.list()
        multiprocessing.current_process().authkey = '12345'   # --- HERE
        l = pickle.loads(pickle.dumps(l))

    def server():
        mgr = MyManager(address=('', 2288), authkey='12345')
        mgr.get_server().serve_forever()

    server = multiprocessing.Process(target=server)
    client = multiprocessing.Process(target=client)
    server.start()
    client.start()
    client.join()
    server.terminate()
    server.join()

In practice all processes using the manager should have current_process().authkey set to the same value. I don't claim that multiprocessing supports distributed computing very well, but as far as I can see, things are working as intended.
[issue20633] SystemError: Parent module 'multiprocessing' not loaded, cannot perform relative import
Changes by Richard Oudkerk shibt...@gmail.com: resolution: -> fixed; stage: -> committed/rejected; status: open -> closed
[issue20980] In multiprocessing.pool, ExceptionWithTraceback should derive from Exception
Changes by Richard Oudkerk shibt...@gmail.com: resolution: -> fixed; stage: test needed -> committed/rejected; status: open -> closed
[issue20990] pyflakes: undefined names, get_context() and main(), in multiprocessing
Changes by Richard Oudkerk shibt...@gmail.com: assignee: -> sbt
[issue20980] In multiprocessing.pool, ExceptionWithTraceback should derive from Exception
Richard Oudkerk added the comment: We should only wrap the exception with ExceptionWithTraceback in the process case, where it will be pickled and then unpickled. assignee: -> sbt
[issue20854] multiprocessing.managers.Server: problem with returning proxy of registered object
Richard Oudkerk added the comment: I am not sure method_to_typeid and create_method were really intended to be public -- they are only used by Pool proxies. You can maybe work around the problem by registering a second typeid without specifying a callable, which can then be used in method_to_typeid:

    import multiprocessing.managers

    class MyClass(object):
        def __init__(self):
            self._children = {}
        def get_child(self, i):
            return self._children.setdefault(i, type(self)())
        def __repr__(self):
            return 'MyClass %r' % self._children

    class MyManager(multiprocessing.managers.BaseManager):
        pass

    MyManager.register('MyClass', MyClass,
                       method_to_typeid={'get_child': '_MyClass'})
    MyManager.register('_MyClass',
                       method_to_typeid={'get_child': '_MyClass'},
                       create_method=False)

    if __name__ == '__main__':
        m = MyManager()
        m.start()
        try:
            a = m.MyClass()
            b = a.get_child(1)
            c = b.get_child(2)
            d = c.get_child(3)
            print a    # MyClass {1: MyClass {2: MyClass {3: MyClass {}}}}
        finally:
            m.shutdown()
[issue20633] SystemError: Parent module 'multiprocessing' not loaded, cannot perform relative import
Changes by Richard Oudkerk shibt...@gmail.com: assignee: -> sbt
[issue7503] multiprocessing AuthenticationError digest sent was rejected when pickling proxy
Changes by Richard Oudkerk shibt...@gmail.com: assignee: -> sbt
[issue20660] Starting a second multiprocessing.Manager causes INCREF on all object created by the first one.
Richard Oudkerk added the comment:

> Thanks Richard. The set_start_method() call will affect any process started from that time on? Is it possible to change idea at some point in the future?

You can use different start methods in the same program by creating different contexts:

    spawn_ctx = multiprocessing.get_context('spawn')
    manager = spawn_ctx.Manager()

> Anyway, I cannot upgrade right now. Would it be an option to subclass BaseProxy and override the _after_fork method so it will do nothing upon forking?

That would probably mean that proxy objects could not be inherited by *any* sub-process. (You would then need some other way of sharing access between processes, and of managing the lifetime of the shared object.)
[issue20660] Starting a second multiprocessing.Manager causes INCREF on all object created by the first one.
Richard Oudkerk added the comment: On Unix, using the fork start method (which was the only option till 3.4), every sub process will incref every shared object for which its parent has a reference. This is deliberate, because there is not really any way to know which shared objects a subprocess might use. (On Windows, where only things pickled as part of the process object are inherited by the child process, we can know exactly which shared objects the child process should incref.) Typical programs will only have a single manager (or a very small number) but may have a large number of normal processes (which will also do the increfing). I do not think that this is worth trying to fix, particularly as it can cause compatibility problems. For 3.4 you can use the spawn or forkserver start methods instead.

    import multiprocessing, logging

    objs = []

    def newman(n=50):
        m = multiprocessing.Manager()
        print('created')
        for i in range(n):
            objs.append(m.Value('i', i))
        return m

    def foo():
        pass

    if __name__ == '__main__':
        ## Try uncommenting next line with Python 3.4
        # multiprocessing.set_start_method('spawn')
        multiprocessing.log_to_stderr(logging.DEBUG)
        print(' first man')
        m1 = newman()
        print(' starting foo')
        p = multiprocessing.Process(target=foo)
        p.start()
        p.join()
[issue20540] Python 3.3/3.4 regression in multiprocessing manager ?
Richard Oudkerk added the comment: LGTM
[issue20540] Python 3.3/3.4 regression in multiprocessing manager ?
Richard Oudkerk added the comment: BTW, I see little difference between 3.2 and the unpatched default branch on MacOSX:

    $ py-32/release/python.exe ~/Downloads/test_manager.py
    0.0007331371307373047
    8.20159912109375e-05
    9.417533874511719e-05
    8.082389831542969e-05
    7.796287536621094e-05
    0.00011587142944335938
    0.00011396408081054688
    7.891654968261719e-05
    8.392333984375e-05
    7.605552673339844e-05
    10
    $ time py-default/release/python.exe ~/Downloads/test_manager.py
    0.0007359981536865234
    0.0001289844512939453
    0.00018715858459472656
    0.00015497207641601562
    0.00012087821960449219
    0.00013399124145507812
    0.00011992454528808594
    0.00011587142944335938
    0.00010895729064941406
    0.00017499923706054688
    10
[issue20527] multiprocessing.Queue deadlocks after “reader” process death
Richard Oudkerk added the comment: This is expected. Killing processes which use shared locks is never going to end well. Even without the lock deadlock, the data in the pipe would be liable to be corrupted if a process is killed while putting or getting from a queue. If you want to be able to reliably recover when a related process dies, then you would be better off using one-to-one pipes for communication -- although that would probably mean substantial redesign. resolution: -> wont fix; stage: -> committed/rejected; status: open -> closed
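A sketch of the one-to-one pipe approach: when each peer has its own Pipe, a dead peer shows up as EOFError on recv() rather than deadlocking a shared queue. This is a minimal illustration, not the reporter's code; it pins the 'fork' start method so the sketch works without a __main__ guard (with 'spawn' you would need one):

```python
import multiprocessing as mp
import os

def worker(conn):
    conn.send("partial result")
    os._exit(1)              # simulate the process dying abruptly mid-stream

ctx = mp.get_context("fork")
parent_end, child_end = ctx.Pipe()
p = ctx.Process(target=worker, args=(child_end,))
p.start()
child_end.close()            # close our copy so EOF becomes detectable
messages = []
try:
    while True:
        messages.append(parent_end.recv())
except EOFError:
    pass                     # the peer died: its end of the pipe was closed
p.join()
print(messages)              # ['partial result'] -- data sent before death survives
```

Unlike a shared multiprocessing.Queue, no lock is left held when the peer dies, so the survivor can recover cleanly.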
[issue20414] Python 3.4 has two Overlapped types
Richard Oudkerk added the comment: _overlapped is linked against the socket library, whereas _winapi is not, so _winapi can be bundled in with python3.dll. I did intend to switch multiprocessing over to using _overlapped, but I did not get round to it. Since this is a private module, the names of methods do not matter too much. Note that getresult() and GetOverlappedResult() return values in different forms.
[issue20153] New-in-3.4 weakref finalizer doc section is already out of date.
Richard Oudkerk added the comment: The following from the docs is wrong: "... module globals are no longer forced to None during interpreter shutdown." Actually, in 3.4 module globals *sometimes* get forced to None during interpreter shutdown, so a __del__ method can still raise an exception.
[issue20114] Sporadic failure of test_semaphore_tracker() of test_multiprocessing_forkserver on FreeBSD 9 buildbot
Richard Oudkerk added the comment: It is probably harmless then. I don't think increasing the timeout is necessary -- the multiprocessing tests already take a long time.
[issue20114] Sporadic failure of test_semaphore_tracker() of test_multiprocessing_forkserver on FreeBSD 9 buildbot
Richard Oudkerk added the comment: How often has this happened? If the machine was very loaded then maybe the timeout was not enough time for the semaphore to be cleaned up by the tracker process. But I would expect 1 second to be more than ample.
[issue19946] Handle a non-importable __main__ in multiprocessing
Richard Oudkerk added the comment: On 19/12/2013 10:00 pm, Nick Coghlan wrote:

> I think that needs to be fixed on the multiprocessing side rather than just in the tests - we shouldn't create a concrete context for a start method that isn't going to work on that platform. Finding that kind of discrepancy was part of my rationale for basing the skips on the available contexts (although my main motivation was simplicity). There may also be docs implications in describing which methods are supported on different platforms (although I haven't looked at how that is currently documented).

If by concrete context you mean _concrete_contexts['forkserver'], then that is supposed to be private. If you write

    ctx = multiprocessing.get_context('forkserver')

then this will raise ValueError if the forkserver method is not available. You can also use

    'forkserver' in multiprocessing.get_all_start_methods()

to check if it is available.
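Both checks described above can be demonstrated in a few lines; a minimal sketch:

```python
import multiprocessing

# get_all_start_methods() lists the start methods usable on this platform.
methods = multiprocessing.get_all_start_methods()
print(methods)   # e.g. ['fork', 'spawn', 'forkserver'] on Linux

# get_context() raises ValueError for an unknown or unavailable method.
try:
    multiprocessing.get_context("no-such-method")
except ValueError as exc:
    print("rejected:", exc)

# Guard optional use of forkserver on its availability.
if "forkserver" in methods:
    ctx = multiprocessing.get_context("forkserver")
    print(ctx.get_start_method())
```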
[issue19946] Handle a non-importable __main__ in multiprocessing
Richard Oudkerk added the comment: Thanks for your hard work Nick!
[issue19946] Handle a non-importable __main__ in multiprocessing
Richard Oudkerk added the comment: So there are really two situations:

1) The __main__ module *should not* be imported. This is the case if you use __main__.py in a package, or if you use nose to call test_main(). This should really be detected in get_preparation_data() in the parent process, so that import_main_path() does not get called in the child process.

2) The __main__ module *should* be imported, but it does not have a .py extension.
[issue19946] Handle a non-importable __main__ in multiprocessing
Richard Oudkerk added the comment: I appear to be somehow getting child processes where __main__.__file__ is set, but __main__.__spec__ is not. That seems to be true for the __main__ module even when multiprocessing is not involved. Running a file /tmp/foo.py containing

    import sys
    print(sys.modules['__main__'].__spec__, sys.modules['__main__'].__file__)

I get output

    None /tmp/foo.py

I am confused by why you would ever want to load by module name rather than file name. What problem would that fix? If the idea is just to support importing a main module without a .py extension, isn't __file__ good enough?
[issue19946] Have multiprocessing raise ImportError when spawning a process that can't find the main module
Richard Oudkerk added the comment: I guess this is a case where we should not be trying to import the main module. The code for determining the path of the main module (if any) is rather crufty. What is sys.modules['__main__'] and sys.modules['__main__'].__file__ if you run under nose?
[issue19864] multiprocessing Proxy docs need locking semantics explained
Richard Oudkerk added the comment: From what I remember, a proxy method will be thread/process-safe if the referent's corresponding method is thread-safe. It should certainly be documented that the exposed methods of a proxied object should be thread-safe.
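One consequence worth spelling out: even when each individual proxied method call is safe, a compound read-modify-write through a proxy is not atomic and needs its own lock. A minimal sketch (not from the issue; it pins the 'fork' start method so it runs without a __main__ guard):

```python
import multiprocessing

def bump(d, lock, n):
    for _ in range(n):
        # d['x'] += 1 via a proxy is a get followed by a set, so it is
        # NOT atomic across processes; the manager Lock serializes it.
        with lock:
            d["x"] = d["x"] + 1

ctx = multiprocessing.get_context("fork")
with ctx.Manager() as m:
    d = m.dict(x=0)
    lock = m.Lock()
    procs = [ctx.Process(target=bump, args=(d, lock, 50)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(d["x"])   # 200 -- no updates lost while the lock is held
```

Without the lock, interleaved get/set pairs from different processes could silently drop increments.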
[issue18885] handle EINTR in the stdlib
Richard Oudkerk added the comment:

> I've always had an implicit understanding that calls with timeouts may, for whatever reason, return sooner than requested (or later!), and the most careful approach is to re-check the clock again.

I've always had the implicit understanding that if I use an *infinite* timeout then the call will not timeout.
[issue19740] test_asyncio problems on 32-bit Windows
Richard Oudkerk added the comment: Could you try this patch? keywords: +patch; Added file: http://bugs.python.org/file32822/wait-for-handle.patch
[issue19740] test_asyncio problems on 32-bit Windows
Richard Oudkerk added the comment:

> Possibly related: ...

That looks unrelated, since it does not involve wait_for_handle(). Unfortunately test_utils.run_briefly() offers few guarantees when using the IOCP event loop.
[issue19740] test_asyncio problems on 32-bit Windows
Richard Oudkerk added the comment: It would be nice to try this on another Vista machine - the WinXP, Win7, Windows Server 2003 and Windows Server 2008 buildbots don't seem to show this failure. It looks as though the TimerOrWaitFired argument passed to the callback registered with RegisterWaitForSingleObject() is wrong. This might be fixable by doing an additional zero-timeout wait with WaitForSingleObject() to test whether the handle is signalled. (But this will prevent us from using wait_for_handle() with things like locks and semaphores, where a successful wait changes the state of the object represented by the handle.)
[issue19599] Failure of test_async_timeout() of test_multiprocessing_spawn: TimeoutError not raised
Changes by Richard Oudkerk shibt...@gmail.com: resolution: -> fixed; status: open -> closed
[issue19564] test_context() of test_multiprocessing_spawn hangs on x86 Gentoo Non-Debug 3.x buildbot
Richard Oudkerk added the comment: I don't think the patch to _test_multiprocessing will work. It defines cls._Popen, but I don't see how that would be used by cls.Pool to start the processes. I will have a think about a fix.
[issue13090] test_multiprocessing: memory leaks
Richard Oudkerk added the comment:

> If the result of os.read() was stored in a Python daemon thread, the memory should be released since the following changeset. Can someone check if this issue still exists?

If a daemon thread is killed while it is blocking on os.read(), then the bytes object used as the read buffer will never be decrefed. nosy: +sbt
[issue16998] Lost updates with multiprocessing.Value
Changes by Richard Oudkerk shibt...@gmail.com: resolution: -> fixed; stage: -> committed/rejected; status: open -> closed; type: behavior -> (none)
[issue19338] multiprocessing: sys.exit() from a child with a non-int exit code exits with 0
Richard Oudkerk added the comment: Thanks for the patches. Fixed in 7aabbe919f55, 11cafbe6519f. resolution: -> fixed; stage: -> committed/rejected; status: open -> closed
[issue19599] Failure of test_async_timeout() of test_multiprocessing_spawn: TimeoutError not raised
Richard Oudkerk added the comment: Hopefully the applied change will fix the failure (or at least make it much less likely). resolution: -> fixed; stage: -> committed/rejected; status: open -> closed; type: -> behavior
[issue19575] subprocess.Popen with multiple threads: Redirected stdout/stderr files still open after process close
Richard Oudkerk added the comment: Note that on Windows if you redirect the standard streams then *all* inheritable handles are inherited by the child process. Presumably the handle for the f_w file object (and/or a duplicate of it) created in one thread is accidentally leaked to the other child process. This means that shutil.rmtree() cannot succeed until *both* child processes have exited. PEP 446 might fix this, although there will still be a race condition. -- nosy: +sbt ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19575 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19565] test_multiprocessing_spawn: RuntimeError and assertion error on windows xp buildbot
Richard Oudkerk added the comment: If you have a pending overlapped operation then the associated buffer should not be deallocated until that operation is complete, or else you are liable to get a crash or memory corruption. Unfortunately WinXP provides no reliable way to cancel a pending operation -- there is CancelIo() but that just cancels operations started by the *current thread* on a handle. Vista introduced CancelIoEx() which allows cancellation of a specific overlapped op. These warnings happen in the deallocator because the buffer has to be freed. For Vista and later versions of Windows these warnings are presumably unnecessary since CancelIoEx() is used. For WinXP the simplest thing may be to check if Py_Finalize is non-null and if so suppress the warning (possibly leaking the buffer since we are exiting anyway). -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19565 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19565] test_multiprocessing_spawn: RuntimeError and assertion error on windows xp buildbot
Richard Oudkerk added the comment:

> As close() on regular files, I would prefer to call explicitly cancel() to control exactly when the overlapped operation is cancelled.

If you use daemon threads then you have no guarantee that the thread will ever get a chance to explicitly call cancel().

> Can't you fix multiprocessing and/or the unit test to ensure that all overlapped operations are completed or cancelled?

On Vista and later, yes, this is done in the deallocator using CancelIoEx(), although there is still a warning. On XP it is not possible because CancelIo() has to be called from the same thread which started the operation.

I think these warnings come from daemon threads used by manager processes. When the manager process exits some background threads may be blocked doing an overlapped read. (It might be possible to wake up blocked threads by setting the event handle returned by _PyOS_SigintEvent(). That might allow the use of non-daemon threads.) -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19565 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19565] test_multiprocessing_spawn: RuntimeError and assertion error on windows xp buildbot
Richard Oudkerk added the comment: I think the attached patch should fix it. Note that with the patch the RuntimeError can probably only occur on Windows XP. Shall I apply it? -- keywords: +patch Added file: http://bugs.python.org/file32597/dealloc-runtimeerror.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19565 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19565] test_multiprocessing_spawn: RuntimeError and assertion error on windows xp buildbot
Richard Oudkerk added the comment: On 13/11/2013 3:07pm, STINNER Victor wrote:

> > On Vista and later, yes, this is done in the deallocator using CancelIoEx(), although there is still a warning.
>
> I don't understand. The warning is emitted because an operation is not done nor cancelled. Why not cancel explicitly active operations in manager.shutdown()? It is not possible?

shutdown() will be run in a different thread to the ones which started the overlapped ops, so it cannot stop them using CancelIo(). And anyway, it would mean writing a separate implementation for Windows -- the current manager implementation contains no platform specific code. Originally overlapped IO was not used on Windows. But, to get rid of polling, Antoine opened the can of worms that is overlapped IO :-)

> > ... I think these warnings come from daemon threads used by manager processes. When the manager process exits some background threads may be blocked doing an overlapped read.
>
> I don't know overlapped operations. Are they not asynchronous? What do you mean by "blocked doing an overlapped read"?

They are asynchronous but the implementation uses a hidden thread pool. If a pool thread tries to read from/write to a buffer that has been deallocated, then we can get a crash. By "blocked doing an overlapped read" I mean that a daemon thread is waiting for a line like data = conn.recv() to complete. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19565 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue15440] multiprocess fails to re-raise exception which has mandatory arguments
Richard Oudkerk added the comment: This was fixed for 3.3 in #1692335. The issue of backporting to 2.7 is discussed in #17296. -- resolution: - duplicate status: open - closed superseder: - Cannot unpickle classes derived from 'Exception' type: crash - behavior ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue15440 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue5527] multiprocessing won't work with Tkinter (under Linux)
Richard Oudkerk added the comment: So hopefully the bug should disappear entirely in future releases of tcl, but for now you can work around it by building tcl without threads, calling exec in between the fork and any use of tkinter in the child process, or not importing tkinter until after the fork. In 3.4 you can do this by using multiprocessing.set_start_method('spawn') -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5527 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
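The 3.4 workaround can be sketched as follows; `work` is a hypothetical stand-in for whatever the program runs in the child:

```python
import multiprocessing

def work():
    # A 'spawn' child starts a fresh interpreter, so no Tcl/Tk state
    # from the parent is ever inherited across a fork.
    return 42

if __name__ == '__main__':
    # set_start_method() is only available from Python 3.4 onwards.
    multiprocessing.set_start_method('spawn')
    with multiprocessing.Pool(1) as pool:
        print(pool.apply(work))
```

The `if __name__ == '__main__'` guard is required with the 'spawn' method, since the child re-imports the main module.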
[issue17874] ProcessPoolExecutor in interactive shell doesn't work in Windows
Richard Oudkerk added the comment: Fixed by #11161. -- resolution: - fixed stage: - committed/rejected status: open - closed superseder: - futures.ProcessPoolExecutor hangs ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17874 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue16500] Add an 'atfork' module
Richard Oudkerk added the comment: Given PEP 446 (fds are now CLOEXEC by default) I prepared an updated patch where the fork lock is undocumented and subprocess no longer uses the fork lock. (I did not want to encourage the mixing of threads with fork() without exec() by exposing the fork lock just for that case.) But I found that a test for the leaking of fds to a subprocess started with close_fds=False was somewhat regularly failing because the creation of CLOEXEC pipe fds is not atomic -- the GIL is not held while calling pipe(). It seems that PEP 446 does not really make the fork lock redundant for processes started using fork+exec. So now I don't know whether the fork lock should be made public. Thoughts? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16500 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
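The PEP 446 behaviour referred to above can be observed directly; this is a minimal sketch, assuming Python 3.4 or later:

```python
import os

# Since PEP 446 (Python 3.4), os.pipe() returns non-inheritable fds,
# created atomically with pipe2(O_CLOEXEC) where the platform supports
# it, so the fds cannot leak to a concurrently fork+exec'd child.
r, w = os.pipe()
assert os.get_inheritable(r) is False
assert os.get_inheritable(w) is False
os.close(r)
os.close(w)
```

The remaining race described in the message concerns platforms where pipe2() is unavailable and the CLOEXEC flag must be set in a second, non-atomic step.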
[issue16500] Add an 'atfork' module
Richard Oudkerk added the comment: It is a recent kernel and does support pipe2(). After some debugging it appears that a pipe handle created in Popen.__init__() was being leaked to a forked process, preventing Popen.__init__() from completing before the forked process did. Previously the test passed because Popen.__init__() acquired the fork lock. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16500 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19478] Add ability to prefix posix semaphore names created by multiprocessing module
Richard Oudkerk added the comment: Although it is undocumented, in Python 3.4 you can control the prefix used by doing multiprocessing.current_process()._config['semprefix'] = 'myprefix' in the main process at the beginning of the program. Unfortunately, this will make the full prefix '/myprefix', so it will still start with '/'. Changing this for 3.4 would be easy, but I don't know if it is a good idea to change 2.7. Note that your suggested change can cause a buffer overflow. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19478 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
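A sketch of the undocumented override described above; `_config` is a private detail of Python 3.4's multiprocessing and may differ in other versions, so this is illustrative only:

```python
import multiprocessing

# _config is a private dict on the current process object; the
# 'semprefix' key controls the prefix of POSIX semaphore names
# created by multiprocessing (default '/mp' on Python 3.4+).
config = multiprocessing.current_process()._config
old_prefix = config.get('semprefix')
config['semprefix'] = 'myprefix'   # new semaphores would get this prefix
config['semprefix'] = old_prefix   # restore the default for this demo
```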
[issue19432] test_multiprocessing_fork failures
Richard Oudkerk added the comment: This is a test of threading.Barrier rather than anything implemented directly by multiprocessing. Tests which involve timeouts tend to be a bit flaky. Increasing the length of timeouts usually helps, but makes the tests take even longer. How often have you seen this failure? Did it happen on a buildbot? Was there a lot of other activity on the system at the time? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19432 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19425] multiprocessing.Pool.map hangs if pickling argument raises an exception
Changes by Richard Oudkerk shibt...@gmail.com: -- resolution: - fixed stage: needs patch - committed/rejected status: open - closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19425 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19227] test_multiprocessing_xxx hangs under Gentoo buildbots
Richard Oudkerk added the comment: Won't using a prepare handler mean that the parent and child processes will use the same seed until one or other of them forks again? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19227 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19293] test_asyncio hanging for 1 hour
Richard Oudkerk added the comment: Would it make sense to use socketpair() instead of pipe() on AIX? We could check for the bug directly rather than checking specifically for AIX. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19293 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
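The substitution suggested above is straightforward because a socketpair behaves much like a bidirectional pipe; a minimal sketch:

```python
import socket

# socketpair() gives two connected, bidirectional sockets that can
# stand in for a pipe; unlike a pipe, read-readiness on either end is
# reported reliably on platforms such as AIX.
a, b = socket.socketpair()
a.sendall(b'ping')
assert b.recv(4) == b'ping'
a.close()
b.close()
```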
[issue16175] Add I/O Completion Ports wrapper
Richard Oudkerk added the comment:

> Is this patch still of relevance for asyncio?

No, the _overlapped extension contains the IOCP stuff. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16175 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue16500] Add an 'atfork' module
Richard Oudkerk added the comment:

> Richard, do you have time to get your patch ready for 3.4?

Yes. But we don't seem to have consensus on how to handle exceptions. The main question is whether a failed prepare callback should prevent the fork from happening, or just be printed. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16500 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue16500] Add an 'atfork' module
Richard Oudkerk added the comment:

> - now that FDs are non-inheritable by default, fork locks around subprocess and multiprocessing shouldn't be necessary anymore? What other use cases does the fork-lock have?

CLOEXEC fds will still be inherited by forked children.

> - the current implementation keeps hard-references to the functions passed: so if one isn't careful, you can end up easily with a lot of objects kept alive just because of those references, which can be a problem

True, but you could make the same complaint about atexit.register(). One can fairly easily create something like weakref.finalize which uses atfork but is smart about not creating hard refs. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16500 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
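The weakref idea can be sketched using os.register_at_fork(), the API that eventually landed in Python 3.7 (POSIX only) in place of the proposed atfork module; `Resource` and `register_reinit` are hypothetical names:

```python
import os
import weakref

class Resource:
    def reinit(self):
        pass

def register_reinit(obj):
    # Hold only a weak reference, so registering a fork handler does
    # not keep obj alive (the concern raised in the message above).
    ref = weakref.ref(obj)
    def handler():
        target = ref()           # None once obj has been collected
        if target is not None:
            target.reinit()
    os.register_at_fork(after_in_child=handler)
```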
[issue19293] test_asyncio hanging for 1 hour (AIX version, hangs in test_subprocess_interactive)
Richard Oudkerk added the comment: The following uses socketpair() instead of pipe() for stdin, and works for me on Linux:

diff -r 7d94e4a68b91 asyncio/unix_events.py
--- a/asyncio/unix_events.py    Sun Oct 20 20:25:04 2013 -0700
+++ b/asyncio/unix_events.py    Mon Oct 21 17:15:19 2013 +0100
@@ -272,8 +272,6 @@
         self._loop = loop
         self._pipe = pipe
         self._fileno = pipe.fileno()
-        if not stat.S_ISFIFO(os.fstat(self._fileno).st_mode):
-            raise ValueError("Pipe transport is for pipes only.")
         _set_nonblocking(self._fileno)
         self._protocol = protocol
         self._buffer = []
@@ -442,9 +440,16 @@
         self._finished = False
         self._returncode = None
+        if stdin == subprocess.PIPE:
+            stdin_w, stdin_r = socket.socketpair()
+        else:
+            stdin_w = stdin_r = None
         self._proc = subprocess.Popen(
-            args, shell=shell, stdin=stdin, stdout=stdout, stderr=stderr,
+            args, shell=shell, stdin=stdin_r, stdout=stdout, stderr=stderr,
             universal_newlines=False, bufsize=bufsize, **kwargs)
+        if stdin_r is not None:
+            stdin_r.close()
+        self._proc.stdin = open(stdin_w.detach(), 'wb', buffering=bufsize)
         self._extra['subprocess'] = self._proc

     def close(self):

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19293 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19293] test_asyncio hanging for 1 hour
Richard Oudkerk added the comment: I guess we'll have to write platform-dependent code and make this an optional feature. (Essentially, on platforms like AIX, for a write-pipe, connection_lost() won't be called unless you try to write some more bytes to it.) If we are not capturing stdout/stderr then we could leak the write end of a pipe to the child. When the read end becomes readable we can call the process protocol's connection_lost(). Or we could just call connection_lost() when reaping the pid. -- nosy: +sbt ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19293 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue18999] Support different contexts in multiprocessing
Richard Oudkerk added the comment: I guess this should be clarified in the docs, but multiprocessing.pool.Pool is a *class* whose constructor takes a context argument, whereas multiprocessing.Pool() is a *bound method* of the default context. (In previous versions multiprocessing.Pool was a *function*.) The only reason you might need the context argument is if you have subclassed multiprocessing.pool.Pool.

>>> from multiprocessing import pool, get_context
>>> forkserver = get_context('forkserver')
>>> p = forkserver.Pool()
>>> q = pool.Pool(context=forkserver)
>>> p, q
(<multiprocessing.pool.Pool object at 0xb71f3eec>, <multiprocessing.pool.Pool object at 0xb6edb06c>)

I suppose we could just make the bound methods accept a context argument which (if not None) is used instead of self. -- status: closed - open ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue18999 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue10015] Creating a multiprocess.pool.ThreadPool from a child thread blows up.
Changes by Richard Oudkerk shibt...@gmail.com: -- nosy: +sbt ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10015 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19227] test_multiprocessing_xxx hangs under Gentoo buildbots
Richard Oudkerk added the comment: I can reproduce the problem on the Non-Debug Gentoo buildbot using only os.fork() and os.kill(pid, signal.SIGTERM). See http://hg.python.org/cpython/file/9853d3a20849/Lib/test/_test_multiprocessing.py#l339 To investigate further I think strace and/or gdb will need to be installed on that box. P.S. Note that the Debug Gentoo buildbot is always failing at the configure stage with No space left on device. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19227 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19227] test_multiprocessing_xxx hangs under Gentoo buildbots
Richard Oudkerk added the comment:

> I fixed the out of space last night. (Someday I'll get around to figuring out which test it is that is leaving a bunch of data around when it fails, but I haven't yet).

It looks like on the Debug Gentoo buildbot configure and clean are failing. http://buildbot.python.org/all/builders/x86%20Gentoo%203.x/builds/5090/steps/configure/logs/stdio http://buildbot.python.org/all/builders/x86%20Gentoo%203.x/builds/5090/steps/clean/logs/stdio

> I've installed strace and gdb on the bots, please send me your public key and I'll set up an ssh login for you.

Thanks. For now I will just try starting gdb using subprocess on the custom buildbot. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19227 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19227] test_multiprocessing_xxx hangs under Gentoo buildbots
Richard Oudkerk added the comment: I finally have a gdb backtrace of a stuck child (started using os.fork() not multiprocessing):

#1  0xb76194da in ?? () from /lib/libc.so.6
#2  0xb6d59755 in ?? () from /var/lib/buildslave/custom.murray-gentoo/build/build/lib.linux-i686-3.4-pydebug/_ssl.cpython-34dm.so
#3  0xb6d628f0 in _fini () from /var/lib/buildslave/custom.murray-gentoo/build/build/lib.linux-i686-3.4-pydebug/_ssl.cpython-34dm.so
#4  0xb770859b in ?? () from /lib/ld-linux.so.2
#5  0xb75502c7 in ?? () from /lib/libc.so.6
#6  0xb7550330 in exit () from /lib/libc.so.6
#7  0xb558f244 in ?? () from /lib/libncursesw.so.5
#8  0xb76e9f38 in fork () from /lib/libpthread.so.0
#9  0x08085f89 in posix_fork (self=0xb74da374, noargs=0x0) at ./Modules/posixmodule.c:5315
...

It looks as though fork() is indirectly calling something in _ssl.cpython-34dm.so which is not completing. So I guess this is pthread_atfork() related. But the child argument passed to pthread_atfork() should be NULL, so I don't really understand this:

static int
PySSL_RAND_atfork(void)
{
    static int registered = 0;
    int retval;

    if (registered)
        return 0;

    retval = pthread_atfork(NULL,                     /* prepare */
                            PySSL_RAND_atfork_parent, /* parent */
                            NULL);                    /* child */
    if (retval != 0) {
        PyErr_SetFromErrno(PyExc_OSError);
        return -1;
    }
    registered = 1;
    return 0;
}

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19227 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19227] test_multiprocessing_xxx hangs under Gentoo buildbots
Richard Oudkerk added the comment: Actually, according to strace the call which blocks is futex(0xb7839454, FUTEX_WAIT_PRIVATE, 1, NULL -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19227 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue18999] Support different contexts in multiprocessing
Changes by Richard Oudkerk shibt...@gmail.com: -- resolution: - fixed stage: - committed/rejected status: open - pending title: Robustness issues in multiprocessing.{get,set}_start_method - Support different contexts in multiprocessing type: behavior - enhancement ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue18999 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue18999] Support different contexts in multiprocessing
Changes by Richard Oudkerk shibt...@gmail.com: -- status: open - closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue18999 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19262] Add asyncio (tulip, PEP 3156) to stdlib
Richard Oudkerk added the comment: On 16/10/2013 8:14pm, Guido van Rossum wrote:

> (2) I get this message -- what does it mean and should I care?
> 2 tests altered the execution environment: test_asyncio.test_base_events test_asyncio.test_futures

Perhaps threads from the ThreadExecutor are still alive when those tests finish. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19262 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19262] Add asyncio (tulip, PEP 3156) to stdlib
Richard Oudkerk added the comment: I think at module level you can do

if sys.platform != 'win32':
    raise unittest.SkipTest('Windows only')

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19262 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19246] freeing then reallocating lots of memory fails under Windows
Richard Oudkerk added the comment: After running ugly_hack(), trying to malloc a largeish block (1MB) fails:

int main(void)
{
    int first;
    void *ptr;

    ptr = malloc(1024*1024);
    assert(ptr != NULL);    /* succeeds */
    free(ptr);

    first = ugly_hack();

    ptr = malloc(1024*1024);
    assert(ptr != NULL);    /* fails */
    free(ptr);

    return 0;
}

-- nosy: +sbt ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19246 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue18999] Robustness issues in multiprocessing.{get, set}_start_method
Richard Oudkerk added the comment:

> I haven't read all of your patch yet, but does this mean a forkserver will be started regardless of whether it is later used?

No, it is started on demand. But since it is started using _posixsubprocess.fork_exec(), nothing is inherited from the main process. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue18999 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue12413] make faulthandler dump traceback of child processes
Changes by Richard Oudkerk shibt...@gmail.com: -- nosy: +sbt ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12413 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19227] test_multiprocessing_xxx hangs under Gentoo buildbots
Richard Oudkerk added the comment:

> I'm already confused by the fact that the test is named test_multiprocessing_spawn and the error is coming from a module named popen_fork...

popen_spawn_posix.Popen is a subclass of popen_fork.Popen. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19227 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue18999] Robustness issues in multiprocessing.{get, set}_start_method
Richard Oudkerk added the comment: BTW, the context objects are singletons. I could not see a sensible way to make ctx.Process be a picklable class (rather than a method) if there can be multiple instances of a context type. This means that the helper processes survive until the program closes down. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue18999 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue18999] Robustness issues in multiprocessing.{get, set}_start_method
Richard Oudkerk added the comment: Attached is a patch which allows the use of separate contexts. For example

try:
    ctx = multiprocessing.get_context('forkserver')
except ValueError:
    ctx = multiprocessing.get_context('spawn')

q = ctx.Queue()
p = ctx.Process(target=foo, args=(q,))
p.start()
...

Also, get_start_method(allow_none=True) will return None if the start method has not yet been fixed. -- Added file: http://bugs.python.org/file32034/context.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue18999 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19185] Allow multiprocessing Pool initializer to return values
Richard Oudkerk added the comment:

> These functions are compliant with POSIX standards and the return values are actually useful, they return the previously set masks and handlers, often are ignored but in complex cases it's good to know their previous state.

Yes. But my point was that somebody might have used such a function as the initializer argument. The proposed change would break a program which does

with Pool(initializer=os.nice, initargs=(incr,)) as p:
    ...

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19185 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19185] Allow multiprocessing Pool initializer to return values
Richard Oudkerk added the comment: I think "misuse" is an exaggeration. Various functions change some state and return a value that is usually ignored, e.g. os.umask(), signal.signal().

> Global variables usage is a pattern which might lead to code errors and many developers discourage from following it.

What sort of code errors? This really seems a stylistic point. Maybe such developers would be happier using class methods and class variables rather than functions and global variables. Out of interest, what do you usually do in your initializer functions? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19185 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
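The global-variable pattern under discussion looks like this; ThreadPool is used only to keep the sketch runnable without forking (multiprocessing.Pool accepts the same arguments), and the names are illustrative:

```python
from multiprocessing.pool import ThreadPool

_state = None   # per-worker state set by the initializer

def init_worker(value):
    global _state
    _state = value          # the return value (None) is ignored by the pool

def use_state(_):
    return _state

with ThreadPool(2, initializer=init_worker, initargs=(42,)) as pool:
    results = pool.map(use_state, range(4))
```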
[issue19185] Allow multiprocessing Pool initializer to return values
Richard Oudkerk added the comment:

> the previous initializers were not supposed to return any value

Previously, any returned value would have been ignored. But the documentation does not say that the function has to return None. So I don't think we can assume there is no compatibility issue. -- nosy: +sbt ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19185 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19158] BoundedSemaphore.release() subject to races
Richard Oudkerk added the comment: Is BoundedSemaphore really supposed to be robust in the face of too many releases, or does it just provide a sanity check? I think that releasing a bounded semaphore too many times is a programmer error, and the exception is just a debugging aid for the programmer. Raising an exception 99% of the time should be sufficient for that purpose. -- nosy: +sbt ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19158 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
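The sanity check being discussed can be seen with threading.BoundedSemaphore, which raises ValueError when released more times than it was acquired:

```python
import threading

sem = threading.BoundedSemaphore(1)
sem.acquire()
sem.release()              # back to the initial value: fine
over_released = False
try:
    sem.release()          # exceeds the initial value
except ValueError:
    over_released = True
```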
[issue19021] AttributeError in Popen.__del__
Richard Oudkerk added the comment:

> Well, perhaps we can special-case builtins not to be wiped at shutdown. However, there is another problem here in that the Popen object survives until the builtins module is wiped. This should be investigated too. Maybe it is because it uses the evil resuscitate-in-__del__ trick.

I presume that if the child process survives during shutdown, then the popen object is guaranteed to survive too. We could get rid of the trick:

* On Windows __del__ is unneeded since we don't need to reap zombie processes.
* On Unix __del__ could just add self._pid (rather than self) to the list _active. _cleanup() would then use os.waitpid() to check the pids in _active.

The hardest thing about making such a change is that test_subprocess currently uses _active. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19021 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19066] os.execv fails with spaced names on Windows
Richard Oudkerk added the comment: See http://bugs.python.org/issue436259. This is a problem with Windows' implementation of spawn*() and exec*(). Just use subprocess instead, which gets this stuff right. Note that on Windows exec*() is useless: it just starts a subprocess and exits the current process. You can use subprocess to get the same effect. -- nosy: +sbt ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19066 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19066] os.execv fails with spaced names on Windows
Richard Oudkerk added the comment:

> I am not sure that I should see there. There is discussion of DOS, which is not supported, also some complain about Windows execv function, which deprecated since VC++ 2005 (which I hope also not supported). Can you be more specific?

_spawn*() and _exec*() are implemented by the C runtime library. spawn*() and execv() are (deprecated) aliases. The first message is about someone's attempt to work around the problems with embedded spaces and double quotes by writing a function to escape each argument. He says he had a partial success. Surely this is basic reading comprehension?

> > Note that on Windows exec*() is useless: it just starts a subprocess and exits the current process. You can use subprocess to get the same effect.
>
> Are you describing Windows implementation of _exec() http://msdn.microsoft.com/en-us/library/431x4c1w.aspx or current Python implementation?

The Windows implementation of _exec().

> > Just use subprocess instead which gets this stuff right.
>
> subprocess doesn't replace os.exec*, see issue19060

On Unix subprocess does not replace os.exec*(). That is because on Unix exec*() replaces the current process with a new process with the *same pid*. subprocess cannot do this. But on Windows os.exec*() just starts an independent process with a *different pid* and exits the current process. The line

    os.execv(path, args)

is equivalent to

    os.spawnv(os.P_NOWAIT, path, args)
    os._exit(0)

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19066 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19124] os.execv executes in background on Windows
Richard Oudkerk added the comment:

As I wrote in http://bugs.python.org/issue19066, on Windows execv() is equivalent to

    os.spawnv(os.P_NOWAIT, ...)
    os._exit(0)

This means that control is returned to cmd when the child process *starts* (and afterwards you have cmd and the child connected to the same console). On Unix control is returned to the shell only once the child process *ends*.

Although it might be less memory efficient, you would actually get something closer to Unix behaviour by replacing os.execv(...) with

    sts = os.spawnv(os.P_WAIT, ...)
    os._exit(sts)

or

    sts = subprocess.call(...)
    os._exit(sts)

This is why I said that execv() is useless on Windows and that you should just use subprocess instead.

--
nosy: +sbt
___
Python tracker
rep...@bugs.python.org
http://bugs.python.org/issue19124
___
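A minimal, self-contained version of the subprocess-based replacement suggested here (the helper name is illustrative; the point is "wait for the child, then exit with its status", which is closer to Unix exec semantics):

```python
import os
import subprocess
import sys

def exec_like(argv):
    """Closer-to-Unix substitute for os.execv on Windows, as suggested
    in the comment above: wait for the child to finish, then exit the
    current process with the child's status."""
    sts = subprocess.call(argv)  # blocks until the child exits
    os._exit(sts)                # propagate the child's exit status

# subprocess.call itself returns the child's exit status:
sts = subprocess.call([sys.executable, "-c", "import sys; sys.exit(3)"])
```

Unlike real Unix exec*(), the child still has a different pid, but a shell waiting on the parent now regains control only when the child ends.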
[issue19066] os.execv fails with spaced names on Windows
Richard Oudkerk added the comment:

> It is said that execv() is deprecated, but it is not said that it is an
> alias of _execv(). It is only said that _execv() is C++ compliant.
> http://msdn.microsoft.com/en-us/library/ms235416(v=vs.90).aspx

Microsoft seems to have decided that all functions in the C runtime which do not begin with an underscore and are not included in the ANSI C standard should be deprecated. This includes all the fd functions like read(), write(), open(), close(), ... There is no difference in behaviour between these and the underscore versions.

> ... Don't we have such a function already? I don't see the problem in
> quoting the string.

No one seems to know how to write such a quoting function.

> ... Does it start the child process in the foreground or in the
> background? Did you compile the examples on the
> http://msdn.microsoft.com/en-us/library/431x4c1w.aspx page with new VC++
> to check? I don't possess VC++ 10, so I can't do this myself. And I
> believe that compiling with GCC may lead to different results.

There is no such thing as a background task in Windows. A process is either attached to a console, or it isn't. When you use execv() to start a process, it inherits the parent's console.

On Unix try replacing os.execv(...) by

    os.spawnv(os.P_NOWAIT, ...)
    os._exit(0)

and you will probably get the same behaviour where the shell and the child process both behave as conflicting foreground tasks.

> ... I don't mind if it runs the child process with a different pid, but
> why does it run the new process in the background? The Unix version
> doesn't do this.

The point is that the shell waits for its child process to finish by using waitpid() (or something similar) on the child's pid. If the child uses execv() then the child is replaced by a grandchild process with the same pid. From the point of view of the shell, the child and the grandchild are the same process, and waitpid() will not stop until the grandchild terminates.

This issue should be closed: just use subprocess instead.
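On the quoting question: the stdlib does ship Windows command-line quoting logic in subprocess.list2cmdline(), which subprocess uses internally when it builds the command line for CreateProcess, so callers of subprocess never need to quote by hand. A small demonstration (the paths and arguments are made up for illustration):

```python
import subprocess

# Arguments containing spaces get double-quoted; plain arguments are
# left untouched, matching the MSVC runtime's argument-parsing rules.
cmdline = subprocess.list2cmdline(
    ["C:\\Program Files\\app.exe", "a b", "plain"])
print(cmdline)  # → "C:\Program Files\app.exe" "a b" plain
```

This is exactly the escaping that os.exec*/os.spawn* on Windows fail to apply, which is why spaced paths break there but work through subprocess.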
--
resolution: -> duplicate
stage: test needed -> committed/rejected
status: open -> closed
___
Python tracker
rep...@bugs.python.org
http://bugs.python.org/issue19066
___
[issue19124] os.execv executes in background on Windows
Richard Oudkerk added the comment:

> Where did you get that info? MSDN is silent about that.
> http://msdn.microsoft.com/en-us/library/886kc0as(v=vs.90).aspx

Reading the source code for the C runtime included with Visual Studio.

> The problem is not in what I should or should not use. The problem is
> that existing scripts that work on Unix and use os.execv() to launch
> interactive scripts behave in an absolutely weird and unusable way on
> Windows. I previously experienced this with SCons, but couldn't work out
> the reason. Now I experience this with basic Android development tools
> and dug down to this. It is clearly a big mess from this side of Windows.

As said before (more than once), os.exec*() is useless on Windows: just use subprocess.

--
resolution: -> rejected
stage: -> committed/rejected
status: open -> closed
___
Python tracker
rep...@bugs.python.org
http://bugs.python.org/issue19124
___
[issue19066] os.execv fails with spaced names on Windows
Richard Oudkerk added the comment:

> Hey. This ticket is about os.execv failing on spaced paths on Windows.
> It is not a duplicate of issue19124.

It is a duplicate of #436259, "[Windows] exec*/spawn* problem with spaces in args".

--
___
Python tracker
rep...@bugs.python.org
http://bugs.python.org/issue19066
___
[issue19124] os.execv executes in background on Windows
Richard Oudkerk added the comment:

> Visual Studio 10+? Is it available somewhere for reference?

Old versions of the relevant files are here:

http://www.controllogics.com/software/VB6/VC98/CRT/SRC/EXECVE.C
http://www.controllogics.com/software/VB6/VC98/CRT/SRC/SPAWNVE.C
http://www.controllogics.com/software/VB6/VC98/CRT/SRC/DOSPAWN.C

--
___
Python tracker
rep...@bugs.python.org
http://bugs.python.org/issue19124
___