[issue11618] Locks broken wrt timeouts on Windows

2011-03-20 Thread sbt
New submission from sbt shibt...@gmail.com: In thread_nt.h, when the WaitForSingleObject() call in EnterNonRecursiveMutex() fails with WAIT_TIMEOUT (or WAIT_FAILED) the mutex is left in an inconsistent state. Note that the first line of EnterNonRecursiveMutex() is the comment /* Assume

[issue11618] Locks broken wrt timeouts on Windows

2011-03-20 Thread sbt
sbt shibt...@gmail.com added the comment: First stab at a fix. Gets rid of mutex->thread_id and adds a mutex->timeouts counter. Does not try to prevent mutex->owned from overflowing. When no timeouts have occurred I don't think it changes behaviour, and it uses the same number of Interlocked

[issue11618] Locks broken wrt timeouts on Windows

2011-03-20 Thread sbt
sbt shibt...@gmail.com added the comment: Have you tried benchmarking it? Interlocked functions are *much* faster than Win32 mutex/semaphores in the uncontended case. It only doubles the time taken for a l.acquire(); l.release() loop in Python code, but at the C level it is probably 10

[issue11618] Locks broken wrt timeouts on Windows

2011-03-21 Thread sbt
sbt shibt...@gmail.com added the comment: If we are rolling our own instead of using Semaphores (as has been suggested for performance reasons) then using a Condition variable is IMHO safer than a custom solution because the correctness of that approach is so easily provable. Assuming

[issue11618] Locks broken wrt timeouts on Windows

2011-03-21 Thread sbt
sbt shibt...@gmail.com added the comment: Benchmarks (on an old laptop running XP without a VM) doing D:\Repos\cpython\PCbuild>python -m timeit -s "from threading import Lock; l = Lock()" "l.acquire(); l.release()" 100 loops, best of 3: 0.934 usec per loop default:0.934

[issue11618] Locks broken wrt timeouts on Windows

2011-03-21 Thread sbt
sbt shibt...@gmail.com added the comment: Btw, the locktimeout.patch appears to have a race condition. LeaveNonRecursiveMutex may SetEvent when there is no thread waiting (because a timeout just occurred, but the thread on which it happened is still somewhere around line #62

[issue11618] Locks broken wrt timeouts on Windows

2011-03-21 Thread sbt
sbt shibt...@gmail.com added the comment: sbt wrote: - I see your point. Still, I think we still may have a flaw: The statement that (owned-timeouts) is never an under-estimate isn't true on modern architectures, I think. The order of the atomic decrement operations in the code means

[issue11618] Locks broken wrt timeouts on Windows

2011-03-21 Thread sbt
sbt shibt...@gmail.com added the comment: krisvale wrote: There is no barrier in use on the read part. I realize that this is a subtle point, but in fact, the atomic functions make no memory barrier guarantees either (I think). And even if they did, you are not using a memory barrier

[issue11618] Locks broken wrt timeouts on Windows

2011-03-22 Thread sbt
sbt shibt...@gmail.com added the comment: krisvale wrote: So, I suggest a change in the comments: Do not claim that the value is never an underestimate, and explain how falsely returning a WAIT_TIMEOUT is safe and only occurs when the lock is heavily contended. Sorry for being so

[issue11618] Locks broken wrt timeouts on Windows

2011-03-22 Thread sbt
Changes by sbt shibt...@gmail.com: Removed file: http://bugs.python.org/file21335/locktimeout3.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11618

[issue13322] buffered read() and write() does not raise BlockingIOError

2011-11-02 Thread sbt
New submission from sbt shibt...@gmail.com: According to the documentation, BufferedReader.read() and BufferedWriter.write() should raise io.BlockingIOError if the file is in non-blocking mode and the operation cannot succeed without blocking. However, BufferedReader.read() returns None
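A minimal sketch reproducing the reported behaviour on a Unix-like system (the pipe setup and fcntl flags are illustrative, not part of the report):

    import io, os, fcntl

    r, w = os.pipe()
    # put the read end into non-blocking mode
    flags = fcntl.fcntl(r, fcntl.F_GETFL)
    fcntl.fcntl(r, fcntl.F_SETFL, flags | os.O_NONBLOCK)
    reader = io.open(r, 'rb')      # a BufferedReader
    print(reader.read(10))         # prints None instead of raising BlockingIOError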

[issue13322] buffered read() and write() does not raise BlockingIOError

2011-11-02 Thread sbt
Changes by sbt shibt...@gmail.com: -- type: -> behavior versions: +Python 2.6, Python 2.7, Python 3.1, Python 3.2, Python 3.3, Python 3.4 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13322

[issue13322] buffered read() and write() does not raise BlockingIOError

2011-11-02 Thread sbt
sbt shibt...@gmail.com added the comment: BufferedReader.readinto() should also raise BlockingIOError according to the docs. Updated unittest checks for that also. BTW, The documentation for BufferedIOBase.read() says that BlockingIOError should be raised if nothing can be read in non

[issue13322] buffered read() and write() does not raise BlockingIOError

2011-11-03 Thread sbt
sbt shibt...@gmail.com added the comment: Weirdly, it looks like BlockingIOError is not raised anywhere in the code for the C implementation of io. Even more weirdly, in the Python implementation of io, BlockingIOError is only ever raised by except clauses which have already caught BlockingIOError

[issue13322] buffered read() and write() does not raise BlockingIOError

2011-11-04 Thread sbt
sbt shibt...@gmail.com added the comment: No one has suggested raising BlockingIOError and DISCARDING the data when a partial read has occurred. The docs seem to imply that the partially read data should be returned since they only say that BlockingIOError should be raised

[issue13322] buffered read() and write() does not raise BlockingIOError

2011-11-04 Thread sbt
sbt shibt...@gmail.com added the comment: But what about the buggy readline() behaviour? Just tell people that if the return value is a string which does not end in '\n' then it might be caused by EOF or EAGAIN. They can just call readline() again to check which
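In caller code, the suggested check might look like this (a sketch; f is assumed to be a binary-mode buffered reader over a non-blocking file):

    line = f.readline()
    if line and not line.endswith(b'\n'):
        # Incomplete line: either EOF was hit or the underlying read would
        # have blocked (EAGAIN).  Calling readline() again later tells the
        # two cases apart.
        pass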

[issue13322] buffered read() and write() does not raise BlockingIOError

2011-11-04 Thread sbt
sbt shibt...@gmail.com added the comment: The third arg of BlockingIOError is used in two quite different ways. In write(s) it indicates the number of bytes of s which have been consumed (ie written to the raw file or buffered). But in flush() and flush_unlocked() (in _pyio) it indicates
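A sketch of the first usage (bw is assumed to be a BufferedWriter over a non-blocking raw file; BlockingIOError is the built-in exception in 3.3+, io.BlockingIOError earlier):

    try:
        bw.write(data)
    except BlockingIOError as e:
        consumed = e.characters_written   # bytes of `data` consumed before blocking
        data = data[consumed:]            # the rest can be retried later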

[issue13322] buffered read() and write() does not raise BlockingIOError

2011-11-04 Thread sbt
sbt shibt...@gmail.com added the comment: Currently a BlockingIOError exception raised by flush() sets characters_written to the number of bytes flushed from the internal buffer. This is undocumented (although there is a unit test which tests for it) and causes confusion because characters_written

[issue13322] buffered read() and write() does not raise BlockingIOError

2011-11-04 Thread sbt
sbt shibt...@gmail.com added the comment: Another possibility would be that, since lines are usually reasonably sized, they should fit in the buffer (which is 8KB by default). So we could do the extra effort of buffering the data and return it once the line is complete: if the buffer fills

[issue13322] buffered read() and write() does not raise BlockingIOError

2011-11-04 Thread sbt
sbt shibt...@gmail.com added the comment: The attached patch makes BufferedWriter.write() raise BlockingIOError when the raw file is non-blocking and the write would block. -- keywords: +patch Added file: http://bugs.python.org/file23613/write_blockingioerror.patch

[issue13322] buffered read() and write() does not raise BlockingIOError

2011-11-07 Thread sbt
sbt shibt...@gmail.com added the comment: Testing the patch a bit more thoroughly, I found that data received from the readable end of the pipe can be corrupted by the C implementation. This seems to be because two of the previously dormant codepaths did not properly maintain the necessary

[issue13374] Deprecate usage of the Windows ANSI API in the nt module

2011-11-09 Thread sbt
sbt shibt...@gmail.com added the comment: Functions like os.execv() or os.readlink() are not deprecated because the underlying C function really uses a bytes API (execv and readlink). Probably os.execv() should be implemented on Windows with _wexecv() instead of _execv(). Likewise

[issue13322] buffered read() and write() does not raise BlockingIOError

2011-11-10 Thread sbt
sbt shibt...@gmail.com added the comment: Ouch. Were they only non-blocking codepaths? Yes. raw_pos is the position which the underlying raw stream is currently at. It only needs to be modified when a successful write(), read() or seek() is done on the raw stream. Do you mean self

[issue13374] Deprecate usage of the Windows ANSI API in the nt module

2011-11-12 Thread sbt
sbt shibt...@gmail.com added the comment: I notice that the patch changes rename() and link() to use win32_decode_filename() to coerce the filename to unicode before using the wide win32 api. (Previously, rename() first tried the wide api, falling back to narrow if that failed; link() used wide

[issue11836] multiprocessing.queues.SimpleQueue is undocumented

2011-11-14 Thread sbt
sbt shibt...@gmail.com added the comment: Well, the sentinels argument, right now, is meant to be used internally. I don't think it's a good thing to document it, since I don't think it's a very clean API (I know, I introduced it :-)) Wouldn't a better alternative be to have a wait

[issue13322] buffered read() and write() does not raise BlockingIOError

2011-11-15 Thread sbt
sbt shibt...@gmail.com added the comment: Here is an updated patch which uses the real errno. It also gets rid of the restore_pos argument of _bufferedwriter_flush_unlocked() which is always set to false -- I guess buffered_flush_and_rewind_unlocked() is used instead. -- Added file

[issue13322] buffered read() and write() does not raise BlockingIOError

2011-11-18 Thread sbt
sbt shibt...@gmail.com added the comment: Thanks again. Just a nit: the tests should be in MiscIOTest, since they don't directly instantiate the individual classes. Also, perhaps it would be nice to check that the exception's errno attribute is EAGAIN. Done. -- Added file: http

[issue13322] buffered read() and write() does not raise BlockingIOError

2011-11-18 Thread sbt
sbt shibt...@gmail.com added the comment: Thanks. Who should I credit? sbt? Yeah, thanks. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13322

[issue12328] multiprocessing's overlapped PipeConnection on Windows

2011-11-20 Thread sbt
sbt shibt...@gmail.com added the comment: Here is an updated patch (pipe_poll_fix.patch) which should be applied on top of sigint_event.patch. It fixes the problems with PipeConnection.poll() and Queue.empty() and makes PipeListener.accept() use overlapped I/O. This should make all the pipe

[issue12328] multiprocessing's overlapped PipeConnection on Windows

2011-11-23 Thread sbt
sbt shibt...@gmail.com added the comment: I have the feeling that if we have to call GetLastError() at the Python level, then there's something wrong with the APIs we're exposing from the C extension. I see you check for ERROR_OPERATION_ABORTED. Is there any situation where this can

[issue12328] multiprocessing's overlapped PipeConnection on Windows

2011-11-23 Thread sbt
sbt shibt...@gmail.com added the comment: It seems to me that ERROR_OPERATION_ABORTED is a true error, and so should raise an exception. I guess so, although we do expect it whenever poll() times out. What exception would be appropriate? BlockingIOError? TimeoutError

[issue13448] PEP 3155 implementation

2011-11-24 Thread sbt
sbt shibt...@gmail.com added the comment: Is it intended that pickle will use __qualname__? -- nosy: +sbt ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13448

[issue13448] PEP 3155 implementation

2011-11-25 Thread sbt
sbt shibt...@gmail.com added the comment: There are some callables which are missing __qualname__: method_descriptor wrapper_descriptor builtin_function_or_method For the descriptors, at least, obj.__qualname__ should be equivalent to obj.__objclass__.__qualname__ + '.' + obj.__name__
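For example, for the method_descriptor str.join the proposed rule gives (a sketch; this matches how later CPython versions behave):

    # str.join is a method_descriptor defined on the str type
    assert str.join.__objclass__ is str
    assert str.join.__name__ == 'join'
    # proposed: __objclass__.__qualname__ + '.' + __name__  ==  'str.join'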

[issue13448] PEP 3155 implementation

2011-11-25 Thread sbt
sbt shibt...@gmail.com added the comment: For builtin_function_or_method it seems obj.__qualname__ should be obj.__self__.__qualname__ + '.' + obj.__name__ -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13448

[issue13520] Patch to make pickle aware of __qualname__

2011-12-02 Thread sbt
New submission from sbt shibt...@gmail.com: The attached patch makes pickle use an object's __qualname__ attribute if __name__ does not work. This makes nested classes, unbound instance methods and static methods picklable (assuming that __module__ and __qualname__ give the correct address
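A sketch of the kind of object the patch targets (class names are illustrative):

    import pickle

    class Outer:
        class Inner:
            pass

    # With __name__ alone the pickler looks up 'Inner' at module level and fails;
    # __qualname__ ('Outer.Inner') lets it walk the dotted path from the module.
    pickle.dumps(Outer.Inner)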

[issue13505] Bytes objects pickled in 3.x with protocol <=2 are unpickled incorrectly in 2.x

2011-12-06 Thread sbt
sbt shibt...@gmail.com added the comment: One *dirty* trick I am thinking about would be to use something like array.tostring() to construct the byte string. array('B', ...) objects are pickled using two bytes per character, so there would be no advantage: pickle.dumps(array.array('B

[issue13520] Patch to make pickle aware of __qualname__

2011-12-06 Thread sbt
sbt shibt...@gmail.com added the comment: It looks like Issue 3657 is really about builtin methods (i.e. builtin_function_or_method objects where __self__ is not a module). It causes no problem for normal python instance methods. If we tried the getattr approach for builtin methods too

[issue13566] Array objects pickled in 3.x with protocol <=2 are unpickled incorrectly in 2.x

2011-12-09 Thread sbt
New submission from sbt shibt...@gmail.com: If you pickle an array object on python 3 the typecode is encoded as a unicode string rather than as a byte string. This makes python 2 reject the pickle. # Python 3.3.0a0 (default, Dec 8 2011, 17:56:13

[issue13505] Bytes objects pickled in 3.x with protocol <=2 are unpickled incorrectly in 2.x

2011-12-09 Thread sbt
sbt shibt...@gmail.com added the comment: sbt, the bug is not that the encoding is inefficient. The problem is we cannot unpickle bytes streams from Python 3 using Python 2. Ah. Well you can do it using codecs.encode. Python 3.3.0a0 (default, Dec 8 2011, 17:56:13) [MSC v.1500 32 bit
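The idea, sketched in Python 3 terms: reduce the bytes object to a call that both interpreters can execute:

    import codecs

    obj = b'abc'
    # Pickling this reduce value instead of the raw bytes means the unpickler
    # just calls codecs.encode('abc', 'latin-1'): on Python 2 that returns the
    # str 'abc', on Python 3 it returns b'abc' again.
    rv = (codecs.encode, (obj.decode('latin-1'), 'latin-1'))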

[issue13566] Array objects pickled in 3.x with protocol <=2 are unpickled incorrectly in 2.x

2011-12-09 Thread sbt
sbt shibt...@gmail.com added the comment: I suggest that array.array be changed in Python 2 to allow unicode strings as a typecode or that pickle detects array.array being called and fixes the call. Interestingly, py3 does understand arrays pickled by py2. This appears to be because py2

[issue13577] __qualname__ is not present on builtin methods and functions

2011-12-11 Thread sbt
sbt shibt...@gmail.com added the comment: I already have a patch for the descriptor types which lazily calculates the __qualname__. However test.test_sys also needs fixing because it tests that these types have expected sizes. I have not got round to builtin_function_or_method though

[issue13577] __qualname__ is not present on builtin methods and functions

2011-12-11 Thread sbt
sbt shibt...@gmail.com added the comment: Updated patch which fixes test.test_sys.SizeofTest. (It also adds __qualname__ to member descriptors and getset descriptors.) -- Added file: http://bugs.python.org/file23914/descr_qualname.patch ___ Python

[issue13505] Bytes objects pickled in 3.x with protocol <=2 are unpickled incorrectly in 2.x

2011-12-11 Thread sbt
sbt shibt...@gmail.com added the comment: I don't really know that much about pickle, but Antoine mentioned that 'bytearray' works fine going from 3.2 to 2.7. Given that, can't we just compose 'bytes' with 'bytearray'? Yes, although it would only work for 2.6 and 2.7. codecs.encode

[issue13577] __qualname__ is not present on builtin methods and functions

2011-12-11 Thread sbt
sbt shibt...@gmail.com added the comment: Note that extension (non-builtin) types will need to have their __qualname__ fixed before their methods' __qualname__ is usable: collections.deque.__qualname__ 'deque' I'm confused. Isn't that the expected behaviour? Since the deque class

[issue13577] __qualname__ is not present on builtin methods and functions

2011-12-11 Thread sbt
sbt shibt...@gmail.com added the comment: New version of the patch with tests and using _Py_IDENTIFIER. -- Added file: http://bugs.python.org/file23922/descr_qualname.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13577

[issue13577] __qualname__ is not present on builtin methods and functions

2011-12-12 Thread sbt
sbt shibt...@gmail.com added the comment: Patch which add __qualname__ to builtin_function_or_method. Note that I had to make a builtin staticmethod have __self__ be the type instead of None. -- Added file: http://bugs.python.org/file23926/method_qualname.patch

[issue13577] __qualname__ is not present on builtin methods and functions

2011-12-12 Thread sbt
sbt shibt...@gmail.com added the comment: Ok, a couple of further (minor) issues: - I don't think AssertionError is the right exception type. TypeError should be used when a type mismatches (e.g. not an unicode object); - you don't need to check for d_type being NULL, since other methods

[issue13577] __qualname__ is not present on builtin methods and functions

2011-12-12 Thread sbt
sbt shibt...@gmail.com added the comment: - apparently you forgot to add BuiltinFunctionPropertiesTest in test_main()? Yes. Fixed. - a static method keeps a reference to the type: I think it's ok, although I'm not sure about the consequences (Guido, would you have an idea

[issue13505] Bytes objects pickled in 3.x with protocol <=2 are unpickled incorrectly in 2.x

2011-12-12 Thread sbt
sbt shibt...@gmail.com added the comment: I now realise latin_1_encode won't work because it returns a pair (bytes_obj, length). I have done a patch using _codecs.encode instead -- the pickles turn out to be exactly the same size anyway. pickletools.dis(pickle.dumps(b'abc', 2)) 0: \x80

[issue13505] Bytes objects pickled in 3.x with protocol <=2 are unpickled incorrectly in 2.x

2011-12-12 Thread sbt
sbt shibt...@gmail.com added the comment: Which is fine. 'bytes' and byte literals were not introduced until 2.6 [1,2]. So *any* solution we come up with is for >= 2.6. In 2.6 and 2.7, bytes is just an alias for str. In all 2.x versions with codecs.encode, the result will be str

[issue13577] __qualname__ is not present on builtin methods and functions

2011-12-13 Thread sbt
sbt shibt...@gmail.com added the comment: sbt, have you been running the test suite before submitting patches? If not, then please do. I ran it after I submitted. Sorry. Here is another patch. It also makes sure that __self__ is reported as None when METH_STATIC. -- Added file

[issue8713] multiprocessing needs option to eschew fork() under Linux

2011-12-21 Thread sbt
sbt shibt...@gmail.com added the comment: I think this is indeed useful, but I'm tempted to go further and say we should make this the default - and only - behavior. This will probably break existing code that accidentally relied on the fact that the implementation uses a bare fork(), but I'd

[issue13577] __qualname__ is not present on builtin methods and functions

2011-12-21 Thread sbt
sbt shibt...@gmail.com added the comment: A simplified patch getting rid of _PyCFunction_GET_RAW_SELF(). -- Added file: http://bugs.python.org/file24068/method_qualname.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13577

[issue13673] SIGINT prevents raising of exceptions unless PyErr_CheckSignals() called

2011-12-28 Thread sbt
New submission from sbt shibt...@gmail.com: If SIGINT arrives while a function implemented in C is executing, then it prevents the function from raising an exception unless the function first calls PyErr_CheckSignals(). (If the function returns an object (instead of NULL

[issue13673] SIGINT prevents raising of exceptions unless PyErr_CheckSignals() called

2011-12-29 Thread sbt
sbt shibt...@gmail.com added the comment: I have tried the same with Python 2.7.1 on Linux. The problem is the same, but one gets a partial traceback with no exception: >>> import sys, testsigint >>> testsigint.wait() ^CTraceback (most recent call last): File "<stdin>", line 1, in <module>

[issue13673] PyTraceBack_Print() fails if signal received but PyErr_CheckSignals() not called

2011-12-29 Thread sbt
sbt shibt...@gmail.com added the comment: Attached is a patch for the default branch. Before calling PyFile_WriteString() the patch saves the current exception. Then it calls PyErr_CheckSignals() and clears the current exception if any. After calling PyFile_WriteString() the exception

[issue13673] PyTraceBack_Print() fails if signal received but PyErr_CheckSignals() not called

2011-12-29 Thread sbt
sbt shibt...@gmail.com added the comment: I think I have found the problem. PyTraceBack_Print() calls PyFile_WriteString(), which calls PyFile_WriteObject(), which calls PyObject_Str() which begins with PyObject_Str(PyObject *v) { PyObject *res; if (PyErr_CheckSignals

[issue13673] PyTraceBack_Print() fails if signal received but PyErr_CheckSignals() not called

2011-12-29 Thread sbt
sbt shibt...@gmail.com added the comment: I think calling PyErr_WriteUnraisable would be more appropriate than PyErr_Clear. You mean just adding PyErr_CheckSignals(); if (PyErr_Occurred()) PyErr_WriteUnraisable(NULL); before the call to PyFile_WriteString()? That seems

[issue13673] PyTraceBack_Print() fails if signal received but PyErr_CheckSignals() not called

2012-01-08 Thread sbt
sbt shibt...@gmail.com added the comment: Trivial 3 lines patch. I guess there is still a race: if Ctrl-C is pressed after PyErr_CheckSignals() is called but before PyObject_Str() then the printing of any exception can still be suppressed. -- Added file: http://bugs.python.org

[issue13751] multiprocessing.pool hangs if any worker raises an Exception whose constructor requires a parameter

2012-01-09 Thread sbt
sbt shibt...@gmail.com added the comment: This is not specific to multiprocessing. It is really an issue with the pickling of exceptions: import cPickle class BadExc(Exception): ... def __init__(self, a): ... '''Non-optional param in the constructor
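Completing the quoted example (a sketch under Python 2 semantics): because the overridden __init__ never calls Exception.__init__, the exception's args tuple stays empty, so unpickling calls BadExc() with no argument and fails:

    import cPickle

    class BadExc(Exception):
        def __init__(self, a):
            '''Non-optional param in the constructor.'''
            self.a = a

    data = cPickle.dumps(BadExc(1))
    cPickle.loads(data)   # TypeError: __init__() takes exactly 2 arguments (1 given)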

[issue8713] multiprocessing needs option to eschew fork() under Linux

2012-01-23 Thread sbt
sbt shibt...@gmail.com added the comment: Attached is an updated version of the mp_fork_exec.patch. This one is able to reliably clean up any unlinked semaphores if the program exits abnormally. -- Added file: http://bugs.python.org/file24297/mp_fork_exec.patch

[issue8713] multiprocessing needs option to eschew fork() under Linux

2012-01-23 Thread sbt
sbt shibt...@gmail.com added the comment: mp_split_tests.patch splits up the test_multiprocessing.py: test_multiprocessing_misc.py miscellaneous tests which need not be run with multiple configurations mp_common.py testcases which should be run with multiple configurations

[issue6721] Locks in python standard library should be sanitized on fork

2012-01-23 Thread sbt
sbt shibt...@gmail.com added the comment: Attached is a patch (without documentation) which creates an atfork module for Unix. Apart from the atfork() function modelled on pthread_atfork() there is also a get_fork_lock() function. This returns a recursive lock which is held whenever a child

[issue6721] Locks in python standard library should be sanitized on fork

2012-01-23 Thread sbt
sbt shibt...@gmail.com added the comment: Is there any particular reason not to merge Charles-François's reinit_locks.diff? Reinitialising all locks to unlocked after a fork seems the only sane option. -- ___ Python tracker rep...@bugs.python.org

[issue13841] multiprocessing should use sys.exit() where possible

2012-01-24 Thread sbt
sbt shibt...@gmail.com added the comment: Currently, on both Windows and Unix, when the main thread of a child process exits: * atexit callbacks are NOT run (although multiprocessing.util._exit_function IS run), * the main thread does NOT wait for non-daemonic background threads. A simple
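A sketch illustrating the first point (behaviour at the time of the report; the callback is registered in the child but never runs because the child exits without invoking atexit handlers):

    import atexit
    import multiprocessing

    def child():
        atexit.register(lambda: print('atexit callback ran in child'))
        # nothing is printed when this child exits

    if __name__ == '__main__':
        p = multiprocessing.Process(target=child)
        p.start()
        p.join()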

[issue12328] multiprocessing's overlapped PipeConnection on Windows

2012-01-27 Thread sbt
sbt shibt...@gmail.com added the comment: Quite honestly I don't like the way that polling a pipe reads a partial message from the pipe. If at all possible, polling should not modify the pipe. I think the cleanest thing would be to switch to byte oriented pipes on Windows and create PipeIO

[issue12328] multiprocessing's overlapped PipeConnection on Windows

2012-02-01 Thread sbt
sbt shibt...@gmail.com added the comment: I have done an updated patch. (It does *not* switch to using bytes oriented pipes as I suggested in the previous message.) The patch also adds a wait() function with signature wait(object_list, timeout=None) for polling multiple objects at once
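A usage sketch of the proposed wait() API (the same signature was later exposed as multiprocessing.connection.wait()):

    from multiprocessing.connection import Pipe, wait

    a, b = Pipe()
    b.send('hello')
    ready = wait([a], timeout=1.0)   # objects ready for reading, or [] on timeout
    if a in ready:
        print(a.recv())              # 'hello'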

[issue12262] Not Inheriting File Descriptors on Windows?

2011-06-04 Thread sbt
sbt shibt...@gmail.com added the comment: Although Windows fds are not inheritable, the handles associated with fds can be made inheritable. A workaround for the fact fds are not inheritable is the following pattern: 1) The parent process converts the fd to a handle using _get_osfhandle(fd
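A rough Windows-only sketch of the parent's side of that pattern (the use of SetHandleInformation via ctypes is illustrative; the actual code may make the handle inheritable differently, e.g. with DuplicateHandle):

    import msvcrt
    import ctypes
    import ctypes.wintypes

    HANDLE_FLAG_INHERIT = 0x00000001

    def make_fd_inheritable(fd):
        # 1) convert the fd to the underlying OS handle
        handle = msvcrt.get_osfhandle(fd)
        # 2) mark the handle itself as inheritable by child processes
        ctypes.windll.kernel32.SetHandleInformation(
            ctypes.wintypes.HANDLE(handle),
            HANDLE_FLAG_INHERIT, HANDLE_FLAG_INHERIT)
        return handle   # pass this number to the child, e.g. on the command line

    # In the child: fd = msvcrt.open_osfhandle(handle, 0)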

[issue12328] multiprocessing's overlapped PipeConnection on Windows

2011-06-13 Thread sbt
New submission from sbt shibt...@gmail.com: There are some problems with the new Windows overlapped implementation of PipeConnection in the default branch. 1) poll(0) can return False when an empty string is in the pipe: if the next message in the pipe is b'' then PeekNamedPipe() returns (0

[issue12328] multiprocessing's overlapped PipeConnection on Windows

2011-06-13 Thread sbt
sbt shibt...@gmail.com added the comment: The attached patch hopefully fixes problems (1)-(5), but I have never used overlapped I/O before. test_pipe_poll.py passes with these changes. -- keywords: +patch Added file: http://bugs.python.org/file22350/pipe_poll.patch

[issue12328] multiprocessing's overlapped PipeConnection on Windows

2011-06-14 Thread sbt
sbt shibt...@gmail.com added the comment: Also, what is the rationale for the following change: -elif timeout == 0.0: +elif timeout == 0.0 and nleft != 0: return False If PeekNamedPipe() returns (navail, nleft) there are 3 cases: 1) navail > 0

[issue12328] multiprocessing's overlapped PipeConnection on Windows

2011-06-15 Thread sbt
sbt shibt...@gmail.com added the comment: pipe_interruptible.patch is a patch to support to making poll() interruptible. It applies on top of pipe_poll_2.patch. I am not sure what the guarantees are for when KeyboardInterrupt will be raised. I would have done it a bit differently if I knew

[issue12338] multiprocessing.util._eintr_retry doen't recalculate timeouts

2011-06-15 Thread sbt
New submission from sbt shibt...@gmail.com: multiprocessing.util._eintr_retry is only used to wrap select.select, but it fails to recalculate timeouts. Also, it will never retry the function it wraps because of a missing import errno. I think it would be better to just implement the retrying
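A sketch of what a correct retry wrapper needs to do (the name and the monotonic clock are illustrative, not the actual multiprocessing.util API):

    import errno, select, time

    def select_with_retry(rlist, wlist, xlist, timeout=None):
        # Retry select() after EINTR, shrinking the timeout each time.
        deadline = None if timeout is None else time.monotonic() + timeout
        while True:
            try:
                return select.select(rlist, wlist, xlist, timeout)
            except OSError as e:
                if e.errno != errno.EINTR:
                    raise
                if deadline is not None:
                    timeout = max(deadline - time.monotonic(), 0.0)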

[issue12328] multiprocessing's overlapped PipeConnection on Windows

2011-06-16 Thread sbt
sbt shibt...@gmail.com added the comment: Hmm, it seems to me that it should be done in _poll() instead. Otherwise, recv() will not be interruptible, will it? Or maybe WaitForMultipleObjects() should be changed to also wait on sigint_event if called by the main thread. Also, after looking

[issue12328] multiprocessing's overlapped PipeConnection on Windows

2011-06-17 Thread sbt
sbt shibt...@gmail.com added the comment: You are right, we need a manual reset *or* we must ensure that every user of _PyOS_SigintEvent only does so from the main thread. On second thoughts, even using an auto-reset event, resetting the event before waiting is unavoidable. Otherwise you

[issue12328] multiprocessing's overlapped PipeConnection on Windows

2011-06-26 Thread sbt
sbt shibt...@gmail.com added the comment: sigint_event.patch is a patch to make _multiprocessing.win32.WaitForMultipleObjects interruptible. It applies directly on to default. The patch also adds functions _PyOS_SigintEvent and _PyOS_IsMainThread which are implemented in signalmodule.c

[issue12328] multiprocessing's overlapped PipeConnection on Windows

2011-06-26 Thread sbt
sbt shibt...@gmail.com added the comment: I have noticed a few more problems. * Because poll() isn't thread safe on Windows, neither is Queue.empty(). Since a queue's pipe will never contain empty messages, this can be fixed easily by using (a wrapper for) win32.PeekNamedPipe

[issue8323] buffer objects are picklable but result is not unpicklable

2011-08-29 Thread sbt
sbt shibt...@gmail.com added the comment: Buffer objects *are* picklable with protocol 2 (but not with earlier protocols). Unfortunately, the result is not unpicklable. This is not a problem with multiprocessing. (buffer seems to inherit __reduce__ and __reduce_ex__ from object.) Python

[issue10886] Unhelpful backtrace for multiprocessing.Queue

2011-08-29 Thread sbt
sbt shibt...@gmail.com added the comment: mp_queue_pickle_in_main_thread.patch (against the default branch) fixes the problem by doing the pickling in Queue.put(). It is version of a patch for Issue 8037 (although I believe the behaviour complained about in Issue 8037 is not an actual bug

[issue8037] multiprocessing.Queue's put() not atomic thread wise

2011-08-29 Thread sbt
sbt shibt...@gmail.com added the comment: Modifying an object which is already on a traditional queue can also change what is received by the other thread (depending on timing). So Queue.Queue's put() is not atomic either. Therefore I do not believe this behaviour is a bug. However
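For example, with a plain queue.Queue the consumer sees mutations made after put() (a sketch of the point being made):

    import queue

    q = queue.Queue()
    item = [1, 2, 3]
    q.put(item)
    item.append(4)     # mutate after put()
    print(q.get())     # [1, 2, 3, 4] -- the queue stores a reference, not a copy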

[issue8037] multiprocessing.Queue's put() not atomic thread wise

2011-08-29 Thread sbt
sbt shibt...@gmail.com added the comment: I meant Issue 6721 (Locks in python standard library should be sanitized on fork) not 6271. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8037

[issue6721] Locks in python standard library should be sanitized on fork

2011-08-29 Thread sbt
sbt shibt...@gmail.com added the comment: multiprocessing.util already has register_after_fork() which it uses for cleaning up certain things when a new process (launched by multiprocessing) is starting. This is very similar to the proposed atfork mechanism. Multiprocessing assumes

[issue10886] Unhelpful backtrace for multiprocessing.Queue

2011-08-29 Thread sbt
sbt shibt...@gmail.com added the comment: This shouldn't be a problem in Python 3.3, where the Connection classes are reimplemented in pure Python. What should not be a problem? Changes to the implementation of Connection won't affect whether Queue.put() raises an error immediately

[issue12882] mmap crash on Windows

2011-09-02 Thread sbt
sbt shibt...@gmail.com added the comment: You are not doing anything to stop the file object being garbage collected (and therefore closed) before the mmap is created. Try import os import mmap f = open("Certain File", "r+") size = os.path.getsize("Certain File") data = mmap.mmap(f.fileno(), size

[issue8713] multiprocessing needs option to eschew fork() under Linux

2011-09-13 Thread sbt
sbt shibt...@gmail.com added the comment: Here is a patch which adds the following functions: forking_disable() forking_enable() forking_is_enabled() set_semaphore_prefix() get_semaphore_prefix() To create child processes using fork+exec on Unix, call forking_disable

[issue8713] multiprocessing needs option to eschew fork() under Linux

2011-09-13 Thread sbt
Changes by sbt shibt...@gmail.com: Removed file: http://bugs.python.org/file23141/mp_fork_exec.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8713

[issue8713] multiprocessing needs option to eschew fork() under Linux

2011-09-13 Thread sbt
sbt shibt...@gmail.com added the comment: Small fix to patch. -- Added file: http://bugs.python.org/file23142/mp_fork_exec.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8713

[issue13841] multiprocessing should use sys.exit() where possible

2012-02-08 Thread sbt
sbt shibt...@gmail.com added the comment: I think the patch makes multiprocessing.util._exit_function() run twice in non-main processes because it is registered with atexit, and is also called in Process._bootstrap(). _exit_function() does the following: * terminate active daemon processes

[issue14059] Implement multiprocessing.Barrier

2012-02-20 Thread sbt
sbt shibt...@gmail.com added the comment: Here is an initial implementation. Differences from threading.Barrier: - I have not implemented reset(). - wait() returns 0 or -1. One thread returns 0, the remainder return -1. This is different to threading.Barrier where each of the N threads
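For comparison, the threading.Barrier behaviour referred to, where each waiting thread receives a distinct index (a minimal sketch):

    import threading

    barrier = threading.Barrier(3)

    def worker():
        i = barrier.wait()   # each of the 3 threads gets a distinct value in range(3)
        if i == 0:
            print('this thread was elected to do any post-barrier work')

    threads = [threading.Thread(target=worker) for _ in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()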

[issue14059] Implement multiprocessing.Barrier

2012-02-20 Thread sbt
sbt shibt...@gmail.com added the comment: barrier_tests.py contains minor modifications of the unit tests for threading.Barrier. (The two tests using reset() are commented out.) The implementation passes for me on Linux and Windows. -- Added file: http://bugs.python.org/file24580

[issue14087] multiprocessing.Condition.wait_for missing

2012-02-22 Thread sbt
New submission from sbt shibt...@gmail.com: multiprocessing.Condition is missing a counterpart for the wait_for() method added to threading.Condition in Python 3.2. I will work on a patch. -- components: Library (Lib) messages: 153956 nosy: sbt priority: normal severity: normal status

[issue14059] Implement multiprocessing.Barrier

2012-02-22 Thread sbt
sbt shibt...@gmail.com added the comment: Wouldn't it be simpler with a mp.Condition? Well, it is a fair bit shorter than the implementation in threading.py. But that is not a fair comparison because it does implement reset(). I was trying to avoid using shared memory/ctypes since

[issue14095] type_new() removes __qualname__ from the input dictionary

2012-02-23 Thread sbt
sbt shibt...@gmail.com added the comment: I get a segfault with Python 3.3.0a0 (default:31784350f849, Feb 23 2012, 11:07:41) [GCC 4.5.2] on linux Type "help", "copyright", "credits" or "license" for more information. >>> d = {'__qualname__':'XXX'} >>> T = type('foo', (), d) >>> d Segmentation

[issue14087] multiprocessing.Condition.wait_for missing

2012-02-23 Thread sbt
sbt shibt...@gmail.com added the comment: Patch which just copies the implementation from threading. -- keywords: +patch Added file: http://bugs.python.org/file24611/cond_wait_for.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org

[issue14059] Implement multiprocessing.Barrier

2012-02-23 Thread sbt
sbt shibt...@gmail.com added the comment: Patch which subclasses threading.Barrier. -- keywords: +patch Added file: http://bugs.python.org/file24614/mp_barrier.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14059

[issue14059] Implement multiprocessing.Barrier

2012-02-23 Thread sbt
sbt shibt...@gmail.com added the comment: Forgot to mention, mp_barrier.patch needs to be applied on top of cond_wait_for.patch for Issue14087. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14059

[issue14087] multiprocessing.Condition.wait_for missing

2012-02-24 Thread sbt
sbt shibt...@gmail.com added the comment: Shouldn't the `for` loop be outside the outer `with` block? Yes. In Lib/multiprocessing/managers.py: Is there a good reason why the wait_for() proxy method can't simply be implemented as: return self._callmethod('wait_for', (predicate, timeout

[issue14116] Lock.__enter__() method returns True instead of self

2012-02-24 Thread sbt
New submission from sbt shibt...@gmail.com: The __enter__() methods of Lock, RLock, Semaphore and Condition in threading (and multiprocessing) all return True. This seems to contradict the documentation for the context protocol which says contextmanager.__enter__() Enter the runtime
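The behaviour in question (as observable at the time of the report):

    import threading

    lock = threading.Lock()
    with lock as value:
        print(value)           # True -- the result of acquire(), not the lock
        print(value is lock)   # False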

[issue14116] Lock.__enter__() method returns True instead of self

2012-02-26 Thread sbt
sbt shibt...@gmail.com added the comment: IIUC returning True is not incorrect, only useless. In the stdlib I usually see “with lock:”. Can you tell what is the use case for accessing the condition object inside the context block? Does it apply only to Condition or also to *Lock
