sbt added the comment:
A couple of minor changes based on Antoine's earlier review (which I did not
notice till now).
--
Added file: http://bugs.python.org/file25272/mp_pickle_conn.patch
sbt added the comment:
Up to date patch.
--
Added file: http://bugs.python.org/file25270/mp_pickle_conn.patch
sbt added the comment:
Can this issue be reclosed now?
sbt added the comment:
> Overlapped's naming is still lagging behind :-)
Argh. And a string in winapi_module too.
Yet another patch.
--
Added file: http://bugs.python.org/file25252/winapi_module.patch
sbt added the comment:
s/_win32/_winapi/g
--
Added file: http://bugs.python.org/file25241/winapi_module.patch
sbt added the comment:
> How about _windowsapi or _winapi then, to ensure there are no clashes?
I don't have any strong feelings, but I would prefer _winapi.
sbt added the comment:
New patch which calculates endtime outside the loop.
--
Added file: http://bugs.python.org/file25240/cond_wait_for.patch
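For context, a minimal sketch of the wait_for() shape being patched, with the deadline computed once before the loop so that repeated waits consume a single overall timeout (simplified; not the patch itself):

    import time

    def wait_for(self, predicate, timeout=None):
        # endtime is calculated once, outside the loop, so repeated
        # calls to wait() share one overall timeout.
        endtime = None if timeout is None else time.time() + timeout
        result = predicate()
        while not result:
            if endtime is not None:
                waittime = endtime - time.time()
                if waittime <= 0:
                    break
            else:
                waittime = None
            self.wait(waittime)
            result = predicate()
        return result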
sbt added the comment:
> I think the module would be better named _win32, since that's the name
> of the API (like POSIX under Unix).
Changed in new patch.
> Also, it seems there are a couple of naming inconsistencies renaming
> (e.g. the overlapped wrapper is named "
sbt added the comment:
New patch. Compared to the previous one:
* socket functions have been moved from _windows to _multiprocessing
* _windows.vcproj has been removed (so _windows is part of pythoncore.vcproj)
* no changes to pcbuild.sln needed
* removed reference to 'win32_functions.
sbt added the comment:
> I don't think we need the vcproj file, unless I missed something.
_multiprocessing.win32 currently wraps closesocket(), send() and recv() so it
needs to link against ws2_32.lib.
I don't know how to make _windows link against ws2_32.lib without adding a
sbt added the comment:
Attached is an up to date patch.
* code has been moved to Modules/_windows.c
* DWORD is uniformly treated as unsigned
* _subprocess's handle wrapper type has been removed (although
subprocess.py still uses a Python implemented handle wrapper type)
I'm no
sbt added the comment:
I think there are some issues with the treatment of the DWORD type. (DWORD is
a typedef for unsigned long.)
_subprocess always treats them as signed, whereas _multiprocessing treats them
(correctly) as unsigned. _windows does a mixture: functions from _subprocess
sbt added the comment:
> But what if Finalize is used to cleanup a resource that gets
> duplicated in children, like a file descriptor?
> See e.g. forking.py, line 137 (in Popen.__init__())
> or heap.py, line 244 (BufferWrapper.__init__()).
This was how Finalize objects already ac
sbt added the comment:
Alternative patch which records pid when Finalize object is created. The
callback does nothing if recorded pid does not match os.getpid().
--
Added file: http://bugs.python.org/file25195/mp_finalize_pid.patch
sbt added the comment:
Why not just
def time_independent_equals(a, b):
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 0
sbt added the comment:
> That's a problem indeed. Perhaps we need a global "fork lock" shared
> between subprocess and multiprocessing?
I did an atfork patch which included a (recursive) fork lock. See
http://bugs.python.org/review/6721/show
The patch included chang
sbt added the comment:
The last patch did not work on Unix.
Here is a new version where the reduction functions are automatically
registered, so allow_connection_pickling() is redundant.
--
Added file: http://bugs.python.org/file25181/mp_pickle_conn.patch
sbt added the comment:
Patch to disable gc.
--
keywords: +patch
Added file: http://bugs.python.org/file25180/mp_disable_gc.patch
New submission from sbt :
When running test_multiprocessing on Linux I occasionally see a stream of
errors caused by ignored weakref callbacks:
Exception AssertionError: AssertionError() in ignored
These do not cause the unittests to fail.
Finalizers from the parent process are supposed
sbt added the comment:
I think it would be reasonable to add a safe comparison function to hmac.
Its documentation could explain briefly when it would be preferable to "==".
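A minimal sketch of such a function, assuming Python 3 bytes inputs (the name compare_digest is illustrative here, not a committed API):

    def compare_digest(a, b):
        # Return a == b in time that depends only on the lengths, not on
        # where the first mismatch occurs, to frustrate timing attacks.
        if len(a) != len(b):
            return False
        result = 0
        for x, y in zip(a, b):
            result |= x ^ y      # collect differences without early exit
        return result == 0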
sbt added the comment:
> But connection doesn't depend on reduction, neither does forking.
If registration of (Pipe)Connection is done in reduction then you can't make
(Pipe)Connection picklable *automatically* unless you make connection depend on
reduction (possibly indirectly)
sbt added the comment:
Updated patch which uses ForkingPickler in Connection.send().
Note that connection sharing still has to be enabled using
allow_connection_pickling().
Support could be enabled automatically, but that would introduce more circular
imports which confuse me. It might be
sbt added the comment:
> I think a generic solution must be found for multiprocessing, so I'll
> create a separate issue.
I have submitted a patch for Issue 4892 which makes connection and socket
objects picklable. It uses socket.share() and socket.fromshare()
sbt added the comment:
There is an undocumented function multiprocessing.allow_connection_pickling()
whose docstring claims it allows connection and socket objects to be pickled.
The attached patch fixes the multiprocessing.reduction module so that it works
correctly. This means that
sbt added the comment:
I only looked quickly at the web pages, so I may have misunderstood.
But it sounds like this applies when the attacker gets multiple chances to
guess the digest for a *fixed* message (which was presumably chosen by the
attacker).
That is not the case here because
sbt added the comment:
New patch skips tests if ctypes is not available.
--
Added file: http://bugs.python.org/file25155/cond_wait_for.patch
sbt added the comment:
> Is there a reason the patch changes close() to win32.CloseHandle()?
This is a Windows-only code path, so close() is just an alias for
win32.CloseHandle(). It allows removal of the lines
# Late import because of circular import
from multiprocessing.fork
sbt added the comment:
Actually Issue 9753 was causing failures in test_socket.BasicTCPTest and
test_socket.BasicTCPTest2 on at least one Windows XP machine.
sbt added the comment:
> What is the bug that this fixes? Can you provide a test case?
The bug is using an API in a way that the documentation says is
wrong/unreliable. There does not seem to be a classification for that.
I have never seen a problem caused by using DuplicateHandle() s
Changes by sbt :
Removed file: http://bugs.python.org/file25153/mp_socket_dup.patch
sbt added the comment:
> There is a simpler way to do this on Windows. The sending process
> duplicates the handle, and the receiving process duplicates that second
> handle using DuplicateHandle() and the DUPLICATE_CLOSE_SOURCE flag. That
> way no server thread is necessar
New submission from sbt :
In multiprocessing.connection on Windows, socket handles are indirectly
duplicated using DuplicateHandle() instead of WSADuplicateSocket(). According
to Microsoft's documentation this is not supported.
This is easily avoided by using socket.detach() inste
sbt added the comment:
> But ForkingPickler could be used in multiprocessing.connection,
> couldn't it?
I suppose so.
Note that the way a connection handle is transferred between existing processes
is unnecessarily inefficient on Windows. A background server thread (one per
proc
sbt added the comment:
ForkingPickler is only used when creating a child process. The
multiprocessing.reduction module is only really intended for sending stuff to
*pre-existing* processes.
As things stand, after importing multiprocessing.reduction you can do something
like
buf
sbt added the comment:
Jimbofbx wrote:
> def main():
>     from multiprocessing import Pipe, reduction
>     i, o = Pipe()
>     print(i);
>     reduced = reduction.reduce_connection(i)
>     print(reduced);
>     newi = reduced[0](*reduced[1])
>     print(ne
sbt added the comment:
> If you look at the patch it isn't (or shouldn't be).
Sorry. I misunderstood when Raymond said "running the iterator to completion".
sbt added the comment:
> ... and that pickling things like dict iterators entail running the
> iterator to completion and storing all of the results in a list.
The thing to emphasise here is that pickling an iterator is "destructive":
afterwards the original iterator wil
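A toy illustration of that point (hypothetical class, not the patch's code): any __reduce__ that runs the iterator to completion leaves the original exhausted.

    import pickle

    class ListIter:
        # Toy iterator whose pickling drains the underlying iterator.
        def __init__(self, items):
            self._it = iter(items)
        def __iter__(self):
            return self
        def __next__(self):
            return next(self._it)
        def __reduce__(self):
            # Destructive: consumes self._it to record the remaining items.
            return (ListIter, (list(self._it),))

    it = ListIter([1, 2, 3])
    data = pickle.dumps(it)
    print(list(it))                  # [] -- the original was consumed
    print(list(pickle.loads(data)))  # [1, 2, 3] -- the copy still works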
sbt added the comment:
> If duplication happened early, then there would have to be a way to
> "unduplicate" it in the source process if, say, IPC somehow failed.
> There is currently no api to undo the effects of WSADuplicateSocket().
If this were a normal handle the
sbt added the comment:
> I think this captures the functionality better than "duplicate" or
> duppid() since there is no actual duplication involved until the
> fromshare() function is called.
Are you saying the WSADuplicateSocket() call in share() doesn't duplica
New submission from sbt :
When pickling a function object, if it cannot be saved as a global the C
implementation falls back to using copyreg/__reduce__/__reduce_ex__.
The comment for the changeset which added this fallback claims that it is for
compatibility with the Python implementation
sbt added the comment:
_eintr_retry is currently unused. The attached patch removes it.
If it is retained then we should at least add a warning that it does not
recalculate timeouts.
--
keywords: +patch
Added file: http://bugs.python.org/file24888/mp_remove_eintr_retry.patch
New submission from sbt :
The attached patch reimplements ForkingPickler using the new dispatch_table
attribute.
This allows ForkingPickler to subclass Pickler (implemented in C) instead of
_Pickler (implemented in Python).
--
components: Library (Lib)
files: mp_forking_pickler.patch
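In outline, the approach might look like this (a sketch of the idea, assuming the per-pickler dispatch_table attribute; not the patch text):

    import copyreg
    import pickle

    class ForkingPickler(pickle.Pickler):
        # Subclass the C-implemented Pickler and give each instance its
        # own dispatch table instead of patching the global copyreg one.
        _extra_reducers = {}

        def __init__(self, file, protocol=None):
            super().__init__(file, protocol)
            self.dispatch_table = copyreg.dispatch_table.copy()
            self.dispatch_table.update(self._extra_reducers)

        @classmethod
        def register(cls, type, reduce):
            # Affects only ForkingPickler instances, not pickling at large.
            cls._extra_reducers[type] = reduce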
sbt added the comment:
Ignore my last message...
sbt added the comment:
_DummyThread.__init__() explicitly deletes self._Thread__block:
def __init__(self):
    Thread.__init__(self, name=_newname("Dummy-%d"))
    # Thread.__block consumes an OS-level locking primitive, which
    # can never be used by a _DummyThre
sbt added the comment:
It appears that the 4th argument of the socket constructor is undocumented, so
presumably one is expected to use fromfd() instead.
Maybe you could have a frominfo(info) function (to match fromfd(fd,...)) and a
dupinfo(pid) method.
(It appears that multiprocessing uses
sbt added the comment:
I think
PyAPI_FUNC(PyObject *) _PyIter_GetIter(const char *iter);
has a confusing name for a convenience function which retrieves an attribute
from the builtin module by name.
Not sure what would be better. Maybe _PyIter_GetBuiltin().
--
nosy: +sbt
sbt added the comment:
pitrou wrote:
> Are you sure this is desired? Nowhere can I think of a place in the
> stdlib where we use overlapped I/O on sockets.
multiprocessing.connection.wait() does overlapped zero length reads on sockets.
Its documentation currently claims that it
New submission from sbt :
According to Microsoft's documentation sockets created using socket() have the
overlapped attribute, but sockets created with WSASocket() do not unless you
pass the WSA_FLAG_OVERLAPPED flag. The documentation for WSADuplicateSocket()
says
If the source process
sbt added the comment:
What you were told on IRC was wrong. By default the queue *does* have infinite
size.
When a process puts an item on the queue for the first time, a background
thread is started which is responsible for writing items to the underlying
pipe. This does mean that, on
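For instance, a put() of a large item returns immediately; the feeder thread does the actual write (a minimal sketch):

    from multiprocessing import Process, Queue

    def consumer(q):
        print(len(q.get()))

    if __name__ == '__main__':
        q = Queue()               # no maxsize: the queue is unbounded
        q.put('x' * 10**7)        # returns at once; a background feeder
                                  # thread writes the data to the pipe
        p = Process(target=consumer, args=(q,))
        p.start()
        p.join()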
sbt added the comment:
Updated patch addressing Antoine's comments.
--
Added file: http://bugs.python.org/file24737/pipe_poll_fix.patch
sbt added the comment:
Updated patch against 2822765e48a7.
--
Added file: http://bugs.python.org/file24730/pipe_poll_fix.patch
sbt added the comment:
Updated patch with docs.
--
Added file: http://bugs.python.org/file24729/pickle_dispatch.patch
sbt added the comment:
> Hmm, I tried to apply the latest patch to the default branch and it
> failed. It also seems the patch was done against a changeset
> (508bc675af63) which doesn't exist in the repo...
I will do an updated patch against a "public" changeset.
sbt added the comment:
> I don't understand the following code:
> ...
> since self.dispatch_table is a property returning
> self._dispatch_table. Did you mean type(self).dispatch_table?
More or less. That code was a botched attempt to match the behaviour of the C
imple
New submission from sbt :
Currently the only documented way to have customised pickling for a type is to
register a reduction function with the global dispatch table managed by the
copyreg module. But such global changes are liable to disrupt other code which
uses pickling.
Multiprocessing
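Roughly, the proposal lets each pickler carry its own reduction table (a sketch, with a made-up Conn class standing in for a connection-like object):

    import copyreg
    import io
    import pickle

    class Conn:
        def __init__(self, handle):
            self.handle = handle

    def reduce_conn(conn):
        return (Conn, (conn.handle,))

    buf = io.BytesIO()
    p = pickle.Pickler(buf, protocol=2)
    # Per-pickler table: other picklers and the global copyreg registry
    # are unaffected.
    p.dispatch_table = copyreg.dispatch_table.copy()
    p.dispatch_table[Conn] = reduce_conn
    p.dump(Conn(42))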
sbt added the comment:
Ah. Forgot the patch.
--
Added file: http://bugs.python.org/file24662/time_strftime_leak.patch
sbt added the comment:
The attached patch fixes the time related refleaks.
sbt added the comment:
The failures for test_multiprocessing and test_concurrent_futures seem to be
caused by a leak in _multiprocessing.win32.WaitForMultipleObjects().
The attached patch fixes those leaks for me (on a 32 bit build).
--
keywords: +patch
nosy: +sbt
Added file: http
sbt added the comment:
> IIUC returning True is not incorrect, only useless. In the stdlib I
> usually see “with lock:”. Can you tell what is the use case for
> accessing the condition object inside the context block? Does it
> apply only to Condition or also to *Lock and S
New submission from sbt :
The __enter__() methods of Lock, RLock, Semaphore and Condition in threading
(and multiprocessing) all return True. This seems to contradict the
documentation for the context protocol which says
contextmanager.__enter__()
    Enter the runtime context and return
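The behaviour is easy to observe (current behaviour, per the report):

    import threading

    lock = threading.Lock()

    # The usual idiom never uses the value returned by __enter__() ...
    with lock:
        pass

    # ... but the value is observable, and it is True, not the lock:
    with lock as entered:
        print(entered)   # True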
sbt added the comment:
> Shouldn't the `for` loop be outside the outer `with` block?
Yes.
> In Lib/multiprocessing/managers.py:
> Is there a good reason why the wait_for() proxy method can't simply be
> implemented as:
> return self._callmethod('wait_for',
sbt added the comment:
Forgot to mention, mp_barrier.patch needs to be applied on top of
cond_wait_for.patch for Issue14087.
sbt added the comment:
Patch which subclasses threading.Barrier.
--
keywords: +patch
Added file: http://bugs.python.org/file24614/mp_barrier.patch
sbt added the comment:
Patch which just copies the implementation from threading.
--
keywords: +patch
Added file: http://bugs.python.org/file24611/cond_wait_for.patch
sbt added the comment:
I get a segfault with
Python 3.3.0a0 (default:31784350f849, Feb 23 2012, 11:07:41)
[GCC 4.5.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> d = {'__qualname__':'
sbt added the comment:
> Wouldn't it be simpler with a mp.Condition?
Well, it is a fair bit shorter than the implementation in threading.py. But
that is not a fair comparison because it does implement reset().
I was trying to avoid using shared memory/ctyp
New submission from sbt :
multiprocessing.Condition is missing a counterpart for the wait_for() method
added to threading.Condition in Python 3.2.
I will work on a patch.
--
components: Library (Lib)
messages: 153956
nosy: sbt
priority: normal
severity: normal
status: open
title
sbt added the comment:
barrier_tests.py contains minor modifications of the unit tests for
threading.Barrier. (The two tests using reset() are commented out.)
The implementation passes for me on Linux and Windows.
--
Added file: http://bugs.python.org/file24580/barrier_tests.py
sbt added the comment:
Here is an initial implementation. Differences from threading.Barrier:
- I have not implemented reset().
- wait() returns 0 or -1. One thread returns 0, the remainder return -1.
This is different to threading.Barrier where each of the N threads
returns a unique
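Under that convention, electing a single process for one-off work would look like this (hypothetical usage):

    def worker(barrier):
        if barrier.wait() == 0:
            # exactly one waiter gets 0; the remainder get -1
            print('elected to do the one-off work')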
sbt added the comment:
I think the patch makes multiprocessing.util._exit_function() run twice in
non-main processes because it is registered with atexit, and is also called in
Process._bootstrap().
_exit_function() does the following:
* terminate active daemon processes;
* join active
sbt added the comment:
I have done an updated patch. (It does *not* switch to using bytes oriented
pipes as I suggested in the previous message.)
The patch also adds a wait() function with signature
wait(object_list, timeout=None)
for polling multiple objects at once. On Unix it is
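Hypothetical usage of the added function, with the signature given above:

    from multiprocessing import Pipe
    from multiprocessing.connection import wait

    a_recv, a_send = Pipe(duplex=False)
    b_recv, b_send = Pipe(duplex=False)
    a_send.send('hello')

    # Block until at least one object is ready or 5 seconds pass;
    # wait() returns the list of ready objects (empty on timeout).
    for conn in wait([a_recv, b_recv], timeout=5.0):
        print(conn.recv())      # prints 'hello'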
sbt added the comment:
Quite honestly I don't like the way that polling a pipe reads a partial message
from the pipe. If at all possible, polling should not modify the pipe.
I think the cleanest thing would be to switch to byte oriented pipes on Windows
and create PipeIO which subcl
sbt added the comment:
Currently, on both Windows and Unix, when the main thread of a child process
exits:
* atexit callbacks are NOT run (although multiprocessing.util._exit_function IS
run),
* the main thread does NOT wait for non-daemonic background threads.
A simple replacement of
sbt added the comment:
Is there any particular reason not to merge Charles-François's
reinit_locks.diff?
Reinitialising all locks to unlocked after a fork seems the only sane option.
sbt added the comment:
Attached is a patch (without documentation) which creates an atfork module for
Unix.
Apart from the atfork() function modelled on pthread_atfork() there is also a
get_fork_lock() function. This returns a recursive lock which is held whenever
a child process is
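Hypothetical usage, with names taken from the patch description and the callback signature assumed to follow pthread_atfork():

    import os
    import atfork   # hypothetical: the module added by the patch (Unix only)

    def prepare():       # runs in the parent just before fork()
        print('about to fork')

    def in_parent():     # runs in the parent just after fork()
        print('fork done')

    def in_child():      # runs in the child just after fork()
        print('child %d started' % os.getpid())

    atfork.atfork(prepare, in_parent, in_child)

    # The recursive fork lock is held around every fork(), so unrelated
    # code can keep a critical section fork-free:
    with atfork.get_fork_lock():
        pass  # no fork() can happen while the lock is held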
sbt added the comment:
mp_split_tests.patch splits up test_multiprocessing.py:

test_multiprocessing_misc.py
    miscellaneous tests which need not be run with multiple configurations
mp_common.py
    testcases which should be run with multiple configurations
test_multiprocessing_fork.py
sbt added the comment:
Attached is an updated version of the mp_fork_exec.patch. This one is able to
reliably clean up any unlinked semaphores if the program exits abnormally.
--
Added file: http://bugs.python.org/file24297/mp_fork_exec.patch
sbt added the comment:
This is not specific to multiprocessing. It is really an issue with the
pickling of exceptions:
>>> import cPickle
>>> class BadExc(Exception):
...     def __init__(self, a):
...         '''Non-optional param in the constru
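The underlying mechanics, sketched: default exception pickling recreates the instance as cls(*self.args), so an __init__ that does not forward its argument breaks unpickling.

    import pickle

    class BadExc(Exception):
        def __init__(self, a):
            Exception.__init__(self)     # self.args stays (), a is lost
            self.a = a

    class GoodExc(Exception):
        def __init__(self, a):
            Exception.__init__(self, a)  # self.args == (a,)
            self.a = a

    pickle.loads(pickle.dumps(GoodExc(1)))   # round-trips fine
    try:
        pickle.loads(pickle.dumps(BadExc(1)))
    except TypeError as e:
        print(e)   # BadExc() is re-called with no argument at load time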
sbt added the comment:
Trivial 3-line patch.
I guess there is still a race: if Ctrl-C is pressed after PyErr_CheckSignals()
is called but before PyObject_Str() then the printing of any exception can
still be suppressed.
--
Added file: http://bugs.python.org/file24177
sbt added the comment:
> I think calling PyErr_WriteUnraisable would be more appropriate than
> PyErr_Clear.
You mean just adding
PyErr_CheckSignals();
if (PyErr_Occurred())
    PyErr_WriteUnraisable(NULL);
before the call to PyFile_WriteString()? That seems t
sbt added the comment:
Attached is a patch for the default branch.
Before calling PyFile_WriteString() the patch saves the current exception.
Then it calls PyErr_CheckSignals() and clears the current exception if any.
After calling PyFile_WriteString() the exception is restored.
I am not
sbt added the comment:
I think I have found the problem. PyTraceBack_Print() calls
PyFile_WriteString(), which calls PyFile_WriteObject(), which calls
PyObject_Str() which begins with
PyObject_Str(PyObject *v)
{
    PyObject *res;
    if (PyErr_CheckSignals())
        return NULL
sbt added the comment:
I have tried the same with Python 2.7.1 on Linux. The problem is the same, but
one gets a partial traceback with no exception:
>>> import sys, testsigint
>>> testsigint.wait()
^CTraceback (most recent call last):
File "", line
New submission from sbt :
If SIGINT arrives while a function implemented in C is executing, then it
prevents the function from raising an exception unless the function first calls
PyErr_CheckSignals(). (If the function returns an object (instead of NULL)
then KeyboardInterrupt is raised as
sbt added the comment:
A simplified patch getting rid of _PyCFunction_GET_RAW_SELF().
--
Added file: http://bugs.python.org/file24068/method_qualname.patch
sbt added the comment:
> I think this is indeed useful, but I'm tempted to go further and say we
> should make this the default - and only - behavior. This will probably
> break existing code that accidentally relied on the fact that the
> implementation uses a bare fork(),
sbt added the comment:
> sbt, have you been running the test suite before submitting patches?
> If not, then please do.
I ran it after I submitted. Sorry.
Here is another patch. It also makes sure that __self__ is reported as None
when METH_STATIC.
--
Added file
sbt added the comment:
> Which is fine. 'bytes' and byte literals were not introduced until
> 2.6 [1,2]. So *any* solution we come
> up with is for >= 2.6.
In 2.6 and 2.7, bytes is just an alias for str. In all 2.x versions with
codecs.encode, the result will be str
sbt added the comment:
I now realise latin_1_encode won't work because it returns a pair (bytes_obj,
length).
I have done a patch using _codecs.encode instead -- the pickles turn out to be
exactly the same size anyway.
>>> pickletools.dis(pickle.dumps(b"abc", 2))
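The shape of the approach, roughly (a sketch, not the patch): reduce bytes to a codecs.encode() call, which both 2.x and 3.x can execute.

    import codecs

    def reduce_bytes(obj):
        # Unpickling calls codecs.encode(text, 'latin-1'): on Python 3
        # that returns bytes, on Python 2 a str -- the closest equivalent.
        return (codecs.encode, (obj.decode('latin-1'), 'latin-1'))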
sbt added the comment:
> - apparently you forgot to add BuiltinFunctionPropertiesTest in
> test_main()?
Yes. Fixed.
> - a static method keeps a reference to the type: I think it's ok, although
> I'm not sure about the consequences (Guido, would you have an idea?)
sbt added the comment:
> Ok, a couple of further (minor) issues:
> - I don't think AssertionError is the right exception type. TypeError
> should be used when a type mismatches (e.g. "not an unicode object");
> - you don't need to check for d_type being NULL, s
sbt added the comment:
Patch which adds __qualname__ to builtin_function_or_method. Note that I had to
make a builtin staticmethod have __self__ be the type instead of None.
--
Added file: http://bugs.python.org/file23926/method_qualname.patch
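With the patch, the intended behaviour is along these lines (illustrative):

    >>> len.__qualname__          # plain builtin function
    'len'
    >>> [].append.__qualname__    # bound method of a builtin type
    'list.append'
    >>> dict.fromkeys.__qualname__
    'dict.fromkeys'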
sbt added the comment:
New version of the patch with tests and using _Py_IDENTIFIER.
--
Added file: http://bugs.python.org/file23922/descr_qualname.patch
sbt added the comment:
> Note that extension (non-builtin) types will need to have their
> __qualname__ fixed before their methods' __qualname__ is usable:
>
> >>> collections.deque.__qualname__
> 'deque'
I'm confused. Isn't that the expected
sbt added the comment:
> I don't really know that much about pickle, but Antoine mentioned that
> 'bytearray'
> works fine going from 3.2 to 2.7. Given that, can't we just compose 'bytes'
> with
> 'bytearray'?
Yes, although it would
sbt added the comment:
Updated patch which fixes test.test_sys.SizeofTest. (It also adds __qualname__
to member descriptors and getset descriptors.)
--
Added file: http://bugs.python.org/file23914/descr_qualname.patch
sbt added the comment:
I already have a patch for the descriptor types which lazily calculates the
__qualname__. However test.test_sys also needs fixing because it tests that
these types have expected sizes.
I have not got round to builtin_function_or_method though.
--
keywords
sbt added the comment:
> I suggest that array.array be changed in Python 2 to allow unicode strings
> as a typecode or that pickle detects array.array being called and fixes
> the call.
Interestingly, py3 does understand arrays pickled by py2. This appears to be
because py2 pi
sbt added the comment:
> sbt, the bug is not that the encoding is inefficient. The problem is we
> cannot unpickle bytes streams from Python 3 using Python 2.
Ah. Well you can do it using codecs.encode.
Python 3.3.0a0 (default, Dec 8 2011, 17:56:13) [MSC v.1500 32 bit (Intel)] on
New submission from sbt :
If you pickle an array object on Python 3 the typecode is encoded as a unicode
string rather than as a byte string. This makes Python 2 reject the pickle.
#
Python 3.3.0a0 (default, Dec 8 2011, 17:56:13) [MSC v.1500 32 bit
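The incompatibility can be demonstrated like so (run under Python 3; a sketch of the report):

    import array, pickle, pickletools

    data = pickle.dumps(array.array('i', [1, 2, 3]), protocol=2)
    pickletools.dis(data)   # the typecode 'i' appears as a unicode
                            # string, which Python 2's array.array()
                            # rejects when the stream is loaded there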