sbt added the comment:
It looks like Issue 3657 is really about builtin methods (i.e.
builtin_function_or_method objects where __self__ is not a module). It causes
no problem for normal Python instance methods.
If we tried the getattr approach for builtin methods too then it should only be
sbt added the comment:
> One *dirty* trick I am thinking about would be to use something like
> array.tostring() to construct the byte string.
array('B', ...) objects are pickled using two bytes per character, so there
would be no advantage:
>>> pickle.dumps(arra
New submission from sbt :
The attached patch makes pickle use an object's __qualname__ attribute if
__name__ does not work.
This makes nested classes, unbound instance methods and static methods
picklable (assuming that __module__ and __qualname__ give the correct
"address").
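For reference, this is essentially what later CPython releases do: with pickle protocol 4 (added in 3.4, the default since 3.8) the class is located by its dotted __qualname__, so nested classes round-trip. A minimal illustration (the class names here are made up):

```python
import pickle

class Outer:
    class Inner:
        def __init__(self, value):
            self.value = value

# The class is recorded as __module__ plus __qualname__
# ("__main__.Outer.Inner"), so unpickling can find it again.
restored = pickle.loads(pickle.dumps(Outer.Inner(42)))
assert restored.value == 42
```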
sbt added the comment:
For builtin_function_or_method it seems obj.__qualname__ should be
obj.__self__.__qualname__ + '.' + obj.__name__
--
___
Python tracker
<http://bugs.python.o
sbt added the comment:
There are some callables which are missing __qualname__:
method_descriptor
wrapper_descriptor
builtin_function_or_method
For the descriptors, at least, obj.__qualname__ should be equivalent to
obj.__objclass__.__qualname__ + '.' + obj.__name__
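These descriptor types did later grow a __qualname__ attribute, and the proposed equivalence holds for them; a quick check (using list.append as a method_descriptor and object.__str__ as a wrapper_descriptor):

```python
# method_descriptor: unbound method of a builtin type
md = list.append
assert md.__qualname__ == md.__objclass__.__qualname__ + '.' + md.__name__

# wrapper_descriptor: slot wrapper such as object.__str__
wd = object.__str__
assert wd.__qualname__ == wd.__objclass__.__qualname__ + '.' + wd.__name__
```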
sbt added the comment:
Is it intended that pickle will use __qualname__?
--
nosy: +sbt
___
Python tracker
<http://bugs.python.org/issue13448>
___
sbt added the comment:
> It seems to me that ERROR_OPERATION_ABORTED is a "true" error, and so
> should raise an exception.
I guess so, although we do expect it whenever poll() times out. What exception
would be appropriate? BlockingIOErr
sbt added the comment:
> I have the feeling that if we have to call GetLastError() at the
> Python level, then there's something wrong with the APIs we're
> exposing from the C extension.
> I see you check for ERROR_OPERATION_ABORTED. Is there any situation
> where th
sbt added the comment:
Here is an updated patch (pipe_poll_fix.patch) which should be applied on top
of sigint_event.patch.
It fixes the problems with PipeConnection.poll() and Queue.empty() and makes
PipeListener.accept() use overlapped I/O. This should make all the pipe
related blocking
sbt added the comment:
> Thanks. Who should I credit? "sbt"?
Yeah, thanks.
--
___
Python tracker
<http://bugs.python.org/issue13322>
___
sbt added the comment:
> Thanks again. Just a nit: the tests should be in MiscIOTest, since
> they don't directly instantiate the individual classes. Also, perhaps
> it would be nice to check that the exception's "errno" attribute is
> EAGAIN.
Do
sbt added the comment:
Here is an updated patch which uses the real errno.
It also gets rid of the restore_pos argument of
_bufferedwriter_flush_unlocked() which is always set to false --
I guess buffered_flush_and_rewind_unlocked() is used instead.
--
Added file: http
sbt added the comment:
> Well, the sentinels argument, right now, is meant to be used
> internally. I don't think it's a good thing to document it,
> since I don't think it's a very clean API (I know, I introduced
> it :-))
Wouldn't a better alternative
sbt added the comment:
I notice that the patch changes rename() and link() to use
win32_decode_filename() to coerce the filename to unicode before using
the "wide" win32 api. (Previously, rename() first tried the wide api,
falling back to narrow if that failed; link() used wide i
sbt added the comment:
> Ouch. Were they only non-blocking codepaths?
Yes.
> raw_pos is the position which the underlying raw stream is currently
> at. It only needs to be modified when a successful write(), read()
> or seek() is done on the raw stream.
Do you mean self->raw_
sbt added the comment:
> Functions like os.execv() or os.readlink() are not deprecated because
> the underlying C function really uses a bytes API (execv and readlink).
Probably os.execv() should be implemented on Windows with _wexecv() instead of
_execv(). Likewise for other fun
sbt added the comment:
Testing the patch a bit more thoroughly, I found that data received from the
readable end of the pipe can be corrupted by the C implementation. This seems
to be because two of the previously dormant codepaths did not properly maintain
the necessary invariants.
I got
sbt added the comment:
The attached patch makes BufferedWriter.write() raise BlockingIOError when the
raw file is non-blocking and the write would block.
--
keywords: +patch
Added file: http://bugs.python.org/file23613/write_blockingioerror.patch
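The behaviour the patch proposes matches what current CPython does; a sketch of triggering it on a POSIX pipe (this assumes the pipe's kernel buffer eventually fills, typically at 64 KiB):

```python
import errno
import fcntl
import io
import os

r, w = os.pipe()
# put the write end into non-blocking mode
flags = fcntl.fcntl(w, fcntl.F_GETFL)
fcntl.fcntl(w, fcntl.F_SETFL, flags | os.O_NONBLOCK)

buf = io.BufferedWriter(io.FileIO(w, "wb"))
caught = None
try:
    while True:
        buf.write(b"x" * 4096)  # eventually the raw write would block
except BlockingIOError as e:
    caught = e

# characters_written reports how much of the data was consumed
assert caught is not None and caught.errno == errno.EAGAIN
```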
sbt added the comment:
> Another possibility would be that, since lines are usually reasonably
> sized, they should fit in the buffer (which is 8KB by default). So we
> could do the extra effort of buffering the data and return it once the
> line is complete: if the buffer fills
sbt added the comment:
Currently a BlockingIOError exception raised by flush() sets
characters_written to the number of bytes flushed from the internal
buffer. This is undocumented (although there is a unit test which tests
for it) and causes confusion because characters_written has conflicting
sbt added the comment:
The third arg of BlockingIOError is used in two quite different ways.
In write(s) it indicates the number of bytes of s which have been "consumed"
(ie written to the raw file or buffered).
But in flush() and flush_unlocked() (in _pyio) it indicates the numbe
sbt added the comment:
> But what about the buggy readline() behaviour?
Just tell people that if the return value is a string which does not end in
'\n' then it might be caused by EOF or EAGAIN. They can just call readline()
again t
sbt added the comment:
No one has suggested raising BlockingIOError and DISCARDING the data when a
partial read has occurred. The docs seem to imply that the partially read data
should be returned since they only say that BlockingIOError should be raised if
there is NOTHING to read
sbt added the comment:
Weirdly, it looks like BlockingIOError is not raised anywhere in the code for the C
implementation of io.
Even more weirdly, in the Python implementation of io, BlockingIOError is only
ever raised by except clauses which have already caught BlockingIOError. So,
of course
sbt added the comment:
BufferedReader.readinto() should also raise BlockingIOError according to the
docs. Updated unittest checks for that also.
BTW, The documentation for BufferedIOBase.read() says that BlockingIOError
should be raised if nothing can be read in non-blocking mode
Changes by sbt :
--
type: -> behavior
versions: +Python 2.6, Python 2.7, Python 3.1, Python 3.2, Python 3.3, Python
3.4
___
Python tracker
<http://bugs.python.org/issu
New submission from sbt :
According to the documentation, BufferedReader.read() and
BufferedWriter.write() should raise io.BlockingIOError if the file is in
non-blocking mode and the operation cannot succeed without blocking.
However, BufferedReader.read() returns None (which is what
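At the raw layer the None return is documented behaviour: FileIO.read() returns None on EAGAIN instead of raising. A small demonstration on a POSIX pipe:

```python
import fcntl
import io
import os

r, w = os.pipe()
flags = fcntl.fcntl(r, fcntl.F_GETFL)
fcntl.fcntl(r, fcntl.F_SETFL, flags | os.O_NONBLOCK)

raw = io.FileIO(r, "rb")
# nothing has been written yet, so a non-blocking read has no data
assert raw.read(10) is None
```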
sbt added the comment:
Small fix to patch.
--
Added file: http://bugs.python.org/file23142/mp_fork_exec.patch
___
Python tracker
<http://bugs.python.org/issue8
Changes by sbt :
Removed file: http://bugs.python.org/file23141/mp_fork_exec.patch
___
Python tracker
<http://bugs.python.org/issue8713>
___
sbt added the comment:
Here is a patch which adds the following functions:
forking_disable()
forking_enable()
forking_is_enabled()
set_semaphore_prefix()
get_semaphore_prefix()
To create child processes using fork+exec on Unix, call
forking_disable() at the beginning of the program
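For context, this proposal eventually landed in Python 3.4 as the start-method API rather than these exact functions; the modern way to opt out of plain fork() is to pick a different start method:

```python
import multiprocessing as mp

# 'spawn' launches each child via a fresh interpreter instead of fork;
# which methods are available varies by platform.
methods = mp.get_all_start_methods()
assert "spawn" in methods
ctx = mp.get_context("spawn")  # use ctx.Process, ctx.Queue, etc.
```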
sbt added the comment:
You are not doing anything to stop the file object being garbage collected (and
therefore closed) before the mmap is created. Try
import os
import mmap
f = open(filename, "r+b")
size = os.path.getsize(filename)
data = mmap.mmap(f.fileno(), size)
--
sbt added the comment:
> This shouldn't be a problem in Python 3.3, where the Connection classes
> are reimplemented in pure Python.
What should not be a problem?
Changes to the implementation of Connection won't affect whether Queue.put()
raises an error immediate
sbt added the comment:
multiprocessing.util already has register_after_fork() which it uses for
cleaning up certain things when a new process (launched by multiprocessing) is
starting. This is very similar to the proposed atfork mechanism.
Multiprocessing assumes that it is always safe to
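The register_after_fork() hook mentioned here is internal but easy to demonstrate; a sketch using the fork start method on Unix (the Token class and the pipe are just for illustration):

```python
import multiprocessing as mp
import os
from multiprocessing.util import register_after_fork

r, w = os.pipe()

class Token:
    pass

token = Token()
token.fd = w

# func(obj) runs in each multiprocessing child shortly after fork
register_after_fork(token, lambda t: os.write(t.fd, b"!"))

ctx = mp.get_context("fork")
p = ctx.Process(target=lambda: None)
p.start()
p.join()
assert os.read(r, 1) == b"!"  # the hook ran in the child
```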
sbt added the comment:
I meant Issue 6721 (Locks in python standard library should be sanitized on
fork) not 6271.
--
___
Python tracker
<http://bugs.python.org/issue8
sbt added the comment:
Modifying an object which is already on a traditional queue can also change
what is received by the other thread (depending on timing). So Queue.Queue's
put() is not "atomic" either. Therefore I do not believe this behaviour is a
bug.
However the so
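The same effect is easy to see with a plain queue.Queue, since put() stores a reference rather than a snapshot:

```python
import queue

q = queue.Queue()
item = [1]
q.put(item)     # the queue holds a reference, not a copy
item.append(2)  # mutation after put() is visible to the consumer
assert q.get() == [1, 2]
```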
sbt added the comment:
mp_queue_pickle_in_main_thread.patch (against the default branch) fixes the
problem by doing the pickling in Queue.put(). It is a version of a patch for
Issue 8037 (although I believe the behaviour complained about in Issue 8037 is
not an actual bug).
The patch also
sbt added the comment:
Buffer objects *are* picklable with protocol 2 (but not with earlier
protocols). Unfortunately, the resulting pickle cannot be loaded.
This is not a problem with multiprocessing. (buffer seems to inherit
__reduce__ and __reduce_ex__ from object.)
Python 2.7.1+ (r271:86832
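For comparison, the Python 3 replacement, memoryview, refuses to pickle at all rather than producing an unloadable pickle:

```python
import pickle

err = None
try:
    pickle.dumps(memoryview(b"data"))
except TypeError as e:
    err = e
assert err is not None  # memoryview objects are not picklable
```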
sbt added the comment:
I have noticed a few more problems.
* Because poll() isn't thread safe on Windows, neither is Queue.empty(). Since
a queue's pipe will never contain empty messages, this can be fixed easily by
using (a wrapper for) win32.PeekNamedPipe().
* PipeListener/
sbt added the comment:
sigint_event.patch is a patch to make
_multiprocessing.win32.WaitForMultipleObjects interruptible. It
applies directly on to default.
The patch also adds functions _PyOS_SigintEvent and _PyOS_IsMainThread
which are implemented in signalmodule.c and declared in
sbt added the comment:
> You are right, we need a manual reset *or* we must ensure that every
> user of _PyOS_SigintEvent only does so from the main thread.
On second thoughts, even using an auto-reset event, resetting the event before
waiting is unavoidable. Otherwise you are lia
sbt added the comment:
> Hmm, it seems to me that it should be done in _poll() instead.
> Otherwise, recv() will not be interruptible, will it?
Or maybe WaitForMultipleObjects() should be changed to also wait on
sigint_event if called by the main thread.
> Also, after looking at t
New submission from sbt :
multiprocessing.util._eintr_retry is only used to wrap select.select, but it
fails to recalculate timeouts.
Also, it will never retry the function it wraps because of a missing "import
errno".
I think it would be better to just implement the retrying
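A corrected wrapper along those lines would recompute the remaining timeout on each retry; a sketch of the idea, not the actual patch (on modern Python, PEP 475 retries EINTR automatically, making this largely historical):

```python
import errno
import os
import select
import time

def eintr_retry_select(rlist, wlist, xlist, timeout=None):
    """Retry select() on EINTR, shrinking the timeout as time passes."""
    deadline = None if timeout is None else time.monotonic() + timeout
    while True:
        try:
            return select.select(rlist, wlist, xlist, timeout)
        except OSError as e:
            if e.errno != errno.EINTR:
                raise
            if deadline is not None:
                timeout = max(deadline - time.monotonic(), 0.0)

# demo: a readable fd is reported immediately
r, w = os.pipe()
os.write(w, b"x")
rlist, _, _ = eintr_retry_select([r], [], [], 1.0)
assert rlist == [r]
```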
sbt added the comment:
pipe_interruptible.patch is a patch to support to making poll() interruptible.
It applies on top of pipe_poll_2.patch.
I am not sure what the guarantees are for when KeyboardInterrupt will be raised.
I would have done it a bit differently if I knew a good way to test
sbt added the comment:
> Also, what is the rationale for the following change:
>
> -elif timeout == 0.0:
> +elif timeout == 0.0 and nleft != 0:
> return False
If PeekNamedPipe() returns (navail, nleft) there are 3 cases:
1) navail &
sbt added the comment:
The attached patch hopefully fixes problems (1)-(5), but I have never used
overlapped I/O before. test_pipe_poll.py passes with these changes.
--
keywords: +patch
Added file: http://bugs.python.org/file22350/pipe_poll.patch
New submission from sbt :
There are some problems with the new Windows overlapped implementation
of PipeConnection in the default branch.
1) poll(0) can return False when an empty string is in the pipe: if
the next message in the pipe is b"" then PeekNamedPipe() returns
(0, 0
sbt added the comment:
Although Windows fds are not inheritable, the handles associated with fds can
be made inheritable.
A workaround for the fact fds are not inheritable is the following pattern:
1) The parent process converts the fd to a handle using _get_osfhandle(fd).
2) The parent
sbt added the comment:
krisvale wrote:
So, I suggest a change in the comments: Do not claim that the value is never
an underestimate, and explain how falsely returning a WAIT_TIMEOUT is safe and
only occurs when the lock is heavily contended.
Sorry for being so nitpicky but having this
Changes by sbt :
Removed file: http://bugs.python.org/file21335/locktimeout3.patch
___
Python tracker
<http://bugs.python.org/issue11618>
___
sbt added the comment:
krisvale wrote:
There is no barrier in use on the read part. I realize that this is a subtle
point, but in fact, the atomic functions make no memory barrier guarantees
either (I think). And even if they did, you are not using a memory barrier
when you read the
sbt added the comment:
sbt wrote:
I see your point. Still, I think we still may have a flaw: The statement that
(owned-timeouts) is never an under-estimate isn't true on modern architectures,
I think. The order of the atomic decrement operations in the code means
nothing and c
sbt added the comment:
> Btw, the locktimeout.patch appears to have a race condition.
> LeaveNonRecursiveMutex may SetEvent when there is no thread waiting
> (because a timeout just occurred, but the thread on which it happened
> is still somewhere around line #62 ). This wi
sbt added the comment:
Benchmarks (on an old laptop running XP without a VM) doing
D:\Repos\cpython\PCbuild>python -m timeit -s "from threading import Lock; l =
Lock()" "l.acquire(); l.release()"
100 loops, best of 3: 0.934 usec per loop
default:
sbt added the comment:
> If we are rolling our own instead of using Semaphores (as has been
> suggested for performance reasons) then using a Condition variable is
> IMHO safer than a custom solution because the correctness of that
> approach is so easily provable.
Assuming th
sbt added the comment:
Have you tried benchmarking it?
Interlocked functions are *much* faster than Win32 mutex/semaphores in the
uncontended case.
It only doubles the time taken for a "l.acquire(); l.release()" loop in Python
code, but at the C level it is probably 10 times slo
sbt added the comment:
First stab at a fix.
Gets rid of mutex->thread_id and adds a mutex->timeouts counter.
Does not try to prevent mutex->owned from overflowing.
When no timeouts have occurred I don't think it changes behaviour, and it uses
the same number of Interlo
New submission from sbt :
In thread_nt.h, when the WaitForSingleObject() call in
EnterNonRecursiveMutex() fails with WAIT_TIMEOUT (or WAIT_FAILED) the
mutex is left in an inconsistent state.
Note that the first line of EnterNonRecursiveMutex() is the comment
/* Assume that the thread waits