Richard Oudkerk added the comment:
I can't remember why I did not use fstat() -- probably it did not occur to me.
--
___
Python tracker
<http://bugs.python.org/is
Richard Oudkerk added the comment:
Updated version of the patch. Still needs docs.
--
Added file: http://bugs.python.org/file35902/memoryview-array-value.patch
Python tracker
<http://bugs.python.org/issue14
Changes by Richard Oudkerk :
--
assignee: -> sbt
Python tracker
<http://bugs.python.org/issue21664>
Changes by Richard Oudkerk :
--
assignee: -> sbt
Python tracker
<http://bugs.python.org/issue21779>
Richard Oudkerk added the comment:
Since there are no new features added to Python 2, this would be a Python 3
only feature.
I think for Python 3 it is better to concentrate on developing
concurrent.futures rather than multiprocessing.Pool.
Changes by Richard Oudkerk :
--
assignee: -> sbt
Python tracker
<http://bugs.python.org/issue10850>
Richard Kiss added the comment:
I reread more carefully, and I am in agreement now that I better understand
what's going on. Thanks for your patience.
--
nosy: +Richard.Kiss
Richard Kiss added the comment:
The more I use asyncio, the more I am convinced that the correct fix is to keep
a strong reference to a pending task (perhaps in a set in the event loop) until
it starts.
Without realizing it, I implicitly made this assumption when I began working on
my asyncio
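The workaround being described can be sketched roughly as follows (a minimal sketch; the function name and the module-level set are hypothetical, standing in for "a set in the event loop"):

```python
import asyncio

# Hypothetical sketch: hold strong references to pending tasks in a
# module-level set so they cannot be garbage collected before running.
_pending = set()

def create_task_keepalive(loop, coro):
    task = loop.create_task(coro)
    _pending.add(task)                        # strong reference
    task.add_done_callback(_pending.discard)  # drop it once finished
    return task
```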
Richard Oudkerk added the comment:
register_after_fork() is intentionally undocumented and for internal use.
It is only run when starting a new process using the "fork" start method
whether on Windows or not -- the "fork" in its name is a hint.
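A rough sketch of how the hook is used (the `Resource` class is hypothetical; `register_after_fork(obj, func)` arranges for `func(obj)` to run in each child started with the "fork" start method, and is an undocumented internal API):

```python
import os
from multiprocessing import util

class Resource:
    def __init__(self):
        self.owner_pid = os.getpid()
        # Undocumented internal hook: func(obj) runs in forked children.
        util.register_after_fork(self, Resource._after_fork)

    def _after_fork(self):
        # Runs in the child: record the child's pid instead of the parent's.
        self.owner_pid = os.getpid()
```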
--
resolution:
Changes by Richard Oudkerk :
--
assignee: -> sbt
Python tracker
<http://bugs.python.org/issue20147>
New submission from Richard s. Gordon:
Curses bug report for Python 2.7 and Python 3.2
My Python code outputs text properly with xterm and xterm-16color. It does not
work properly with xterm-88color and xterm-256color (after defining an RGB
color palette) when run on Python-2.7 and Python-3.2 on
Richard Marko added the comment:
Would be nice to have this committed, as without this change
-        if self.quitting:
-            return # None
+        if not self.botframe:
+            self.botframe = frame
it's not possible to quit Bdb (and the code it's executing) as it ju
Richard Oudkerk added the comment:
If you use the short timeouts to make the wait interruptible then you can
use WaitForMultipleObjects (which automatically waits on an extra event
object) instead of WaitForSingleObject.
--
Richard Kiss added the comment:
For a reason that I don't understand, this patch to asyncio fixes the problem:
--- a/asyncio/tasks.py Mon Mar 31 11:31:16 2014 -0700
+++ b/asyncio/tasks.py Sat Apr 12 20:37:02 2014 -0700
@@ -49,7 +49,8 @@
     def __next__(self):
         return next(sel
New submission from Richard Kiss:
import asyncio
import os

def t1(q):
    yield from asyncio.sleep(0.5)
    q.put_nowait((0, 1, 2, 3, 4, 5))

def t2(q):
    v = yield from q.get()
    print(v)

q = asyncio.Queue()
asyncio.get_event_loop().run_until_complete(asyncio.wait([t1(q), t2(q)]))
When
Richard Oudkerk added the comment:
I added some comments.
Your problem with lost data may be caused by the fact you call ov.cancel() and
expect ov.pending to tell you whether the write has/will succeed. Instead you
should use ov.getresult() and expect either success or an "aborted"
Richard Oudkerk added the comment:
Can you explain why you write in 512-byte chunks? Writing in one chunk should
not cause a deadlock.
--
Python tracker
<http://bugs.python.org/issue1191
Richard Oudkerk added the comment:
Could you try pickling and unpickling the result of func():
import cPickle
data = cPickle.dumps(func([1,2,3]), -1)
print cPickle.loads(data)
--
Python tracker
<http://bugs.python.org/issue21
Richard Oudkerk added the comment:
Ah, I misunderstood: you meant that it freezes/hangs, not that you used a
freeze tool.
--
Python tracker
<http://bugs.python.org/issue21
Richard Oudkerk added the comment:
I would guess that the problem is simply that LogisticRegression objects are
not picklable. Does the problem still occur if you do not use freeze?
--
Python tracker
<http://bugs.python.org/issue21
Richard Kiss added the comment:
You were right: adding a strong reference to each Task seems to have solved the
original problem in pycoinnet. I see that the global set of tasks in
asyncio.tasks is a WeakSet, so it's necessary to keep a strong reference myself.
This does s
Richard Kiss added the comment:
I'll investigate further.
--
Python tracker
<http://bugs.python.org/issue21163>
Richard Kiss added the comment:
I agree it's confusing and I apologize for that.
Background:
This multiplexing pattern is used in pycoinnet, a bitcoin client I'm developing
at <https://github.com/richardkiss/pycoinnet>. The BitcoinPeerProtocol class
multiplexes protoc
Changes by Richard Kiss :
--
title: asyncio Task Possibly Incorrectly Garbage Collected -> asyncio task
possibly incorrectly garbage collected
Changes by Richard Kiss :
--
hgrepos: -231
Python tracker
<http://bugs.python.org/issue21163>
New submission from Richard Kiss:
Some tasks created via asyncio are vanishing because there is no reference to
their resultant futures.
This behaviour does not occur in Python 3.3.3 with asyncio-0.4.1.
Also, doing a gc.collect() immediately after creating the tasks seems to fix
the problem
Richard Oudkerk added the comment:
I would recommend using _overlapped instead of _winapi.
I intend to move multiprocessing over in future.
Also note that you can do nonblocking reads by starting an overlapped read
then cancelling it immediately if it fails with "incomplete". You
Richard Oudkerk added the comment:
Using truncate() to zero extend is not really portable: it is only guaranteed
on XSI-compliant POSIX systems.
Also, the FreeBSD man page for mmap() has the following warning:
WARNING! Extending a file with ftruncate(2), thus creating a big
hole, and then
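The zero-extension in question can be sketched like this (a minimal sketch: on XSI-compliant POSIX systems ftruncate() past EOF zero-fills the new region, but as noted above this is not guaranteed everywhere, and FreeBSD warns about mapping over the resulting hole):

```python
import mmap
import os
import tempfile

def zero_extend_and_map(path, size):
    # truncate() past EOF creates a (zero-filled) hole on XSI-compliant
    # POSIX systems; portability of this is exactly what is at issue here.
    with open(path, 'r+b') as f:
        f.truncate(size)
        return mmap.mmap(f.fileno(), size)  # mmap dups the fd internally
```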
Richard Oudkerk added the comment:
Using asyncio and the IOCP eventloop it is not necessary to use threads.
(Windows may use worker threads for overlapped IO, but that is hidden from
Python.) See
https://code.google.com/p/tulip/source/browse/examples/child_process.py
for vaguely "e
Richard Oudkerk added the comment:
No, the argument will not go away now.
However, I don't much like the API which is perhaps why I did not get round to
documenting it.
It does have tests. Currently 'xmlrpclib' is the only supported alternative,
but JSON support could be add
Richard Oudkerk added the comment:
Testing is_forking() requires cx_freeze or something similar, so it really
cannot go in the test suite.
I have tested it manually (after spending too long trying to get cx_freeze to
work with a source build).
It should be noted that on Unix freezing is
Changes by Richard Oudkerk :
--
resolution: -> fixed
stage: test needed -> committed/rejected
status: open -> closed
Changes by Richard Oudkerk :
--
resolution: -> fixed
stage: -> committed/rejected
status: open -> closed
Richard Oudkerk added the comment:
For reasons we all know unpickling unauthenticated data received over TCP is
very risky. Sending an unencrypted authentication key (as part of a pickle)
over TCP would make the authentication useless.
When a proxy is pickled the authkey is deliberately
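The point about not sending the key can be illustrated with a challenge/response sketch (hedged: this approximates the HMAC-based scheme multiprocessing.connection uses, with hypothetical function names; the authkey itself never crosses the wire):

```python
import hmac
import os

def make_challenge():
    # Server sends random bytes to the client.
    return os.urandom(20)

def answer(authkey, challenge):
    # Client proves knowledge of authkey without transmitting it.
    return hmac.new(authkey, challenge, 'md5').digest()

def verify(authkey, challenge, response):
    return hmac.compare_digest(answer(authkey, challenge), response)
```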
Richard Oudkerk added the comment:
We should only wrap the exception with ExceptionWithTraceback in the process
case where it will be pickled and then unpickled.
--
assignee: -> sbt
Changes by Richard Oudkerk :
--
assignee: -> sbt
Python tracker
<http://bugs.python.org/issue20990>
Richard Fothergill added the comment:
I'm getting these results on both:
Python 3.2.3 (default, Apr 10 2013, 06:11:55)
[GCC 4.6.3] on linux2
and
Python 2.7.3 (default, Apr 10 2013, 06:20:15)
[GCC 4.6.3] on linux2
The symptoms are exactly as Terrence described.
Nesting proxied containe
Richard Oudkerk added the comment:
I am not sure method_to_typeid and create_method were really intended to be
public -- they are only used by Pool proxies.
You can maybe work around the problem by registering a second typeid without
specifying callable. That can be used in method_to_typeid
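A hypothetical sketch of that workaround (all class and typeid names are made up; the second typeid is registered with no callable and used only as the return type in method_to_typeid; the fork context is assumed so the example stays Unix-only and self-contained):

```python
import multiprocessing
from multiprocessing.managers import BaseManager

class Node:
    def __init__(self, value=0):
        self.value = value
    def child(self):
        return Node(self.value + 1)
    def get(self):
        return self.value

class MyManager(BaseManager):
    pass

# Normal registration: calling child() returns a 'NodeProxy'-typed proxy.
MyManager.register('Node', Node, method_to_typeid={'child': 'NodeProxy'})
# Second typeid with no callable: only used for return values, so no
# m.NodeProxy() constructor method is created.
MyManager.register('NodeProxy', None, create_method=False,
                   method_to_typeid={'child': 'NodeProxy'})
```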
Changes by Richard Oudkerk :
--
assignee: -> sbt
Python tracker
<http://bugs.python.org/issue20633>
Changes by Richard Oudkerk :
--
assignee: -> sbt
Python tracker
<http://bugs.python.org/issue7503>
Richard Oudkerk added the comment:
> Thanks Richard. The set_start_method() call will affect any process
> started from that time on? Is it possible to change idea at some point in
> the future?
You can use different start methods in the same program by creating different
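A minimal sketch of the per-context approach (assumes Python 3.4+; get_context() returns an object with the same API as the multiprocessing module but a fixed start method):

```python
import multiprocessing as mp

# Each context is independent, so different parts of a program can use
# different start methods without touching the global default.
spawn_ctx = mp.get_context('spawn')
q = spawn_ctx.Queue()
q.put(1)
```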
Richard Oudkerk added the comment:
On Unix, using the fork start method (which was the only option till 3.4),
every sub process will incref every shared object for which its parent has a
reference.
This is deliberate because there is not really any way to know which shared
objects a
Richard Oudkerk added the comment:
BTW, I see little difference between 3.2 and the unpatched default branch on
MacOSX:
$ py-32/release/python.exe ~/Downloads/test_manager.py
0.0007331371307373047
8.20159912109375e-05
9.417533874511719e-05
8.082389831542969e-05
7.796287536621094e-05
Richard Oudkerk added the comment:
LGTM
--
Python tracker
<http://bugs.python.org/issue20540>
Richard Oudkerk added the comment:
This is expected. Killing processes which use shared locks is never going to
end well. Even without the lock deadlock, the data in the pipe would be liable
to be corrupted if a process is killed while putting or getting from a queue.
If you want to be
Richard Oudkerk added the comment:
_overlapped is linked against the socket library whereas _winapi is not so
it can be bundled in with python3.dll.
I did intend to switch multiprocessing over to using _overlapped but I did
not get round to it.
Since this is a private module the names of
New submission from Richard Philips:
The reference to the pysqlite web page on:
http://docs.python.org/3.4/library/sqlite3.html
should be:
https://github.com/ghaering/pysqlite
--
assignee: docs@python
components: Documentation
messages: 208261
nosy: Richard.Philips, docs
Richard Oudkerk added the comment:
The following from the docs is wrong:
> ... module globals are no longer forced to None during interpreter
> shutdown.
Actually, in 3.4 module globals *sometimes* get forced to None during
interpreter shutdown, so the version the __del__ method can
Richard Oudkerk added the comment:
It is probably harmless then.
I don't think increasing the timeout is necessary -- the multiprocessing tests
already take a long time.
--
Richard Oudkerk added the comment:
How often has this happened?
If the machine was very loaded then maybe the timeout was not enough time for
the semaphore to be cleaned up by the tracker process. But I would expect 1
second to be more than ample
Richard Oudkerk added the comment:
On 19/12/2013 10:00 pm, Nick Coghlan wrote:
> I think that needs to be fixed on the multiprocessing side rather than just
> in the tests - we shouldn't create a concrete context for a start method
> that isn't going to work on that platform
Richard Oudkerk added the comment:
Thanks for your hard work Nick!
--
Python tracker
<http://bugs.python.org/issue19946>
Richard Oudkerk added the comment:
> I appear to be somehow getting child processes where __main__.__file__ is
> set, but __main__.__spec__ is not.
That seems to be true for the __main__ module even when multiprocessing is not
involved. Running a file /tmp/foo.py containing
impo
Richard Oudkerk added the comment:
So there are really two situations:
1) The __main__ module *should not* be imported. This is the case if you use
__main__.py in a package or if you use nose to call test_main().
This should really be detected in get_preparation_data() in the parent process
Richard Oudkerk added the comment:
I guess this is a case where we should not be trying to import the main module.
The code for determining the path of the main module (if any) is rather crufty.
What is sys.modules['__main__'] and sys.modules['__main__'].__file__ if
New submission from Richard Milne:
Reading the pkzip APPNOTE and the documentation for the zipfile module, I was
under the impression that I could set the DEFLATE compression level, on a
per-file basis, for each file added to an archive, by setting the appropriate
bits in zipinfo.flag_bits
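For reference, a sketch of how per-file DEFLATE levels are actually set in later Python versions (assumes Python 3.7+, where writestr() grew a compresslevel argument; toggling zipinfo.flag_bits does not change the level):

```python
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    # Per-file compression level via the compresslevel argument.
    zf.writestr('fast.txt', b'abc' * 1000,
                compress_type=zipfile.ZIP_DEFLATED, compresslevel=1)
    zf.writestr('best.txt', b'abc' * 1000,
                compress_type=zipfile.ZIP_DEFLATED, compresslevel=9)
```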
Richard Oudkerk added the comment:
From what I remember a proxy method will be thread/process-safe if the
referent's corresponding method is thread safe.
It should certainly be documented that the exposed methods of a proxied object
should be
Richard Oudkerk added the comment:
> I've always had an implicit understanding that calls with timeouts may,
> for whatever reason, return sooner than requested (or later!), and the
> most careful approach is to re-check the clock again.
I've always had the implicit understa
Richard Oudkerk added the comment:
> Possibly related: ...
That looks unrelated since it does not involve wait_for_handle().
Unfortunately test_utils.run_briefly() offers few guarantees when using the
IOCP event loop.
--
Richard Oudkerk added the comment:
Could you try this patch?
--
keywords: +patch
Added file: http://bugs.python.org/file32822/wait-for-handle.patch
Python tracker
<http://bugs.python.org/issue19
Richard Oudkerk added the comment:
It would be nice to try this on another Vista machine - the WinXP, Win7,
Windows Server 2003 and Windows Server 2008 buildbots don't seem to show this
failure.
It looks as though the TimerOrWaitFired argument passed to the callback
registered
Changes by Richard Oudkerk :
--
resolution: -> fixed
status: open -> closed
Python tracker
<http://bugs.python.org/issue19599>
Richard Oudkerk added the comment:
> If the result of os.read() was stored in a Python daemon thread, the
> memory should be released since the following changeset. Can someone
> check if this issue still exist?
If a daemon thread is killed while it is blocking on os.read() then
Richard Oudkerk added the comment:
I don't think the patch to the _test_multiprocessing will work. It defines
cls._Popen but I don't see how that would be used by cls.Pool to start the
processes.
I will have a think about a fix.
--
Richard Oudkerk added the comment:
Hopefully the applied change will fix the failure (or at least make it much
less likely).
--
resolution: -> fixed
stage: -> committed/rejected
status: open -> closed
type: -> behavior
Richard Oudkerk added the comment:
Thanks for the patches.
Fixed in 7aabbe919f55, 11cafbe6519f.
--
resolution: -> fixed
stage: -> committed/rejected
status: open -> closed
Changes by Richard Oudkerk :
--
resolution: -> fixed
stage: -> committed/rejected
status: open -> closed
type: behavior ->
Richard Oudkerk added the comment:
Note that on Windows if you redirect the standard streams then *all*
inheritable handles are inherited by the child process.
Presumably the handle for f_w file object (and/or a duplicate of it) created in
one thread is accidentally "leaked" to
Richard Oudkerk added the comment:
On 13/11/2013 3:07pm, STINNER Victor wrote:
>> On Vista and later, yes, this is done in the deallocator using
>> CancelIoEx(), although there is still a warning.
>
> I don't understand. The warning is emitted because an operation is no
Richard Oudkerk added the comment:
I think the attached patch should fix it. Note that with the patch the
RuntimeError can probably only occur on Windows XP.
Shall I apply it?
--
keywords: +patch
Added file: http://bugs.python.org/file32597/dealloc-runtimeerror.patch
Richard Oudkerk added the comment:
> As close() on regular files, I would prefer to call explicitly cancel()
> to control exactly when the overlapped operation is cancelled.
If you use daemon threads then you have no guarantee that the thread will ever
get a chance to explicitly call
Richard Oudkerk added the comment:
If you have a pending overlapped operation then the associated buffer should
not be deallocated until that operation is complete, or else you are liable to
get a crash or memory corruption.
Unfortunately WinXP provides no reliable way to cancel a pending
Richard PALO added the comment:
Sure, attached is a simple test found on the internet, compiled with the
following reproduces the problem:
richard@devzone:~/src$ /opt/local/gcc48/bin/g++ -o tp tp.cpp -DSOLARIS
-I/opt/local/include/python2.7 -L/opt/local/lib -lpython2.7
In file included from
Richard PALO added the comment:
I don't believe the problem is a question solely of building the python
sources, but also certain dependent application sources...
I know of at least libreoffice building against python and this problem has
come up.
The workaround was to apply the
New submission from Richard PALO:
I'd like to have reopened this previous issue as it is still very much the case.
I believe as well that the common distros (I can easily verify OpenIndiana and
OmniOS) patch it out (patch file attached).
Upstream/oracle/userland-gate seems to as well.
Richard Oudkerk added the comment:
Fixed by #11161.
--
resolution: -> fixed
stage: -> committed/rejected
status: open -> closed
superseder: -> futures.ProcessPoolExecutor hangs
Richard Oudkerk added the comment:
> So hopefully the bug should disappear entirely in future releases of tcl,
> but for now you can work around it by building tcl without threads,
> calling exec in between the fork and any use of tkinter in the child
> process, or not importing t
Richard Oudkerk added the comment:
This was fixed for 3.3 in #1692335.
The issue of backporting to 2.7 is discussed in #17296.
--
resolution: -> duplicate
status: open -> closed
superseder: -> Cannot unpickle classes derived from 'Exception'
type
Richard Oudkerk added the comment:
Although it is undocumented, in python 3.4 you can control the prefix used by
doing
multiprocessing.current_process()._config['semprefix'] = 'myprefix'
in the main process at the beginning of the program.
Unfortunately, this will
Richard Oudkerk added the comment:
It is a recent kernel and does support pipe2().
After some debugging it appears that a pipe handle created in Popen.__init__()
was being leaked to a forked process, preventing Popen.__init__() from
completing before the forked process did.
Previously the
Richard Oudkerk added the comment:
Given PEP 446 (fds are now CLOEXEC by default) I prepared an updated patch
where the fork lock is undocumented and subprocess no longer uses the fork
lock. (I did not want to encourage the mixing of threads with fork() without
exec() by exposing the fork
Richard Oudkerk added the comment:
This is a test of threading.Barrier rather than anything implemented directly
by multiprocessing.
Tests which involve timeouts tend to be a bit flaky. Increasing the length of
timeouts usually helps, but makes the tests take even longer.
How often have you
Changes by Richard Oudkerk :
--
resolution: -> fixed
stage: needs patch -> committed/rejected
status: open -> closed
New submission from Richard Neill:
It would be really nice if python supported mathematical operations on
dictionaries. This is widely requested (eg lots of stackoverflow queries), but
there's currently no simple way to do it.
I propose that this should work in the "obvious"
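As partial prior art for the request above, collections.Counter already supports some arithmetic on dict-like objects (addition, subtraction, min/max via & and |):

```python
from collections import Counter

a = Counter({'x': 3, 'y': 1})
b = Counter({'x': 1, 'z': 2})
total = a + b        # per-key addition
diff = a - b         # per-key subtraction, dropping non-positive counts
```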
Richard Oudkerk added the comment:
Won't using a prepare handler mean that the parent and child processes will use
the same seed until one or other of them forks again?
--
Richard Oudkerk added the comment:
The following uses socketpair() instead of pipe() for stdin, and works for me
on Linux:
diff -r 7d94e4a68b91 asyncio/unix_events.py
--- a/asyncio/unix_events.py	Sun Oct 20 20:25:04 2013 -0700
+++ b/asyncio/unix_events.py	Mon Oct 21 17:15:19 2013 +0100
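The substitution itself is simple to sketch: socketpair() yields two connected, bidirectional sockets that can stand in for a pipe as a transport endpoint (a minimal illustration, not the patch itself):

```python
import socket

# A connected pair of sockets, usable where a pipe would otherwise be.
a, b = socket.socketpair()
b.sendall(b'ping')
data = a.recv(4)
a.close()
b.close()
```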
Richard Oudkerk added the comment:
> - now that FDs are non-inheritable by default, fork locks around
> subprocess and multiprocessing shouldn't be necessary anymore? What
> other use cases does the fork-lock have?
CLOEXEC fds will still be inherited by forked children.
Richard Oudkerk added the comment:
> Richard, do you have time to get your patch ready for 3.4?
Yes. But we don't seem to have consensus on how to handle exceptions. The
main question is whether a failed prepare callback should prevent the fork from
happening, or just be
Richard Oudkerk added the comment:
> Is this patch still of relevance for asyncio?
No, the _overlapped extension contains the IOCP stuff.
--
Richard Oudkerk added the comment:
Would it make sense to use socketpair() instead of pipe() on AIX? We could
check for the "bug" directly rather than checking specifically for AIX.
--
Richard Oudkerk added the comment:
> I guess we'll have to write platform-dependent code and make this an
> optional feature. (Essentially, on platforms like AIX, for a
> write-pipe, connection_lost() won't be called unless you try to write
> some more bytes to it.)
I
Richard Oudkerk added the comment:
I guess this should be clarified in the docs, but multiprocessing.pool.Pool is
a *class* whose constructor takes a context argument, where as
multiprocessing.Pool() is a *bound method* of the default context. (In
previous versions multiprocessing.Pool was a
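The distinction can be checked directly (a small sketch, assuming Python 3.4+):

```python
import multiprocessing
import multiprocessing.pool

# multiprocessing.Pool is a method of the default context, while
# multiprocessing.pool.Pool is the class whose constructor accepts
# a context argument.
assert not isinstance(multiprocessing.Pool, type)
assert isinstance(multiprocessing.pool.Pool, type)
```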
Changes by Richard Oudkerk :
--
nosy: +sbt
Python tracker
<http://bugs.python.org/issue10015>
Richard Oudkerk added the comment:
Actually, according to strace the call which blocks is
futex(0xb7839454, FUTEX_WAIT_PRIVATE, 1, NULL
--
Python tracker
<http://bugs.python.org/issue19
Richard Oudkerk added the comment:
I finally have a gdb backtrace of a stuck child (started using os.fork() not
multiprocessing):
#1 0xb76194da in ?? () from /lib/libc.so.6
#2 0xb6d59755 in ?? ()
from
/var/lib/buildslave/custom.murray-gentoo/build/build/lib.linux-i686-3.4-pydebug
Richard Oudkerk added the comment:
> I fixed the out of space last night. (Someday I'll get around to figuring
> out which test it is that is leaving a bunch of data around when it fails,
> but I haven't yet).
It looks like on the Debug Gentoo buildbot configure an
Richard Oudkerk added the comment:
I can reproduce the problem on the Non-Debug Gentoo buildbot using only
os.fork() and os.kill(pid, signal.SIGTERM). See
http://hg.python.org/cpython/file/9853d3a20849/Lib/test/_test_multiprocessing.py#l339
To investigate further I think strace and/or
Richard Oudkerk added the comment:
I think at module level you can do

    if sys.platform != 'win32':
        raise unittest.SkipTest('Windows only')
--
Richard Oudkerk added the comment:
On 16/10/2013 8:14pm, Guido van Rossum wrote:
> (2) I get this message -- what does it mean and should I care?
> 2 tests altered the execution environment:
> test_asyncio.test_base_events test_asyncio.test_futures
Perhaps threads from the Threa
Changes by Richard Oudkerk :
--
status: open -> closed
Python tracker
<http://bugs.python.org/issue18999>