Jesse Noller jnol...@gmail.com added the comment:
I've been thinking about this a bit, and I think raising an exception and
returning the number of bytes read makes more sense than just hiding
it/eating the errors. Explicit is better than implicit; in this case, at
least doing this gives the controller a
John Ehresman j...@wingware.com added the comment:
Looking into this a bit more and reading the documentation (sorry, I
picked this up because I know something about win32 and not because I
know multiprocessing), it looks like a connection is supposed to be
message oriented and not byte
John Ehresman j...@wingware.com added the comment:
New patch which raises ValueError if WriteFile fails with
ERROR_NO_SYSTEM_RESOURCES. I wasn't able to reliably write a test, since
putting the send_bytes in a try block seems to allow the call to succeed.
This is probably OS, swap file size, and
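The logic of the patch can be sketched in pure Python. This is an illustrative stand-in, not the actual C code: `fake_write_file` plays the role of the real WriteFile wrapper, and the 32MB limit is an assumption drawn from the message sizes reported in this thread (the constant 1450 is ERROR_NO_SYSTEM_RESOURCES from winerror.h).

```python
ERROR_NO_SYSTEM_RESOURCES = 1450  # winerror.h value for the failing WriteFile

def fake_write_file(buf, limit=32 * 1024 * 1024):
    """Stand-in for WriteFile: fail oversized single writes, as observed on win32."""
    if len(buf) > limit:
        return 0, ERROR_NO_SYSTEM_RESOURCES
    return len(buf), 0

def send_bytes(buf):
    """Surface the OS-level failure as ValueError instead of silently truncating."""
    written, err = fake_write_file(buf)
    if err == ERROR_NO_SYSTEM_RESOURCES:
        raise ValueError("cannot send %d bytes over connection" % len(buf))
    return written
```

The point of the patch is visible here: the caller gets a Python-level error it can catch, rather than a partially written, silently corrupted message.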
Jesse Noller jnol...@gmail.com added the comment:
Patch applied in r71036 on python-trunk
--
resolution:  -> fixed
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue3551
___
John Ehresman j...@wingware.com added the comment:
Attached is a patch, though I have mixed feelings about it. The OS
error can still occur even if a smaller amount is written in each
WriteFile call; I think an internal OS buffer fills up and the error is
returned if that buffer is full because
John Ehresman j...@wingware.com added the comment:
I'll try to work on a patch for this, but the reproduce.py script seems
to spawn dozens of sub-interpreters right now when run with trunk
(python 2.7) on win32
--
nosy: +jpe
Changes by John Ehresman j...@wingware.com:
Added file: http://bugs.python.org/file13493/reproduce.py
Jesse Noller jnol...@gmail.com added the comment:
John, can you try this on trunk:
from multiprocessing import *
latin = str
SENTINEL = latin('')

def _echo(conn):
    for msg in iter(conn.recv_bytes, SENTINEL):
        conn.send_bytes(msg)
    conn.close()

conn, child_conn = Pipe()
p =
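The snippet above is cut off at `p =`. A runnable completion might look like the following; the `Process` wiring and the `roundtrip` helper are assumptions based on the `test_connection` code in `test_multiprocessing`, not the literal rest of Jesse's message.

```python
from multiprocessing import Pipe, Process

SENTINEL = b''

def _echo(conn):
    # Echo each message back until the empty-bytes sentinel arrives.
    for msg in iter(conn.recv_bytes, SENTINEL):
        conn.send_bytes(msg)
    conn.close()

def roundtrip(payload):
    conn, child_conn = Pipe()
    p = Process(target=_echo, args=(child_conn,))
    p.daemon = True  # Process.set_daemon() is gone on trunk; use the attribute
    p.start()
    conn.send_bytes(payload)
    echoed = conn.recv_bytes()
    conn.send_bytes(SENTINEL)  # tell the child to exit
    p.join()
    return echoed

if __name__ == '__main__':
    assert roundtrip(b'hello') == b'hello'
```

The original reproducer used a `really_big_msg` of 32MB or more for `payload` to trigger ERROR_NO_SYSTEM_RESOURCES on win32.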
Hirokazu Yamamoto ocean-c...@m2.ccsnet.ne.jp added the comment:
Really? Hmm, weird...
I'm using Win2000; maybe you are using a newer OS?
Or maybe larger data is needed. This guy says the error occurs around 200MB.
(This is async I/O though)
Hirokazu Yamamoto ocean-c...@m2.ccsnet.ne.jp added the comment:
Ah, I forgot this. Process#set_daemon doesn't exist on trunk, I had to
use p.daemon = True instead.
--
Jesse Noller jnol...@gmail.com added the comment:
John, try this new version
--
Added file: http://bugs.python.org/file13494/reproduce.py
John Ehresman j...@wingware.com added the comment:
The latest version works -- the question is why the prior versions spawned
many subprocesses. It's really another bug, because the prior version
wasn't hitting the write length limit.
--
title: multiprocessing.Pipe terminates with
Jesse Noller jnol...@gmail.com added the comment:
The if __name__ clause is actually well documented, see:
http://docs.python.org/library/multiprocessing.html#windows
--
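For illustration, a minimal script following the guarded pattern those docs describe (the `worker`/`start_worker` names are mine, not from the thread):

```python
from multiprocessing import Process, Queue

def worker(q):
    # Runs in the child process.
    q.put('child ran')

def start_worker():
    q = Queue()
    p = Process(target=worker, args=(q,))
    p.start()
    result = q.get()  # blocks until the child has put its message
    p.join()
    return result

# Without this guard, Windows' re-import of the main module in the
# child would try to start processes recursively -- the "dozens of
# sub-interpreters" symptom seen with the earlier reproduce.py.
if __name__ == '__main__':
    print(start_worker())
```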
John Ehresman j...@wingware.com added the comment:
It turns out that the original reproduce.py deadlocks if the pipe buffer
is smaller than the message size -- even with a fix to the bug. A patch
to fix this is coming soon.
--
Added file: http://bugs.python.org/file13498/reproduce.py
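The deadlock arises because a blocking send on a full pipe buffer can only complete once the other end is drained, and the original script did both from the same thread. A sketch of the usual workaround, draining concurrently (the `safe_roundtrip` helper is mine, for illustration only):

```python
import threading
from multiprocessing import Pipe

def safe_roundtrip(data):
    # Drain the read end in a background thread so a send larger than
    # the pipe buffer cannot block forever waiting for a reader.
    a, b = Pipe()
    received = []
    reader = threading.Thread(target=lambda: received.append(b.recv_bytes()))
    reader.start()
    a.send_bytes(data)  # may block until the reader drains the buffer
    reader.join()
    return received[0]
```

With a 1MB payload, well past a typical pipe buffer, this completes where a single-threaded send-then-receive would hang.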
Changes by Jesse Noller jnol...@gmail.com:
--
type: resource usage -> feature request
Changes by Jesse Noller jnol...@gmail.com:
--
priority:  -> normal
type:  -> resource usage
Changes by Jesse Noller jnol...@gmail.com:
--
assignee:  -> jnoller
nosy: +jnoller
New submission from Hirokazu Yamamoto [EMAIL PROTECTED]:
I noticed that regrtest.py sometimes fails in test_multiprocessing.py
(test_connection) on win2000.
I could not reproduce the error by invoking test_multiprocessing alone, but
finally I could do it by increasing 'really_big_msg' to 32MB or more.
I
Hirokazu Yamamoto [EMAIL PROTECTED] added the comment:
This is the traceback when running reproduce.py.
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "e:\python-dev\trunk\lib\multiprocessing\forking.py", line 341,
    in main
    prepare(preparation_data)
  File
Hirokazu Yamamoto [EMAIL PROTECTED] added the comment:
After googling, ERROR_NO_SYSTEM_RESOURCES seems to happen
when a single I/O is too large.
And in Modules/_multiprocessing/pipe_connection.c, conn_send_string is
implemented with a single WriteFile() call. Maybe this should be divided into
some
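The division Yamamoto suggests can be sketched in pure Python; `write_chunk` stands in for the real WriteFile wrapper in pipe_connection.c, and the 64KB chunk size is an assumption, not a value from the thread.

```python
CHUNK = 64 * 1024  # assumed safe per-call write size

def send_string(write_chunk, data):
    """Write `data` in CHUNK-sized pieces instead of one huge call.

    Returns the total number of bytes written; keeping each individual
    write small is what avoids ERROR_NO_SYSTEM_RESOURCES on win32.
    """
    total = 0
    view = memoryview(data)  # slice without copying the payload
    while total < len(data):
        written = write_chunk(view[total:total + CHUNK])
        total += written
    return total
```

Note John's caveat earlier in the thread: chunking alone may not eliminate the error if an internal OS buffer fills up, which is why the applied patch also raises ValueError on failure.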