[issue16507] Patch selectmodule.c to support WSAPoll on Windows
Richard Oudkerk added the comment:

It seems that the return code of WSAPoll() does not include the count of array items with revents == POLLNVAL. In the case where all of them are POLLNVAL, instead of returning 0 (which usually indicates a timeout) it returns -1 and WSAGetLastError() == WSAENOTSOCK. This does not match the MSDN documentation, which claims that the return code is the number of descriptors for which revents is non-zero. But it arguably does agree with the FreeBSD and MacOSX man pages, which say that it returns the number of descriptors that are ready for I/O.

BTW, the implementation of select_poll() assumes that the return code of poll() (if non-negative) is equal to the number of non-zero revents fields. But select_have_broken_poll() considers a MacOSX poll() implementation to be good even in cases where this assumption is not true:

    static int
    select_have_broken_poll(void)
    {
        int poll_test;
        int filedes[2];
        struct pollfd poll_struct = { 0, POLLIN|POLLPRI|POLLOUT, 0 };

        if (pipe(filedes) < 0) {
            return 1;
        }
        poll_struct.fd = filedes[0];
        close(filedes[0]);
        close(filedes[1]);
        poll_test = poll(&poll_struct, 1, 0);
        if (poll_test < 0) {
            return 1;
        }
        else if (poll_test == 0 && poll_struct.revents != POLLNVAL) {
            return 1;
        }
        return 0;
    }

Note that select_have_broken_poll() == FALSE if poll_test == 0 and poll_struct.revents == POLLNVAL.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16507
___
___
Python-bugs-list mailing list
Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue16616] test_poll.PollTests.poll_unit_tests() is dead code
Changes by Richard Oudkerk shibt...@gmail.com:

--
status: open -> closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16616
___
[issue15526] test_startfile crash on Windows 7 AMD64
Changes by Richard Oudkerk shibt...@gmail.com:

--
status: open -> closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue15526
___
[issue16507] Patch selectmodule.c to support WSAPoll on Windows
Richard Oudkerk added the comment:

Here is a new version with tests and docs. Note that the docs do not mention the bug described in http://daniel.haxx.se/blog/2012/10/10/wsapoll-is-broken/ Maybe they should?

Note that that bug makes it a bit difficult to use poll with tulip on Windows. (But one could restrict timeouts to one second and always check outstanding connect attempts using select() when poll() returns.)

--
type: -> enhancement
versions: +Python 3.4
Added file: http://bugs.python.org/file28341/runtime_wsapoll.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16507
___
[issue16718] Mysterious atexit fail
Richard Oudkerk added the comment: When you run wy.py the wow module gets partially imported, and then garbage collected because it fails to import successfully. The destructor for the module replaces values in the module's __dict__ with None. So when the cleanup function runs you get the unexpected error. When you run wow.py directly, wow (i.e. the main module) will not be garbage collected, so _Cleanup is never replaced by None. -- nosy: +sbt ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16718 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue16718] Mysterious atexit fail
Richard Oudkerk added the comment:

> Things should be better in the future, when modules are cleared with
> true garbage collection.

When is this future of which you speak?

I am not sure whether it would affect performance, but a weakrefable subclass of dict could be used for module dicts. Then the module destructor could just save the module's dict in a WeakValueDictionary keyed by the id (assuming we are not yet shutting down). At shutdown the saved module dicts could be purged by replacing all values with None. Or maybe something similar is possible without using a dict subclass.

--
versions: +Python 2.6

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16718
___
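The idea sketched above can be modelled in pure Python. Everything here (the ModuleDict subclass, the _graveyard registry and the two helper functions) is hypothetical illustration of the proposal, not CPython's actual module-cleanup code:

```python
import weakref

class ModuleDict(dict):
    """Plain dicts cannot be weakly referenced; a trivial subclass
    can -- this plays the role of the proposed weakrefable module dict."""

# hypothetical registry of dicts belonging to dead modules, keyed by id()
_graveyard = weakref.WeakValueDictionary()

def on_module_dealloc(mod_dict):
    # instead of overwriting values with None at dealloc time,
    # just park the dict in the graveyard
    _graveyard[id(mod_dict)] = mod_dict

def purge_at_shutdown():
    # only at interpreter shutdown are the saved dicts purged
    for d in list(_graveyard.values()):
        for key in list(d):
            d[key] = None

d = ModuleDict(cleanup=lambda: "cleaned")
on_module_dealloc(d)
print(d["cleanup"]())   # still usable before shutdown
purge_at_shutdown()
print(d["cleanup"])     # None
```

The WeakValueDictionary means a parked dict that becomes otherwise unreachable is simply dropped, so only dicts still kept alive by ref-cycles get the None treatment at shutdown.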
[issue16718] Mysterious atexit fail
Richard Oudkerk added the comment:

> See issue812369 for the shutdown procedure and modules cleanup.

I am aware of that issue, but the original patch is 9 years old. Which is why I ask if/when it will actually happen.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16718
___
[issue16736] select.poll() converts long to int without checking for overflow
New submission from Richard Oudkerk:

Relevant code:

    int timeout = 0, poll_result, i, j;
    ...
    tout = PyNumber_Long(tout);
    if (!tout)
        return NULL;
    timeout = PyLong_AsLong(tout);    /* <-- implicit cast to int */

--
messages: 177811
nosy: sbt
priority: normal
severity: normal
status: open
title: select.poll() converts long to int without checking for overflow
type: behavior

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16736
___
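The effect of that implicit cast can be modelled in pure Python. This is a sketch of C truncation semantics (helper name is mine), not the actual CPython code path:

```python
def as_c_int(n):
    """Model the implicit C cast from long to a 32-bit signed int."""
    n &= 0xFFFFFFFF                      # keep only the low 32 bits
    return n - 0x100000000 if n >= 0x80000000 else n

# a huge millisecond timeout silently becomes a small one...
print(as_c_int(2**32 + 42))   # 42

# ...or a negative one, which poll() treats as "block forever"
print(as_c_int(2**31))        # -2147483648
```

So without an overflow check, a caller passing a very large timeout gets silently wrong behaviour instead of an OverflowError.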
[issue16736] select.poll() converts long to int without checking for overflow
Richard Oudkerk added the comment:

Thanks. I will close.

--
stage: -> committed/rejected
status: open -> closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16736
___
[issue16718] Mysterious atexit fail
Richard Oudkerk added the comment: Perhaps the simplest thing would be to stop doing anything special when a module is garbage collected: the garbage collector can take care of any orphaned ref-cycles involving the module dict. Then at shutdown the remaining modules in sys.modules could have their dicts purged in the old way. This would be orthogonal to issue812369. In fact Armin's original post says that this is a change worth investigating, though his patch does not do it. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16718 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue16743] mmap accepts files > 1 GB, but processes only 1 GB
Richard Oudkerk added the comment: I suspect that the size of the 5GB file is originally a 64 bit quantity, but gets cast unsafely to a 32 bit size_t to give 1GB. This is causing the miscalculations. There is no way to map all of a 5GB file in a 32 bit process -- 4GB is the maximum -- so any such attempt should raise an error. This does not prevent us from mapping *part* of a 5GB file. -- nosy: +sbt ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16743 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
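The suspected truncation is easy to model: a 64-bit file size cast to a 32-bit size_t is reduced modulo 4 GiB, which turns a 5 GiB file into an apparent 1 GiB (a sketch of the arithmetic only, not mmapmodule code):

```python
GiB = 2**30
WRAP = 2**32                  # unsigned 32-bit wraparound modulus

file_size = 5 * GiB
apparent = file_size % WRAP
print(apparent // GiB)        # 1 -- the "1 GB" seen in the report

# a 4 GiB + 1 byte file would appear to be a single byte
print((4 * GiB + 1) % WRAP)   # 1
```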
[issue16743] mmap accepts files > 1 GB, but processes only 1 GB
Richard Oudkerk added the comment:

This bit looks wrong to me:

    if (offset - size > PY_SSIZE_T_MAX)
        /* Map area too large to fit in memory */
        m_obj->size = (Py_ssize_t) -1;

Should it not be "size - offset" instead of "offset - size"? (offset and size are Py_LONG_LONG.) And there is no check that offset is non-negative.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16743
___
[issue16743] mmap accepts files > 1 GB, but processes only 1 GB
Richard Oudkerk added the comment:

On 32 bit Unix mmap() will raise ValueError("mmap length is too large") in Marc's example. This is correct since Python's sequence protocol does not support indexes larger than sys.maxsize. But on 32 bit Windows, if length == 0 then the size check always passes, and the actual size mapped is the file size modulo 4GB. Fix for 3.x is attached with tests.

--
keywords: +patch
stage: needs patch -> patch review
Added file: http://bugs.python.org/file28444/mmap.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16743
___
[issue16743] mmap accepts files > 1 GB, but processes only 1 GB
Richard Oudkerk added the comment:

> This change is not backward compatible. Now user can mmap a larger file
> and safely access lower 2 GiB. With the patch it will fail.

They should specify length=2GiB-1 if that is what they want. With length=0 you can only access the lower 2GiB if file_size % 4GiB < 2GiB. If the file size is 4GiB+1 then you can only access *one byte* of the file. And if 2GiB < file_size < 4GiB then presumably len(data) will be negative (or throw an exception or fail an assertion -- I have not tested that case). I would not be surprised if crashes are possible.

Basically if you had a large file and you did not hit a problem then it was Windows specific dumb luck. I see no point in retaining such unpredictable behaviour.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16743
___
[issue16743] mmap accepts files > 1 GB, but processes only 1 GB
Richard Oudkerk added the comment: New patch with same check for Unix. -- Added file: http://bugs.python.org/file28446/mmap.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16743 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue16743] mmap accepts files > 1 GB, but processes only 1 GB
Richard Oudkerk added the comment:

> Isn't 2 GiB + 1 bytes mmap file enough for testing?

Yes. But creating multigigabyte files is very slow on Windows. On Linux/FreeBSD test_mmap takes a fraction of a second, whereas on Windows it takes over 2 minutes. (Presumably Linux/FreeBSD is automatically creating a sparse file.) So adding assertions to an existing test is more convenient than creating another huge file just for these new tests.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16743
___
[issue16743] mmap on Windows can mishandle files larger than sys.maxsize
Changes by Richard Oudkerk shibt...@gmail.com:

--
title: mmap accepts files > 1 GB, but processes only 1 GB -> mmap on Windows can mishandle files larger than sys.maxsize
type: enhancement -> behavior

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16743
___
[issue8713] multiprocessing needs option to eschew fork() under Linux
Richard Oudkerk added the comment:

> Richard, apart from performance, what's the advantage of this approach
> over the fork+exec version?

It is really just performance. For context, running the unittests in a 1 cpu linux VM gives me

    fork:
        real    0m53.868s
        user    0m1.496s
        sys     0m9.757s

    fork+exec:
        real    1m30.951s
        user    0m24.598s
        sys     0m25.614s

    forkserver:
        real    0m54.087s
        user    0m1.572s    # excludes descendant processes
        sys     0m2.336s    # excludes descendant processes

So running the unit tests using fork+exec takes about 4 times as much cpu time.

Starting then immediately joining a trivial process in a loop gives

    fork:       0.025 seconds/process
    fork+exec:  0.245 seconds/process
    forkserver: 0.016 seconds/process

So latency is about 10 times higher with fork+exec.

> Because it seems more complicated, and although I didn't have a look at
> this last patch, I guess that most of the fork+exec version could be
> factorized with the Windows version, no?

The different fork methods are now implemented in separate files. The line counts are

    117 popen_spawn_win32.py
     80 popen_fork.py
    184 popen_spawn_posix.py
    191 popen_forkserver.py

I don't think any more sharing between the win32 and posix cases is possible. (Note that popen_spawn_posix.py implements a cleanup helper process which is also used by the forkserver method.)

> Since it's only intended to be used as a debugging/special-purpose
> replacement - it would probably be better if it could be made as simple
> as possible.

Actually, avoiding the whole fork+threads mess is a big motivation. multiprocessing uses threads in a few places (like implementing Queue), and tries to do so as safely as possible. But unless you turn off garbage collection you cannot really control what code might be running in a background thread when the main thread forks.

> Also, as you've noted, FD passing isn't supported by all Unices out
> there (and we've had some reliability issues on OS-X, too).

OSX does not seem to allow passing multiple ancillary messages at once -- but you can send multiple fds in a single ancillary message. Also, when you send fds on OSX you have to wait for a response from the other end before doing anything else. Not doing that was the cause of the previous fd passing failures in test_multiprocessing.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8713
___
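The start/join latency comparison above can be reproduced with a small benchmark. This sketch assumes the multiprocessing.get_context() API that this work eventually landed as in Python 3.4; the helper names are mine and absolute numbers will vary by machine:

```python
import multiprocessing as mp
import time

def _child():
    pass   # trivial child: we only measure start/join overhead

def seconds_per_process(method, n=20):
    """Average wall time to start and join one trivial process."""
    ctx = mp.get_context(method)       # e.g. "fork", "spawn", "forkserver"
    t0 = time.monotonic()
    for _ in range(n):
        p = ctx.Process(target=_child)
        p.start()
        p.join()
    return (time.monotonic() - t0) / n

if __name__ == "__main__":
    # "spawn" and "forkserver" can be timed the same way on POSIX
    print("fork", seconds_per_process("fork", n=5))
```

On a typical Linux box "fork" and "forkserver" come out well under "spawn" (the fork+exec descendant), matching the roughly 10x latency gap reported above.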
[issue8713] multiprocessing needs option to eschew fork() under Linux
Richard Oudkerk added the comment:

Numbers when running on Linux on a laptop with 2 cores + hyperthreading.

RUNNING UNITTESTS:

    fork:
        real    0m50.687s
        user    0m9.213s
        sys     0m4.012s

    fork+exec:
        real    1m9.062s
        user    0m48.579s
        sys     0m6.648s

    forkserver:
        real    0m50.702s
        user    0m4.140s    # excluding descendants
        sys     0m0.708s    # excluding descendants

LATENCY:

    fork:       0.0071 secs/proc
    fork+exec:  0.0622 secs/proc
    forkserver: 0.0035 secs/proc

Still 4 times the cpu time and 10 times the latency. But the latency is far lower than in the VM.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8713
___
[issue8713] multiprocessing needs option to eschew fork() under Linux
Changes by Richard Oudkerk shibt...@gmail.com: Added file: http://bugs.python.org/file28461/8f08d83264a0.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8713 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8713] multiprocessing needs option to eschew fork() under Linux
Richard Oudkerk added the comment:

> The safest default would be fork+exec though we need to implement the
> fork+exec code as a C extension module or have it use subprocess (as I
> noted in the mb_fork_exec.patch review).

That was an old version of the patch. In the branch http://hg.python.org/sandbox/sbt#spawn _posixsubprocess is used instead of fork+exec, and all unnecessary fds are closed. See http://hg.python.org/sandbox/sbt/file/8f08d83264a0/Lib/multiprocessing/popen_spawn_posix.py

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8713
___
[issue16802] fileno argument to socket.socket() undocumented
New submission from Richard Oudkerk: The actual signature is socket.socket(family=AF_INET, type=SOCK_STREAM, proto=0, fileno=None) but the documented signature is socket.socket([family[, type[, proto]]]) Should the fileno argument be documented or is it considered an implementation detail? -- messages: 178387 nosy: sbt priority: normal severity: normal status: open title: fileno argument to socket.socket() undocumented versions: Python 3.2, Python 3.3, Python 3.4 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16802 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue16802] fileno argument to socket.socket() undocumented
Richard Oudkerk added the comment:

> The fileno argument looks like an implementation detail to me.

It has at least one potential use. On Windows socket.detach() returns a socket handle but there is no documented way to close it -- os.close() will not work. The only way to close it that I can see (without resorting to ctypes) is with something like

    socket.socket(fileno=handle).close()

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16802
___
[issue16802] fileno argument to socket.socket() undocumented
Richard Oudkerk added the comment:

> There is an alternative (documented) interface:
>
>     socket.fromfd(handle, socket.AF_INET, socket.SOCK_STREAM).close()

socket.fromfd() duplicates the handle, so that does not close the original handle.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16802
___
[issue9586] warning: comparison between pointer and integer in multiprocessing build on Tiger
Changes by Richard Oudkerk shibt...@gmail.com:

--
resolution: -> fixed
stage: -> committed/rejected
status: open -> closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9586
___
[issue12105] open() does not able to set flags, such as O_CLOEXEC
Richard Oudkerk added the comment: Note that on Windows there is an O_NOINHERIT flag which almost corresponds to O_CLOEXEC on Linux. I don't think there is a need to use the win32 api. -- nosy: +sbt ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12105 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue12939] Add new io.FileIO using the native Windows API
Richard Oudkerk added the comment:

A while ago I did write a PipeIO class which subclasses io.RawIOBase and works for overlapped pipe handles. (It was intended for multiprocessing and doing asynchronous IO with subprocess.) As it is it would not work with normal files because when you do overlapped IO on files you must manually track the file position.

> Yes, re-writing windows IO to direct API, without intermediate layer is
> still needed.

What are the expected benefits?

> It would help feature #12105 to implement O_CLOEXEC flag using the
> lpSecurityAttributes argument.

Isn't O_NOINHERIT the Windows equivalent of O_CLOEXEC?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue12939
___
[issue12939] Add new io.FileIO using the native Windows API
Richard Oudkerk added the comment:

Attached is a module for Python 3.3+ which subclasses io.RawIOBase. The constructor signature is

    WinFileIO(handle, mode="r", closehandle=True)

where mode is "r", "w", "r+" or "w+". Handles can be created using _winapi.CreateFile(). Issues:

- No support for append mode.
- Truncate is not atomic. (Is atomicity supposed to be guaranteed?)
- Not properly tested.

--
Added file: http://bugs.python.org/file28544/winfileio.c

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue12939
___
[issue12939] Add new io.FileIO using the native Windows API
Changes by Richard Oudkerk shibt...@gmail.com: Added file: http://bugs.python.org/file28545/test_winfileio.py ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12939 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue16873] increase epoll.poll() maxevents default value, and improve documentation
Richard Oudkerk added the comment: Is this actually a problem? If events are arranged in a queue and epoll_wait() just removes the oldest events (up to maxevents) from that queue then there would be no problem with using a small value for maxevents. I don't *know* if that is the case, but I would consider epoll to be broken if it does not do something similar. -- nosy: +sbt ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16873 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue12939] Add new io.FileIO using the native Windows API
Richard Oudkerk added the comment: Attached is a patch which adds a winio module which is a replacement for io, but uses windows handles instead of fds. It reimplements FileIO and open(), and provides openhandle() and closehandle() as replacements for os.open() and os.close(). test_io has been modified to exercise winio (in addition to _io and _pyio) and all the tests pass. Note that some of the implementation (openhandle(), open(), FileIO.__init__()) is still done in Python rather than C. -- keywords: +patch Added file: http://bugs.python.org/file28590/winfileio.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12939 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue16873] increase epoll.poll() maxevents default value, and improve documentation
Richard Oudkerk added the comment:

> I actually wrote a script to reproduce this issue:

The program does *not* demonstrate starvation because you are servicing the resource represented by the starved duplicate fds before calling poll() again. You are creating thousands of duplicate handles for the same resource and then complaining that they do not behave independently!

I tried modifying your program by running poll() in a loop, exiting when no more unseen fds are reported as ready. This makes the program exit immediately. So

    ready_writers = set(fd for fd, evt in ep.poll(-1, MAXEVENTS) if fd != r)
    seen_writers |= ready_writers

becomes

    while True:
        ready_writers = set(fd for fd, evt in ep.poll(-1, MAXEVENTS) if fd != r)
        if ready_writers.issubset(seen_writers):
            break
        seen_writers |= ready_writers

I still cannot see a problem with epoll().

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16873
___
[issue16873] increase epoll.poll() maxevents default value, and improve documentation
Richard Oudkerk added the comment:

> The fact that the FDs are duped shouldn't change anything to the events
> reported: it works while the number of FDs is less than FD_SETSIZE
> (epoll_wait() maxevents argument).

That assumes that epoll_wait() is supposed to return *all* ready fds. But that is not possible because maxevents is finite. If you want all events then obviously you may need to call epoll_wait() multiple times.

> I just used dup() to make it easier to test, but you'll probably get
> the same thing if your FDs were sockets connected to different
> endpoints.

This is the part I disagree with -- I think it makes all the difference. Please try making such a modification.

>     while True:
>         ready_writers = set(fd for fd, evt in ep.poll(-1, MAXEVENTS) if fd != r)
>         if ready_writers.issubset(seen_writers):
>             break
>         seen_writers |= ready_writers
>
> Of course it does, since the returned FDs are a subset of all the ready
> file descriptors. The point is precisely that, when there are more FDs
> ready than maxevents, some FDs will never be reported.

The program can only terminate when the outer

    while all_writers - seen_writers:
        ...

loop terminates. So seen_writers == all_writers, and every fd has been reported.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16873
___
[issue16873] increase epoll.poll() maxevents default value, and improve documentation
Richard Oudkerk added the comment:

Here is a version which uses epoll to service a number of pipes which is larger than maxevents. (If NUM_WRITERS is too large then I get OSError: [Errno 24] Too many open files.) All pipes get serviced and the output is:

    Working with 20 FDs, 5 maxevents
    [5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39, 41, 43]
    [15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39, 41, 43]
    [25, 27, 29, 31, 33, 35, 37, 39, 41, 43]
    [35, 37, 39, 41, 43]

The lists show the (sorted) unseen writers at each loop.

--
Added file: http://bugs.python.org/file28594/test_epoll_2.py

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16873
___
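The same pattern can be shown with a small runnable Linux-only sketch (the names are mine, not taken from the attached script): register more write-ready pipes than maxevents, then service-and-unregister until every fd has been reported:

```python
import os
import select

MAXEVENTS = 3
pipes = [os.pipe() for _ in range(10)]     # 10 fds > maxevents

ep = select.epoll()
for _r, w in pipes:
    ep.register(w, select.EPOLLOUT)        # empty pipes: write ends are ready

seen = set()
while len(seen) < len(pipes):
    # each poll() returns at most MAXEVENTS ready fds
    for fd, _event in ep.poll(1, MAXEVENTS):
        seen.add(fd)
        ep.unregister(fd)                  # serviced: stop asking about it

ep.close()
for r, w in pipes:
    os.close(r)
    os.close(w)
```

Even though no single poll() call can report more than MAXEVENTS fds, the loop terminates with all ten write ends seen, which is the point being made above: a bounded maxevents is not starvation as long as ready fds are actually serviced between calls.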
[issue16873] increase epoll.poll() maxevents default value, and improve documentation
Richard Oudkerk added the comment:

> Yes, but the problem is that between two epoll_wait() calls, the
> readiness of the FDs can have changed: and if that happens, you'll get
> the same list over and over.

If an fd *was* ready but isn't anymore then why would you want to know about it? Trying to use the fd will fail with EAGAIN.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16873
___
[issue12939] Add new io.FileIO using the native Windows API
Richard Oudkerk added the comment:

> I don't like the idea of a specific I/O module for an OS. Is the public
> API different?

It was partly to make integration with the existing tests easier: _io, _pyio and winio are tested in parallel.

> Can't you reuse the io module?

In what sense? I don't really know how the API should be exposed.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue12939
___
[issue16853] add a Selector to the select module
Richard Oudkerk added the comment:

> Richard, in Tulip's WSAPoll code, it reads:
>
>     class WindowsPollPollster(PollPollster):
>         """Pollster implementation using WSAPoll.
>
>         WSAPoll is only available on Windows Vista and later.  Python
>         does not currently support WSAPoll, but there is a patch
>         available at http://bugs.python.org/issue16507.
>         """
>
> Does this mean that this code needs the patch from issue #16507 to work?

Yes.

> Also, I've read something about IOCP: is this a replacement for WSAPoll,
> are there plans to get it merged at some point to python (and if yes,
> would the select module be a proper home for this)?

IOCP is not a replacement for WSAPoll (or select). Among other things it is not possible to use the current ssl module with IOCP (although Twisted manages to use IOCP with PyOpenSSL). Tulip should have IOCP support, so presumably when tulip is merged some support for IOCP will also be available in python. But I am not convinced that the select module is the proper home.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16853
___
[issue16261] Fix bare excepts in various places in std lib
Richard Oudkerk added the comment:

     try:
         _MAXFD = os.sysconf("SC_OPEN_MAX")
    -except:
    +except ValueError:
         _MAXFD = 256

os.sysconf() might raise OSError. I think ValueError is only raised if _SC_OPEN_MAX was undefined when the module was compiled.

--
nosy: +sbt

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16261
___
[issue16920] multiprocessing.connection listener gets MemoryError on recv
Richard Oudkerk added the comment:

Why are you connecting to a multiprocessing listener with a raw socket? You should be using multiprocessing.connection.Client to create a client connection.

Connection.send(obj) writes a 32 bit unsigned int (in network order) to the socket representing the length of the pickled data for obj, followed by the pickled data itself. Since you are doing a raw socket write, the server connection is misinterpreting the first 4 bytes of your message "abcd" as the length of the message. So the receiving end needs to allocate space for

    struct.unpack("!I", b"abcd")[0] == 1633837924

i.e. about 1.5Gb, causing the MemoryError.

--
nosy: +sbt
resolution: -> invalid
stage: -> committed/rejected
status: open -> closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16920
___
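The misinterpreted length is just the first four bytes of the raw write read back as a big-endian 32-bit integer, which is easy to check:

```python
import struct

# Connection.send() prefixes each pickled payload with a 4-byte
# network-order length field; a raw write of b"abcd..." makes the
# server read b"abcd" itself as that length field
bogus_length = struct.unpack("!I", b"abcd")[0]
print(bogus_length)            # 1633837924, i.e. roughly 1.5 GiB
```

That is why the server immediately tries (and fails) to allocate a ~1.5 GiB receive buffer.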
[issue12939] Add new io.FileIO using the native Windows API
Richard Oudkerk added the comment:

Attached is a new patch which is implemented completely in C. It adds a WinFileIO class to the io module, which has the same API as FileIO except that:

* It has a handle attribute instead of a fileno() method.
* It has staticmethods openhandle() and closehandle() which are analogues of os.open() and os.close().

The patch also adds a keyword-only rawfiletype argument to io.open() so that you can write

    f = open("somefile", "w", rawfiletype=WinFileIO)

--
Added file: http://bugs.python.org/file28707/winfileio.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue12939
___
[issue12939] Add new io.FileIO using the native Windows API
Changes by Richard Oudkerk shibt...@gmail.com: Removed file: http://bugs.python.org/file28544/winfileio.c ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12939 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue12939] Add new io.FileIO using the native Windows API
Changes by Richard Oudkerk shibt...@gmail.com: Removed file: http://bugs.python.org/file28545/test_winfileio.py ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12939 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue12939] Add new io.FileIO using the native Windows API
Changes by Richard Oudkerk shibt...@gmail.com: Removed file: http://bugs.python.org/file28590/winfileio.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12939 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue12939] Add new io.FileIO using the native Windows API
Richard Oudkerk added the comment: Forgot to mention, the handles are non-inheritable. You can use _winapi.DuplicateHandle() to create an inheritable duplicate handle if you really need to. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12939 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue16920] multiprocessing.connection listener gets MemoryError on recv
Richard Oudkerk added the comment: If someone used regular sockets deliberately, they could crash multiprocessing server code deliberately. Any chance of doing a real message length check against the embedded message length check?

You can do

    message = conn.recv_bytes(maxlength)

if you want a length check -- OSError will be raised if the message is too long. But Listener() and Client() are *not* replacements for the normal socket API and I would not really advise using them for communication over a network. They are mostly used internally by multiprocessing -- and then only with digest authentication. All processes in the same program inherit the same randomly generated authentication key -- current_process().authkey. If you create a listener by doing

    listener = Listener(address, authenticate=True)

then other processes from the same program can connect by doing

    conn = Client(address, authenticate=True)

Without knowing the correct authentication key it is not possible to connect and do a DOS like you describe. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16920 ___
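A minimal sketch of the length check described above, using a local Pipe rather than a network Listener (the 100-byte message and the 10-byte limit are made-up values for illustration):

```python
from multiprocessing import Pipe

# A local pipe: b sends, a receives.
a, b = Pipe()
b.send_bytes(b'x' * 100)

# recv_bytes() with a maxlength argument raises OSError when the
# incoming message exceeds the limit, instead of delivering it.
try:
    a.recv_bytes(10)
    raised = False
except OSError:
    raised = True

print(raised)
```

Note that the failed connection is closed afterwards, so the check doubles as a defence against oversized messages exhausting memory.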
[issue16955] multiprocessing.connection poll() always returns false
Richard Oudkerk added the comment: Thanks for the report. It seems to only affect Windows, and only when using sockets rather than pipes. Till this is fixed you could use

    temp = bool(multiprocessing.connection.wait([cl], 1))

instead of

    temp = cl.poll(1)

As I mentioned on the other issue, I would not advise use of Listener() and Client() without using authentication -- you are probably better off using raw sockets and select(). -- nosy: +sbt ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16955 ___
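The suggested workaround can be sketched with a local Pipe standing in for the reporter's connection object (the timeout values are illustrative):

```python
from multiprocessing import Pipe
from multiprocessing.connection import wait

a, b = Pipe()

# Nothing has been sent yet, so wait() returns an empty list on timeout.
assert wait([a], 0.1) == []

b.send('hello')

# Now wait() reports the readable end; bool(...) behaves like poll().
ready = bool(wait([a], 1))
print(ready)
```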
[issue10527] multiprocessing.Pipe problem: handle out of range in select()
Richard Oudkerk added the comment: The commits did not have the intended effect. They just define a _poll() function (and only on Windows) and it is not referenced anywhere else. I will look into fixing this -- on 2.7 and 3.2 this will need to be done in the C code. -- resolution: fixed -> status: closed -> open ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10527 ___
[issue16955] multiprocessing.connection poll() always returns false
Richard Oudkerk added the comment: This should be fixed now. -- resolution: -> fixed stage: -> committed/rejected status: open -> closed type: -> behavior ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16955 ___
[issue10527] multiprocessing.Pipe problem: handle out of range in select()
Richard Oudkerk added the comment: What do you mean? The intent was to use poll() instead of select() anywhere available in order to avoid running out of fds. The change didn't affect Windows because as of right now select() is the only thing available.

The change *only* affects Windows. Currently the code goes

    if sys.platform != 'win32':
        ...
    else:
        if hasattr(select, 'poll'):
            def _poll(fds, timeout):
                ...
        else:
            def _poll(fds, timeout):
                ...

So _poll() is only defined when sys.platform == 'win32'. Furthermore, the _poll() function is never used anywhere: ConnectionBase.poll() uses Connection._poll(), which uses wait(), which uses select(). -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10527 ___
[issue10527] multiprocessing.Pipe problem: handle out of range in select()
Richard Oudkerk added the comment: It looks like the change to multiprocessing/connection.py committed does not match the one uploaded as issue10527-3.patch

changeset 81174:e971a70984b8:

    --- a/Lib/multiprocessing/connection.py
    +++ b/Lib/multiprocessing/connection.py
    @@ -509,6 +509,27 @@ if sys.platform != 'win32':
             return c1, c2

     else:
    +    if hasattr(select, 'poll'):
    +        def _poll(fds, timeout):
    +            if timeout is not None:
    +                timeout = int(timeout) * 1000  # timeout is in milliseconds
    +            fd_map = {}
    +            pollster = select.poll()
    +            for fd in fds:
    +                pollster.register(fd, select.POLLIN)
    +                if hasattr(fd, 'fileno'):
    +                    fd_map[fd.fileno()] = fd
    +                else:
    +                    fd_map[fd] = fd
    +            ls = []
    +            for fd, event in pollster.poll(timeout):
    +                if event & select.POLLNVAL:
    +                    raise ValueError('invalid file descriptor %i' % fd)
    +                ls.append(fd_map[fd])
    +            return ls
    +    else:
    +        def _poll(fds, timeout):
    +            return select.select(fds, [], [], timeout)[0]

     def Pipe(duplex=True):
         '''

issue10527-3.patch:

    diff --git a/Lib/multiprocessing/connection.py b/Lib/multiprocessing/connection.py
    --- a/Lib/multiprocessing/connection.py
    +++ b/Lib/multiprocessing/connection.py
    @@ -861,6 +861,27 @@
             return [o for o in object_list if o in ready_objects]

     else:
    +    if hasattr(select, 'poll'):
    +        def _poll(fds, timeout):
    +            if timeout is not None:
    +                timeout = int(timeout) * 1000  # timeout is in milliseconds
    +            fd_map = {}
    +            pollster = select.poll()
    +            for fd in fds:
    +                pollster.register(fd, select.POLLIN)
    +                if hasattr(fd, 'fileno'):
    +                    fd_map[fd.fileno()] = fd
    +                else:
    +                    fd_map[fd] = fd
    +            ls = []
    +            for fd, event in pollster.poll(timeout):
    +                if event & select.POLLNVAL:
    +                    raise ValueError('invalid file descriptor %i' % fd)
    +                ls.append(fd_map[fd])
    +            return ls
    +    else:
    +        def _poll(fds, timeout):
    +            return select.select(fds, [], [], timeout)[0]

     def wait(object_list, timeout=None):
         '''
    @@ -870,12 +891,12 @@
         '''
         if timeout is not None:
             if timeout <= 0:
    -            return select.select(object_list, [], [], 0)[0]
    +            return _poll(object_list, 0)
             else:
                 deadline = time.time() + timeout
         while True:
             try:
    -            return select.select(object_list, [], [], timeout)[0]
    +            return _poll(object_list, timeout)
             except OSError as e:
                 if e.errno != errno.EINTR:
                     raise

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10527 ___
[issue12939] Add new io.FileIO using the native Windows API
Richard Oudkerk added the comment: Added some comments on Rietveld. The .fileno() method is missing. Can this cause a problem when the file is passed to stdlib functions? subprocess for example? Thanks. An older version of the patch had a fileno() method which returned the handle -- but that would have confused anything that expects fileno() to return a true fd. It would be possible to make fileno() lazily create an fd using open_osfhandle(). -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12939 ___
[issue12939] Add new io.FileIO using the native Windows API
Richard Oudkerk added the comment: What does this proposal bring exactly? Unless we are willing to completely replace fds with handles on Windows, perhaps not too much. (At one point I had assumed that that was the plan for py3k.) Although not advertised, openhandle() does have a share_flags parameter to control the share mode of the file. This makes it possible to delete files for which there are open handles. Mercurial needs a C extension to support this. regrtest could certainly benefit from such a feature. But one thing that I would at least like to do is create a FileIO replacement for overlapped pipe/socket handles. Then multiprocessing.Connection could be a simple wrapper round a file object, and most of the platform specific code in multiprocessing.connection can go away. The current patch does not support overlapped IO, but that could be added easily enough. (Overlapped IO for normal files might be more complicated.) -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12939 ___
[issue16920] multiprocessing.connection listener gets MemoryError on recv
Richard Oudkerk added the comment: If you want to communicate between processes of the same program, you are best off calling multiprocessing.Pipe() or multiprocessing.Queue() in the main process. Queues or connections can then be inherited by the child processes. Usually all communication is between the main process and its children: sibling-to-sibling communication is rare.

I am trying to understand your reservations about using them for communication over a network

Since Connection.recv() automatically unpickles the data it receives, it is affected by the issue discussed here: http://nadiana.com/python-pickle-insecure Basically, unpickling malicious data can trigger *any* command it wants using the shell. So you *must* use recv_bytes()/send_bytes() when dealing with unauthenticated connections. Over a network you *could* use authentication. But securely sharing the authentication key between all the hosts is far from straightforward. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16920 ___
[issue16966] Publishing multiprocessing listener code
Richard Oudkerk added the comment: For the reasons I wrote in the other issue, I don't think this an approach to encourage. There was no need to create a new issue: if you post to a closed issue then people on the nosy list will still see your message. So I will close this issue. (Maybe wiki.python.org is the sort of thing you are looking for, but I have never visited it, and it does not seem to be available currently. I think it was recently compromised, so it may be down for a while. Anyway, I don't think your example code is suitable.) -- nosy: +sbt resolution: -> rejected stage: -> committed/rejected status: open -> closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16966 ___
[issue12939] Add new io.FileIO using the native Windows API
Richard Oudkerk added the comment: New patch reflecting Amaury's comments. -- Added file: http://bugs.python.org/file28737/winfileio.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12939 ___
[issue12939] Add new io.FileIO using the native Windows API
Changes by Richard Oudkerk shibt...@gmail.com: Removed file: http://bugs.python.org/file28707/winfileio.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12939 ___
[issue16998] Lost updates with multiprocessing.Value
Richard Oudkerk added the comment: I thought that access to the value field of Value instances was protected by locks to avoid lost updates. Loads and stores are both atomic. But += is made up of two operations, a load followed by a store, and the lock is dropped between the two. The same lack of atomicity applies when using += to modify an attribute of a normal python object in a multithreaded program. If you want an atomic increment you could try

    def do_inc(integer):
        with integer.get_lock():
            integer.value += 1

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16998 ___
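A runnable sketch of the locked increment (the process and iteration counts are arbitrary; using the 'fork' start method keeps the sketch Unix-only but avoids needing an `if __name__ == '__main__'` guard, which would be required on Windows):

```python
import multiprocessing as mp

def do_inc(integer, n):
    # Hold the lock across each += so the load/store pair is atomic.
    for _ in range(n):
        with integer.get_lock():
            integer.value += 1

ctx = mp.get_context('fork')  # assumes a Unix-like system
counter = ctx.Value('i', 0)
workers = [ctx.Process(target=do_inc, args=(counter, 500)) for _ in range(2)]
for p in workers:
    p.start()
for p in workers:
    p.join()
print(counter.value)  # 1000 -- no lost updates while the lock is held
```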
[issue16998] Lost updates with multiprocessing.Value
Richard Oudkerk added the comment: I see. Then this is a documentation bug. The examples in the documentation use such non-thread-safe assignments (combined with the statement These shared objects will be process and thread safe.). Are you talking about this documentation: If lock is True (the default) then a new lock object is created to synchronize access to the value. If lock is a Lock or RLock object then that will be used to synchronize access to the value. If lock is False then access to the returned object will not be automatically protected by a lock, so it will not necessarily be “process-safe”. It only says that accesses are synchronized. The problem is that you were assuming that += involves a single access -- but that is not how python works. Where in the examples is there non-process-safe access? (Note that waiting for the only process which modifies a value to terminate using join() will prevent races.) -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16998 ___
[issue16507] Patch selectmodule.c to support WSAPoll on Windows
Richard Oudkerk added the comment: That compiles (after hacking the line endings). One Tulip test fails, PollEventLooptests.testSockClientFail. But that's probably because the PollSelector class hasn't been adjusted for Windows yet (need to dig this out of the Pollster code that was deleted when switching to neologix's Selector). Sorry I did not deal with this earlier. I can make the modifications to PollSelector tomorrow. Just to describe the horrible hack: every time poll() needs to be called we first check if there are any registered async connects. If so then I first use select([], [], connectors) to detect any failed connections, and then use poll() normally. This does mean that to detect failed connections we must never use too large a timeout with poll() when there are outstanding connects. Of course one must decide what is an acceptable maximum timeout -- too short and you might damage battery life, too long and you will not get prompt notification of failures. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16507 ___
[issue16507] Patch selectmodule.c to support WSAPoll on Windows
Richard Oudkerk added the comment: On 21/01/2013 5:38pm, Guido van Rossum wrote: This is a very good question to which I have no good answer. If it weren't for this, we could probably do away with the distinction between add_writer and add_connector, and a lot of code could be simpler. (Or is that distinction also needed for IOCP?) The distinction is not needed by IOCP. I am also not too sure that running tulip on WSAPoll() is a good idea, even if the select module provides it. OFF-TOPIC: Although it is not the optimal way of running tulip with IOCP, I have managed to implement IocpSelector and IocpSocket classes well enough to pass tulip's unittests (except for the ssl one). I did have to make some changes to the tests: selectors have a wrap_socket() method which prepares a socket for use with the selector. On Unix it just returns the socket unchanged, whereas for IocpSelector it returns an IocpSocket wrapper. I also had to make the unittests behave gracefully if there is a spurious wakeup, i.e. the socket is reported as readable, but trying to read fails with BlockingIOError. (Spurious wakeups are possible but very rare with select() etc.) It would be possible to make IocpSelector deal with pipe handles too. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16507 ___
[issue16507] Patch selectmodule.c to support WSAPoll on Windows
Richard Oudkerk added the comment: On 21/01/2013 7:00pm, Guido van Rossum wrote: Regarding your IOCP changes, that sounds pretty exciting. Richard, could you check those into the Tulip as a branch? (Maybe a new branch named 'iocp'.) OK. It may take me a while to rebase them. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16507 ___
[issue16507] Patch selectmodule.c to support WSAPoll on Windows
Richard Oudkerk added the comment: I have created an iocp branch. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16507 ___
[issue16507] Patch selectmodule.c to support WSAPoll on Windows
Richard Oudkerk added the comment: It appears that Linux's spurious readiness notifications are a deliberate deviation from the POSIX standard. (They are mentioned in the BUGS section of the man page for select.) Should I just apply the following patch to the default branch?

    diff -r 3ef7f1fe286c tulip/events_test.py
    --- a/tulip/events_test.py  Mon Jan 21 18:55:29 2013 -0800
    +++ b/tulip/events_test.py  Tue Jan 22 12:09:21 2013 +
    @@ -200,7 +200,12 @@
         r, w = unix_events.socketpair()
         bytes_read = []
         def reader():
    -        data = r.recv(1024)
    +        try:
    +            data = r.recv(1024)
    +        except BlockingIOError:
    +            # Spurious readiness notifications are possible
    +            # at least on Linux -- see man select.
    +            return
             if data:
                 bytes_read.append(data)
             else:
    @@ -218,7 +223,12 @@
         r, w = unix_events.socketpair()
         bytes_read = []
         def reader():
    -        data = r.recv(1024)
    +        try:
    +            data = r.recv(1024)
    +        except BlockingIOError:
    +            # Spurious readiness notifications are possible
    +            # at least on Linux -- see man select.
    +            return
             if data:
                 bytes_read.append(data)
             else:

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16507 ___
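For context, the BlockingIOError the patched reader() tolerates is what a non-blocking read raises when no data is actually available -- the condition a spurious readiness notification produces. A minimal Unix demonstration:

```python
import socket

# A connected pair of sockets with nothing queued on either side.
r, w = socket.socketpair()
r.setblocking(False)

# With no data available, a non-blocking recv() raises BlockingIOError --
# the same exception the patched reader() callback has to swallow.
try:
    r.recv(1024)
    got_error = False
except BlockingIOError:
    got_error = True

r.close()
w.close()
print(got_error)
```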
[issue16507] Patch selectmodule.c to support WSAPoll on Windows
Richard Oudkerk added the comment: According to Alan Cox, "It's a design decision and a huge performance win. It's one of the areas where POSIX read in its strictest form cripples your performance." See https://lkml.org/lkml/2011/6/18/103

(For write ready, you can obviously have spurious notifications if you try to write more than what is available in the output socket buffer). Wouldn't you just get a partial write (assuming an AF_INET, SOCK_STREAM socket)? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16507 ___
[issue16507] Patch selectmodule.c to support WSAPoll on Windows
Richard Oudkerk added the comment: For SOCK_STREAM, yes, not for SOCK_DGRAM I thought SOCK_DGRAM messages just got truncated at the receiving end. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16507 ___
[issue17018] Inconsistent behaviour of methods waiting for child process
Changes by Richard Oudkerk shibt...@gmail.com: -- nosy: +sbt ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17018 ___
[issue16743] mmap on Windows can mishandle files larger than sys.maxsize
Richard Oudkerk added the comment: On 27/01/2013 8:27pm, Terry J. Reedy wrote: I agree we do not need to retain unpredictable 'dumb luck' -- in future versions. But in the absence of a clear discrepancy between doc and behavior (the definition of a bug) I believe breaking such code in a bugfix release would be contrary to current policy.

Currently if you mmap a file with length 4GB+1, then you get an mmap of length 1. Surely that is a *huge* discrepancy between docs and behaviour.

BTW, on 32 bit Windows it looks like the maximum size one can mmap in practice is about 1.1GB:

    PS python -c "import mmap; m = mmap.mmap(-1, int(1.1*1024**3))"
    PS python -c "import mmap; m = mmap.mmap(-1, int(1.2*1024**3))"
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
    WindowsError: [Error 8] Not enough storage is available to process this command

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16743 ___
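The length-1 mapping comes from 32-bit truncation of the requested size; a sketch of the arithmetic, assuming the length lands in an unsigned 32-bit field (as on a 32-bit Windows build):

```python
# A 64-bit length stored into a 32-bit field keeps only the low 32 bits,
# so a request for 4 GiB + 1 bytes silently becomes a request for 1 byte.
GiB = 1024 ** 3
requested = 4 * GiB + 1          # 4 GiB is exactly 2**32
truncated = requested & 0xFFFFFFFF
print(truncated)  # 1
```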
[issue16743] mmap on Windows can mishandle files larger than sys.maxsize
Richard Oudkerk added the comment: On 27/01/2013 9:06pm, Serhiy Storchaka wrote: Every bugfix breaks some code. As a compromise I propose set m_obj->size = PY_SSIZE_T_MAX in case of overflow and emit a warning.

Trying to allocate PY_SSIZE_T_MAX bytes always seems to fail with

    WindowsError: [Error 8] Not enough storage is available to process this command

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16743 ___
[issue17097] baseManager serve_client() not check EINTR when recv request
Changes by Richard Oudkerk shibt...@gmail.com: -- nosy: +sbt ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17097 ___
[issue16743] mmap on Windows can mishandle files larger than sys.maxsize
Richard Oudkerk added the comment: Perhaps NEWS item needed for this change. Done. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16743 ___
[issue16743] mmap on Windows can mishandle files larger than sys.maxsize
Changes by Richard Oudkerk shibt...@gmail.com: -- resolution: -> fixed stage: patch review -> committed/rejected status: open -> closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16743 ___
[issue15528] Better support for finalization with weakrefs
Richard Oudkerk added the comment: Richard, do you still want to push this forward? Otherwise I'd like to finalize the patch (in the other sense ;-). I started to worry a bit about daemon threads. I think they can still run while atexit functions are being run.* So if a daemon thread creates an atexit finalizer during shutdown it may never be run. I am not sure how much to worry about this potential race. Maybe a lock could be used to cause any daemon threads which try to create finalizers to block. * Is it necessary/desirable to allow daemon threads to run during shutdown. Maybe blocking thread switching at shutdown could cause deadlocks? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue15528 ___
[issue15528] Better support for finalization with weakrefs
Richard Oudkerk added the comment: On 14/02/2013 3:16pm, Antoine Pitrou wrote: Mmmh... thread switching is already blocked at shutdown: http://hg.python.org/cpython/file/0f827775f7b7/Python/ceval.c#l434 But in Py_Finalize(), call_py_exitfuncs() is called *before* _Py_Finalizing is set to a non-NULL value. http://hg.python.org/cpython/file/0f827775f7b7/Python/pythonrun.c#l492 -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue15528 ___
[issue16246] Multiprocessing infinite loop on Windows
Changes by Richard Oudkerk shibt...@gmail.com: -- status: pending -> closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16246 ___
[issue15528] Better support for finalization with weakrefs
Richard Oudkerk added the comment: In any case, I think it's just something we'll have to live with. Daemon threads are not a terrific idea in general. I agree. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue15528 ___
[issue17221] Resort Misc/NEWS
Richard Oudkerk added the comment: I did not realize there was an 'Extension Modules' section. I have been putting changes to C extensions in the 'Library' section instead. It looks like most people do the same as me. -- nosy: +sbt ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17221 ___
[issue17221] Resort Misc/NEWS
Richard Oudkerk added the comment: Was not it be yanked in 1fabff717ef4? Looks like it was reintroduced by this merge changeset: http://hg.python.org/cpython/rev/30fc620e240e -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17221 ___
[issue15004] add weakref support to types.SimpleNamespace
Richard Oudkerk added the comment: Good, except that you have to add a gc.collect() call for the non-refcounted implementations. Better to use test.support.gc_collect(). -- nosy: +sbt ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue15004 ___
[issue17258] multiprocessing.connection challenge implicitly uses MD5
Richard Oudkerk added the comment: Banning md5 as a matter of policy may be perfectly sensible. However, I think the way multiprocessing uses hmac authentication is *not* affected by the collision attacks the advisory talks about. These depend on the attacker being able to determine for himself whether a particular candidate string is a solution. But with the way multiprocessing uses hmac authentication there is no way for the attacker to check for himself whether a candidate string has the desired hash: he does not know what the desired hash value is, or even what the hash function is. (The effective hash function, though built on top of md5, depends on the secret key.) -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17258 ___
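A simplified sketch of the keyed challenge-response scheme described above (an illustration of the principle, not the exact wire protocol multiprocessing uses):

```python
import hmac
import os

def server_challenge():
    # The server sends a fresh random message to the connecting client.
    return os.urandom(20)

def client_answer(authkey, message):
    # Only a client that knows the shared key can compute this digest.
    return hmac.new(authkey, message, 'md5').digest()

def server_verify(authkey, message, digest):
    # The expected digest depends on the secret key, so an attacker
    # cannot test candidate answers offline.
    expected = hmac.new(authkey, message, 'md5').digest()
    return hmac.compare_digest(expected, digest)

key = b'secret shared key'
msg = server_challenge()
print(server_verify(key, msg, client_answer(key, msg)))        # accepted
print(server_verify(key, msg, client_answer(b'wrong key', msg)))  # rejected
```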
[issue17261] multiprocessing.manager BaseManager cannot return proxies from proxies remotely (when listening on '')
Changes by Richard Oudkerk shibt...@gmail.com: -- nosy: +sbt ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17261 ___
[issue17273] multiprocessing.pool.Pool task/worker handlers are not fork safe
Richard Oudkerk added the comment: A pool should only be used by the process that created it (unless you use a managed pool). If you are creating long lived processes then you could create a new pool on demand. For example (untested)

    pool_pid = (None, None)

    def get_pool():
        global pool_pid
        if os.getpid() != pool_pid[1]:
            pool_pid = (Pool(), os.getpid())
        return pool_pid[0]

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17273 ___
[issue17273] multiprocessing.pool.Pool task/worker handlers are not fork safe
Richard Oudkerk added the comment: Richard, are you suggesting that we close this, or do you see an actionable issue? (a plausible patch to the repository?) I skimmed the documentation and could not see that this restriction has been documented. So I think a documentation patch would be a good idea. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17273 ___
[issue10527] multiprocessing.Pipe problem: handle out of range in select()
Changes by Richard Oudkerk shibt...@gmail.com: -- resolution: -> fixed stage: commit review -> committed/rejected status: open -> closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue10527 ___
[issue17223] Initializing array.array with unicode type code and buffer segfaults
Richard Oudkerk added the comment: The new test seems to be reliably failing on Windows:

    ==
    FAIL: test_issue17223 (__main__.UnicodeTest)
    --
    Traceback (most recent call last):
      File "C:\Repos\cpython-dirty\lib\test\test_array.py", line 1075, in test_issue17223
        self.assertRaises(ValueError, a.tounicode)
    AssertionError: ValueError not raised by tounicode
    --

-- nosy: +sbt status: closed -> open ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17223 ___
[issue17314] Stop using imp.find_module() in multiprocessing
Richard Oudkerk added the comment: I think this change will potentially make the main module get imported twice under different names when we transfer pickled data between processes. The current code (which is rather a mess) goes out of its way to avoid that. Basically the main process makes sys.modules['__mp_main__'] an alias for the main module, and other process import the parent's main module with __name__ == '__mp_main__' and make sys.modules['__main__'] an alias for that. This means that any functions/classes defined in the main module (from whatever process) will have obj.__module__ in {'__main__', '__mp_main__'} Unpickling such an object will succeed in any process without reimporting the main module. Attached is an alternative patch which is more like the original code and seems to work. (Maybe modifying loader.name is an abuse of the API.) -- Added file: http://bugs.python.org/file29274/mp-importlib.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17314 ___
[issue17025] reduce multiprocessing.Queue contention
Richard Oudkerk added the comment: It looks like queues_contention.diff has the line obj = pickle.dumps(obj) in both _feed() and put(). Might that be why the third set of benchmarks was slower than the second? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17025 ___
[issue17025] reduce multiprocessing.Queue contention
Richard Oudkerk added the comment: On 04/03/2013 8:01pm, Charles-François Natali wrote: It looks like queues_contention.diff has the line obj = pickle.dumps(obj) in both _feed() and put(). Might that be why the third set of benchmarks was slower than the second? _feed() is a Queue method, put() its SimpleQueue() counterpart. Am I missing something? No. I only looked at the diff and assumed both changes were for Queue. Since you marked issue 10886 as superseded, do you intend to do the pickling in put()? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17025 ___
[issue17364] Multiprocessing documentation mentions function that doesn't exist
Changes by Richard Oudkerk shibt...@gmail.com: -- nosy: +sbt
[issue17367] subprocess deadlock when read() is interrupted
Richard Oudkerk added the comment: The change in your patch is in a Windows-only section -- a few lines before the chunk you can see _winapi.GetExitCodeProcess(). Since read() on Windows never fails with EINTR, there is no need for _eintr_retry_call(). If you are using Linux then there must be some other reason for your deadlock. -- nosy: +sbt
[issue17367] subprocess deadlock when read() is interrupted
Richard Oudkerk added the comment: BTW, threads are only used on Windows. On Unix select() or poll() is used. --
[issue17367] subprocess deadlock when read() is interrupted
Richard Oudkerk added the comment: I will close the issue then. If you track the problem down to a bug in Python then you can open a new one. -- resolution: - invalid stage: - committed/rejected status: open - closed
[issue16895] Batch file to mimic 'make' on Windows
Richard Oudkerk added the comment: +1

To use Tools/buildbot/*.bat doesn't the current directory have to be the main directory of the repository? Then I see no point in the -C argument: just set the correct directory automatically.

I think make.bat should also support creation of non-debug builds. (Maybe have targets release and debug?)

Tools/buildbot/build*.bat already calls external.bat and clean.bat. This currently makes the ready target unnecessary. However, I don't think build should be calling clean.bat (or external.bat). Perhaps you should just inline the necessary parts of Tools/buildbot/build*.bat.

-- nosy: +sbt
[issue16895] Batch file to mimic 'make' on Windows
Richard Oudkerk added the comment:
> What does running kill-python before re-building python do? I have not
> seen it mentioned in the devguide or PCbuild/readme.

It kills any currently running python(_d).exe processes. This is because Windows does not allow program or library files to be removed or overwritten while they are being used, potentially causing compilation failures. --
[issue17395] Wait for live children in test_multiprocessing
Richard Oudkerk added the comment: LGTM (although the warning is actually harmless). --
[issue16389] re._compiled_typed's lru_cache causes significant degradation of the mako_v2 bench
Richard Oudkerk added the comment:
> Which does give me a thought - perhaps lru_cache in 3.4 could accept a
> key argument that is called as key(*args, **kwds) to derive the cache
> key? (that would be a separate issue, of course)

Agreed. I suggested the same in an earlier post. --
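The suggested extension could look roughly like this. lru_cache_with_key is a hypothetical illustration of the proposal -- functools.lru_cache itself accepts no such parameter -- using an OrderedDict for the LRU bookkeeping:

```python
import functools
from collections import OrderedDict

def lru_cache_with_key(maxsize=128, key=None):
    # Hypothetical variant of lru_cache whose cache key is derived by
    # calling key(*args, **kwds) instead of hashing the raw arguments.
    def decorator(func):
        cache = OrderedDict()
        @functools.wraps(func)
        def wrapper(*args, **kwds):
            k = key(*args, **kwds) if key else (args, frozenset(kwds.items()))
            if k in cache:
                cache.move_to_end(k)        # mark as most recently used
                return cache[k]
            result = func(*args, **kwds)
            cache[k] = result
            if len(cache) > maxsize:
                cache.popitem(last=False)   # evict least recently used
            return result
        return wrapper
    return decorator

calls = []

@lru_cache_with_key(maxsize=2, key=lambda s: s.lower())
def shout(s):
    calls.append(s)
    return s.upper()

assert shout('abc') == 'ABC'
assert shout('ABC') == 'ABC'   # cache hit: same key after lower()
assert calls == ['abc']
```

The mako benchmark case would then pass a key that skips hashing the large template strings directly.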
[issue17395] Wait for live children in test_multiprocessing
Richard Oudkerk added the comment:
> Why 1? This should be commented.

The manager process will always be included in active_children(). --
[issue17399] test_multiprocessing hang on Windows, non-sockets
Richard Oudkerk added the comment: Does this happen every time you run the tests? (I don't see these errors.) --
[issue17399] test_multiprocessing hang on Windows, non-sockets
Richard Oudkerk added the comment: Could you try the following program:

    import socket
    import multiprocessing
    import multiprocessing.reduction
    import multiprocessing.connection

    def socketpair():
        with socket.socket() as l:
            l.bind(('localhost', 0))
            l.listen(1)
            s = socket.socket()
            s.connect(l.getsockname())
            a, _ = l.accept()
        return s, a

    def bar(s):
        print(s)
        s.sendall(b'from bar')

    if __name__ == '__main__':
        a, b = socketpair()
        p = multiprocessing.Process(target=bar, args=(b,))
        p.start()
        b.close()
        print(a.recv(100))

--
[issue17399] test_multiprocessing hang on Windows, non-sockets
Richard Oudkerk added the comment: Now could you try the attached file? (It will not work on 2.7 because of a missing socket.fromfd().)

P.S. It looks like the error for 3.3 is associated with a file f:\python\mypy\traceback.py which presumably clashes with the one in the standard library.

-- Added file: http://bugs.python.org/file29389/inherit_socket.py
[issue17399] test_multiprocessing hang on Windows, non-sockets
Richard Oudkerk added the comment:
> Both 3.2 and 3.3 give essentially the same traceback as 3.2 did before,
> both with installed python and yesterday's debug builds.

It looks like on your machine socket handles are not correctly inherited by child processes -- I had assumed that they always would be. I suppose to fix things for 3.2 and earlier it would be necessary to backport the functionality of socket.socket.share() and socket.fromshare() from 3.3. --
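A backport along those lines would let the parent marshal the socket explicitly instead of relying on handle inheritance. A sketch of how the 3.3 API is used -- the helper names here are invented, and socket.share()/socket.fromshare() exist on Windows only:

```python
import socket
import sys

def send_socket(sock, child_pid, conn):
    # Parent side (Windows, 3.3+): duplicate the socket for the target
    # process; the returned bytes blob is only valid for child_pid.
    data = sock.share(child_pid)
    conn.send_bytes(data)

def recv_socket(conn):
    # Child side: rebuild a usable socket object from the blob.
    data = conn.recv_bytes()
    return socket.fromshare(data)
```

Shipping the blob over a multiprocessing pipe sidesteps inheritance entirely, which is why a backport would fix the behaviour seen here on 3.2 and earlier.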