Richard Oudkerk added the comment:
Actually it is test.with_project_on_sys_path() in setuptools/commands/test.py
that does the save/restore of sys.modules. See
http://www.eby-sarna.com/pipermail/peak/2010-May/003357.html
--
___
Python tracker
Richard Oudkerk added the comment:
I get the same hang on Linux with Python 3.2.
For Windows the documentation does warn against starting a process as a side
effect of importing a module. There is no explicit warning for Unix, but I
would still consider it bad form to do such things.
Richard Oudkerk added the comment:
Here is a reproduction without using multiprocessing:
create.py:
import threading, os

def foo():
    print("Trying import")
    import sys
    print("Import successful")

pid = os.fork()
if pid == 0:
    try:
        t = threading.Thread(target=foo)
        t.start()
        t.join()
    finally:
        os._exit(0)
Changes by Richard Oudkerk shibt...@gmail.com:
--
type: crash -> behavior
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue15914
Richard Oudkerk added the comment:
Python 3.2 has extra code in _PyImport_ReInitLock() which means that when a
fork happens as a side effect of an import, the main thread of the forked
process owns the import lock. Therefore other threads in the forked process
cannot import anything
Richard Oudkerk added the comment:
It looks like the problem was caused by the fix for
http://bugs.python.org/issue9573
I think the usage this was intended to enable is evil, since one of the forked
processes should always be terminated with os._exit().
Richard Oudkerk added the comment:
New patch which checks the refcount of the memoryview and bytes object after
calling readinto().
If either refcount is larger than the expected value of 1, then the data is
copied rather than resized.
--
Added file: http://bugs.python.org/file27211
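The hazard the refcount check guards against can be sketched from the Python side (the Leaky class and the stash name here are hypothetical, purely for illustration):

```python
import io
import sys

# Hypothetical file object whose readinto() stashes the buffer it is given.
class Leaky(io.RawIOBase):
    def readinto(self, buf):
        global stash
        stash = buf          # keep an extra reference to the memoryview
        buf[:4] = b'data'
        return 4

f = Leaky()
view = memoryview(bytearray(16))
f.readinto(view)
# readinto() has returned, but `stash` still references `view`, so the
# caller cannot safely resize or free the underlying buffer in place;
# the refcount is larger than the expected value, so data must be copied.
print(sys.getrefcount(view))
```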
Richard Oudkerk added the comment:
> I think that's a useless precaution. The bytes object cannot leak
> since you are using PyMemoryView_FromMemory(), which doesn't know about
> the original object.

The bytes object cannot leak so, as you say, checking that refcount is
pointless. But the view
Richard Oudkerk added the comment:
Then the view owns a reference to the bytes object. But that does not
solve the problem that writable memoryviews based on a readonly object
might be hanging around.
How about doing
    PyObject_GetBuffer(b, &buf, PyBUF_WRITABLE);
view
Richard Oudkerk added the comment:
The current non-test uses of PyMemoryView_FromBuffer() are in
_io.BufferedReader.read(), _io.BufferedWriter.write(), PyUnicode_Decode().
It looks like they can each be made to leak a memoryview that references a
deallocated buffer. (Maybe the answer
Richard Oudkerk added the comment:
I am rather confused about the ownership semantics when one uses
PyMemoryView_FromBuffer().
It looks as though PyMemoryView_FromBuffer() steals ownership of the buffer
since, when the associated _PyManagedBufferObject is garbage collected,
PyBuffer_Release
Richard Oudkerk added the comment:
> You would need to call memory_release(). Perhaps we can just expose it on the
> C-API level as PyMemoryView_Release().
Should PyMemoryView_Release() release the _PyManagedBufferObject by doing
mbuf_release(view->mbuf) even if view->mbuf->exports > 0?
Doing
Richard Oudkerk added the comment:
> Are we talking about a big speedup here or could we perhaps just keep
> the existing code?
I doubt it is worth the hassle. But I did want to know if there was a clean
way to do what I wanted.
--
___
Python
Changes by Richard Oudkerk shibt...@gmail.com:
--
nosy: +sbt
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue15983
New submission from Richard Oudkerk:
A memoryview which does not own a reference to its base object can point to
freed or reallocated memory. For instance the following segfaults for me on
Windows and Linux.
    import io

    class File(io.RawIOBase):
        def readinto(self, buf):
            global
Richard Oudkerk added the comment:
I notice that queue.Queue.join() does not have a timeout parameter either.
Have you hit a particular problem that would be substantially easier with the
patch?
--
versions: -Python 2.7, Python 3.2, Python 3.3
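A join-with-timeout can be sketched on top of queue.Queue's internals (all_tasks_done and unfinished_tasks are undocumented implementation details, so this only illustrates what such a patch would provide):

```python
import queue
import threading
import time

def join_with_timeout(q, timeout):
    # Like Queue.join(), but give up after `timeout` seconds.
    # Returns True if all tasks finished, False on timeout.
    deadline = time.monotonic() + timeout
    with q.all_tasks_done:
        while q.unfinished_tasks:
            remaining = deadline - time.monotonic()
            if remaining <= 0.0:
                return False
            q.all_tasks_done.wait(remaining)
    return True

q = queue.Queue()
q.put(1)
# A worker finishes the task shortly after we start waiting.
threading.Timer(0.1, lambda: (q.get(), q.task_done())).start()
print(join_with_timeout(q, 5.0))  # True
```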
Richard Oudkerk added the comment:
> I've added a new patch, that implements a shared/exclusive lock as
> described in my comments above, for the threading and multiprocessing
> module.

The patch does not seem to touch the threading module and does not come with
tests. I doubt
Richard Oudkerk added the comment:
@Sebastian: Both your patch sets are missing the changes to threading.py.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8800
Richard Oudkerk added the comment:
@richard: I'm sorry, but both of my patches contain changes to
'Lib/threading.py' and can be applied on top of Python 3.3.0. So can you
explain what you mean by missing the changes to threading.py?
I was reading the Rietveld review page
http
Richard Oudkerk added the comment:
> With this, you are stuck with employing a context manager model only.
> You lose the flexibility to do explicit acquire_read() or
> acquire_write().

You are not restricted to the context manager model. Just use
selock.shared.acquire
Richard Oudkerk added the comment:
I think Sebastian's algorithm does not correctly deal with the non-blocking
case. Consider the following situation:
* Thread-1 successfully acquires exclusive lock.
Now num_got_lock == 1.
* Thread-2 blocks waiting for shared lock.
Will block until
Richard Oudkerk added the comment:
My previous comment applied to Sebastian's first patch. The second seems to
fix the issue.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8800
Richard Oudkerk added the comment:
I think you got that argument backwards. The simple greedy policy you
implement works well provided there are not too many readers. Otherwise,
the writers will be starved, since they have to wait for an opportune
moment when no readers are active to get
Richard Oudkerk added the comment:
> The unlock operation is the same, so now you have to arbitrarily pick one
> of the locks and choose release().

That depends on the implementation. In the three implementations on
http://en.wikipedia.org/wiki/Readers-writers_problem
the unlock
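For reference, a minimal shared/exclusive lock with the "simple greedy" policy discussed above can be sketched as follows (an illustration only, not the patch attached to issue 8800):

```python
import threading

class RWLock:
    """Minimal shared/exclusive lock sketch (greedy policy).

    Readers are admitted even while a writer is waiting, so a steady
    stream of readers can starve writers -- exactly the drawback
    discussed above.
    """
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def acquire_read(self):
        with self._cond:
            while self._writer:
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_write(self):
        with self._cond:
            while self._writer or self._readers:
                self._cond.wait()
            self._writer = True

    def release_write(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()
```

Multiple readers may hold the lock at once; a writer waits until no reader or writer is active.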
Richard Oudkerk added the comment:
The patch does not seem to walk the mro to look for slots in base classes.
Also, an instance with a __dict__ attribute may also have attributes stored in
slots.
BTW, copyreg._slotnames(cls) properly calculates the slot names for cls and
tries to cache them
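copyreg._slotnames() (undocumented, but long stable) walks the MRO, so inherited slots are included:

```python
import copyreg

class Base:
    __slots__ = ('a',)

class Derived(Base):
    __slots__ = ('b',)

# Walks Derived.__mro__, so the inherited slot 'a' is included
# alongside Derived's own slot 'b'.
print(copyreg._slotnames(Derived))
```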
Richard Oudkerk added the comment:
> Multiprocessing: Because there is no way I know to share a list of
> owning thread ids, this version is more limited

Why do you need a *shared* list? I think it should be fine to use a
per-process list of owning thread ids. So the current thread owns
Richard Oudkerk added the comment:
Well, what I am doing is more or less the equivalent of
    return object.__slots__ if hasattr(object, '__slots__') else object.__dict__
and this is coherent with the updated documentation. The one you
proposed is an alternative behavior; am I supposed
Richard Oudkerk added the comment:
> This is from the Python side. Does the ht_slots field of PyHeapTypeObject
> not contain properly calculated slot names?

Looking at the code, it appears that ht_slots does *not* include inherited
slots.
--
___
Python
Richard Oudkerk added the comment:
> That modifying the dict has no effect on the object is okay.

I have written vars(obj).update(...) before. I don't think it would be okay
to break that.
--
___
Python tracker rep...@bugs.python.org
http
Richard Oudkerk added the comment:
A search of Google Code shows 42 examples of vars(...).update compared to 3000
for .__dict__.update. I don't know if that is enough to worry about.
http://code.google.com/codesearch#search/q=vars\%28[A-Za-z0-9_]%2B\%29\.update%20lang:^python$type=cs
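The pattern in question works today because vars() returns the instance's real __dict__, so mutating it mutates the object:

```python
class Config:
    pass

cfg = Config()
# vars(cfg) is cfg.__dict__ itself, not a copy, so update() sets
# real attributes on the instance.
vars(cfg).update(host='localhost', port=8080)
print(cfg.host, cfg.port)
```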
Richard Oudkerk added the comment:
Attached is a new version of Kristjan's patch with support for managers. (A
threading._RWLockCore object is proxied and wrapped in a local instance of a
subclass of threading.RWLock.)
Also I made multiprocessing.RWLock.__init__() use
Richard Oudkerk added the comment:
Fixed patch because I didn't test on Unix...
--
Added file: http://bugs.python.org/file27422/rwlock-sbt.patch
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8800
Changes by Richard Oudkerk shibt...@gmail.com:
Removed file: http://bugs.python.org/file27421/rwlock-sbt.patch
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8800
Richard Oudkerk added the comment:
This is more or less a duplicate of #15833 (although the errno mentioned there
is EIO instead of the more sensible EROFS).
--
nosy: +sbt
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16139
Richard Oudkerk added the comment:
Kristjan: you seem to have attached socketserver.patch to the wrong issue.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8800
Richard Oudkerk added the comment:
_sha3 is not being built on Windows, so importing hashlib fails:

    >>> import hashlib
    ERROR:root:code for hash sha3_224 was not found.
    Traceback (most recent call last):
      File "C:\Repos\cpython-dirty\lib\hashlib.py", line 109, in __get_openssl_constructor
        f
Richard Oudkerk added the comment:
> 6cf6b8265e57 and 8172cc8bfa6d have fixed the issue on my VM. I didn't
> notice the issue as I only tested hashlib with the release builds, not
> the debug builds. Sorry for that.

Ah. I did not even notice there was _sha3.vcxproj.
Is there any particular
New submission from Richard Oudkerk:
ctypes.WinError() is defined as

    def WinError(code=None, descr=None):
        if code is None:
            code = GetLastError()
        if descr is None:
            descr = FormatError(code).strip()
        return WindowsError(code, descr)

Since
Changes by Richard Oudkerk shibt...@gmail.com:
--
resolution:  -> fixed
stage: needs patch -> committed/rejected
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16169
Richard Oudkerk added the comment:
> Cogen [4] uses ctypes wrapper.

In the code for the IOCP reactor only ctypes.FormatError() is used from ctypes.
It uses pywin32 instead.
--
nosy: +sbt
___
Python tracker rep...@bugs.python.org
http
Richard Oudkerk added the comment:
Note that since Python 3.3, multiprocessing and _winapi make some use of
overlapped IO.
One can use _winapi.ReadFile() and _winapi.WriteFile() to do overlapped IO on
normal socket handles created using socket.socket
Richard Oudkerk added the comment:
Adding the IOCP functions to _winapi is straightforward -- see patch. Note
that there seems to be no way to unregister a handle from an IOCP.
Creating overlapped equivalents of socket.accept() and socket.connect() looks
more complicated. Perhaps
Changes by Richard Oudkerk shibt...@gmail.com:
Added file: http://bugs.python.org/file27516/iocp_example.py
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16175
Richard Oudkerk added the comment:
I think this is a duplicate of Issue #15646 which has been fixed in the 2.7 and
3.x branches.
If you run Lib/test/mp_fork_bomb.py you should get a RuntimeError with a
helpful message telling you to use the "if __name__ == '__main__'" idiom
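The idiom in question guards process creation so that a freshly imported __main__ in a child process does not start children of its own:

```python
import multiprocessing

def work():
    return 42

if __name__ == '__main__':
    # Without this guard, platforms that import __main__ in the child
    # (e.g. Windows) would recursively spawn processes -- a fork bomb.
    p = multiprocessing.Process(target=work)
    p.start()
    p.join()
```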
Richard Oudkerk added the comment:
> select() other than being supported on all platforms has the advantage of
> being simple and quick to use (you just call it once by passing a set of fds
> and then you're done).

Do you mean at the C level? Wouldn't you just do

    struct pollfd pfd = {fd
New submission from Richard Oudkerk:
Using VS2010 _socket links against ws2_32.lib but select links against
wsock32.lib.
Using VS2008 both extensions link against ws2_32.lib. It appears that the
conversion to VS2010 caused the regression.
(Compare #10295 and #11750.)
--
messages
Richard Oudkerk added the comment:
> A preliminary patch is in attachment.
> By default it uses select() but looks for ValueError (raised in case
> FD_SETSIZE gets hit) and falls back on using poll().

This is the failure I get when running tests on Linux.
It is related to issue 3321 and I'm
Richard Oudkerk added the comment:
> Using poll() by default is controversial for 2 reasons, I think:
> #1 - a certain slowdown is likely to be introduced (I'll measure it)

With a single fd poll is a bit faster than select:

    $ python -m timeit -s 'from select import select' 'select([0],[],[],0
Richard Oudkerk added the comment:
> Still not getting what you refer to when you talk about 512 fds
> problem.

Whether you get back the original objects or only their fds will depend on
whether some fd was larger than FD_SETSIZE.
--
___
Python
Richard Oudkerk added the comment:
This problem affects any single use of select(): instead of using an
ad-hoc wrapper in each module, it would probably make sense to add a
higher level selector class to the select module which would fallback on
the right syscall (i.e. poll() if available
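This idea later materialized as the selectors module (added in Python 3.4), which wraps the best syscall available on the platform behind one interface:

```python
import selectors
import socket

sel = selectors.DefaultSelector()   # epoll/kqueue/poll/select, whichever is best
a, b = socket.socketpair()

sel.register(a, selectors.EVENT_READ, data='a is readable')
b.send(b'ping')

# select() returns (key, events) pairs for ready file objects.
for key, events in sel.select(timeout=1.0):
    print(key.data)

sel.unregister(a)
sel.close()
a.close()
b.close()
```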
Richard Oudkerk added the comment:
A use case for not using fork() is when your parent process opens some
system resources of some sort (for example a listening TCP socket). The
child will then inherit those resources, which can have all kinds of
unforeseen and troublesome consequences
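multiprocessing eventually grew exactly this option as "start methods" (Python 3.4+): the 'spawn' method starts a fresh interpreter instead of fork()ing, so the child inherits no sockets or other resources by accident. A minimal sketch:

```python
import multiprocessing as mp

def serve(q):
    q.put('started clean')

if __name__ == '__main__':
    ctx = mp.get_context('spawn')   # fresh interpreter; nothing inherited via fork()
    q = ctx.Queue()
    p = ctx.Process(target=serve, args=(q,))
    p.start()
    print(q.get())
    p.join()
```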
Richard Oudkerk added the comment:
LGTM
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16284
Changes by Richard Oudkerk shibt...@gmail.com:
--
resolution:  -> fixed
stage: needs patch -> committed/rejected
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16295
Changes by Richard Oudkerk shibt...@gmail.com:
--
nosy: +sbt
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16307
Richard Oudkerk added the comment:
For updated code see http://hg.python.org/sandbox/sbt#spawn
This uses _posixsubprocess and closefds=True.
--
hgrepos: +157
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8713
Changes by Richard Oudkerk shibt...@gmail.com:
--
nosy: +sbt
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7292
Richard Oudkerk added the comment:
Maybe lru_cache() should have a key argument so you can specify a specialized
key function. So you might have
def _compile_key(args, kwds, typed):
return args
@functools.lru_cache(maxsize=500, key=_compile_key)
def _compile(pattern
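functools.lru_cache() has no such key parameter; a hedged sketch of what the proposal might look like, built on an OrderedDict (the decorator name here is invented):

```python
from collections import OrderedDict
import functools

def lru_cache_with_key(maxsize, key):
    """Hypothetical lru_cache(key=...); NOT part of functools."""
    def decorator(func):
        cache = OrderedDict()
        @functools.wraps(func)
        def wrapper(*args, **kwds):
            k = key(args, kwds)
            if k in cache:
                cache.move_to_end(k)        # mark as most recently used
                return cache[k]
            result = func(*args, **kwds)
            cache[k] = result
            if len(cache) > maxsize:
                cache.popitem(last=False)   # evict least recently used
            return result
        return wrapper
    return decorator

calls = []

# Key only on the pattern, ignoring flags -- analogous to _compile_key
# above returning just `args`.
@lru_cache_with_key(maxsize=2, key=lambda args, kwds: args[0])
def compile_pattern(pattern, flags=0):
    calls.append(pattern)
    return (pattern, flags)

compile_pattern('a')
compile_pattern('a', flags=1)   # cache hit: same key despite different flags
print(calls)
```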
Richard Oudkerk added the comment:
The patch does not apply correctly against vanilla Python 3.3. I would guess
that you are using a version of Python which has been patched to add mingw
support. Where did you get it from?
(In vanilla Python 3.3, setup.py does not contain any mention
Changes by Richard Oudkerk shibt...@gmail.com:
--
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16454
Richard Oudkerk added the comment:
In Torsten's example

    from . import moduleX

can be replaced with

    moduleX = importlib.import_module('.moduleX', __package__)    (*)

or

    moduleX = importlib.import_module('package.moduleX')

If that is not pretty enough then perhaps the new
New submission from Richard Oudkerk:
On Windows the handle for a child process is not closed when the popen/process
object is garbage collected.
--
messages: 175629
nosy: sbt
priority: normal
severity: normal
stage: needs patch
status: open
title: process handle leak on windows
Richard Oudkerk added the comment:
Fixed in c574ce78cd61 and cb612c5f30cb.
--
resolution:  -> fixed
stage: needs patch -> committed/rejected
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16481
Richard Oudkerk added the comment:
pthread_atfork() allows the registering of three types of callbacks:
1) prepare callbacks which are called before the fork,
2) parent callbacks which are called in the parent after the fork
3) child callbacks which are called in the child after the fork.
I
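This three-callback design is what eventually landed as os.register_at_fork() in Python 3.7 (POSIX only, since it relies on os.fork()):

```python
import os

events = []

os.register_at_fork(
    before=lambda: events.append('prepare'),          # in parent, before fork
    after_in_parent=lambda: events.append('parent'),  # in parent, after fork
    after_in_child=lambda: events.append('child'),    # in child, after fork
)

pid = os.fork()
if pid == 0:
    # The child sees ['prepare', 'child'].
    os._exit(0)
os.waitpid(pid, 0)
print(events)   # the parent sees ['prepare', 'parent']
```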
Richard Oudkerk added the comment:
Note that Gregory P. Smith has written
http://code.google.com/p/python-atfork/
I also started a pure Python patch but did not get round to posting it. (It
also implements the fork lock idea.) I'll attach it here.
How do you intend to handle
Richard Oudkerk added the comment:
IFF we are going to walk the hard and rocky road of exception handling,
then we are going to need at least four hooks and a register function that
takes four callables as arguments: register(prepare, error, parent,
child). Each prepare() call pushes
Richard Oudkerk added the comment:
http://hg.python.org/sandbox/sbt#spawn now contains support for starting
processes via a separate server process. This depends on fd passing support.
This also solves the problem of mixing threads and processes, but is much
faster than using fork+exec
Richard Oudkerk added the comment:
I am reopening this issue because 26bbff4562a7 only dealt with objects which
cannot be pickled. But CalledProcessError instances *can* be pickled: the
problem is that the resulting data cannot be unpickled.
Note that in Python 3.3 CalledProcessError can
Richard Oudkerk added the comment:
The example works correctly on 3.3 because of #1692335. I am not sure if it is
appropriate to backport it though.
This is a duplicate of #9400 which I have assigned to myself. (I had thought
it was already fixed.)
--
resolution:  -> duplicate
stage
Richard Oudkerk added the comment:
The patch is liable to break programs which explicitly call base constructors
because the constructor will be called more than once.
It also assumes that the __init__() method of all base classes should be called
with no arguments (other than self
Richard Oudkerk added the comment:
> But I think the problem remains: do you agree that classes should include
> a super() call in their __init__?

No, I don't.
I think super() is an attractive nuisance which is difficult to use correctly
in an __init__() method, except in the trivial case
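A sketch of why super().__init__() is hard to get right outside the trivial case: under multiple inheritance the next class in the MRO depends on the subclass, so every cooperating __init__() must consume its own arguments and forward the rest blindly:

```python
class A:
    def __init__(self, **kwds):
        self.a = kwds.pop('a')
        super().__init__(**kwds)   # next in MRO may be B, not object

class B:
    def __init__(self, **kwds):
        self.b = kwds.pop('b')
        super().__init__(**kwds)

class C(A, B):
    # MRO is C, A, B, object: A's super() call reaches B.
    pass

c = C(a=1, b=2)   # works only because every __init__ cooperates
print(c.a, c.b)
```

If any class in the hierarchy takes positional arguments or forgets to forward **kwds, construction of some subclass breaks, which is the "difficult to use correctly" point above.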
Changes by Richard Oudkerk shibt...@gmail.com:
--
nosy: +sbt
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16579
Richard Oudkerk added the comment:
Attached is an alternative patch which only touches selectmodule.c. It still
does not support WinXP.
Note that in this version register() and modify() do not ignore the POLLPRI
flag if it was *explicitly* passed. But I am not sure how best to deal
Richard Oudkerk added the comment:
Here is a version which loads WSAPoll at runtime. Still no tests or docs.
--
Added file: http://bugs.python.org/file28207/runtime_wsapoll.patch
___
Python tracker rep...@bugs.python.org
http://bugs.python.org
New submission from Richard Oudkerk:
PollTests.poll_unit_tests() is not run because its method name does not begin
with "test". It looks like it was accidentally disabled when test_poll() was
converted to unittest in f56b25168142.
Renaming it test_poll_unit_tests() makes it run successfully
Richard Oudkerk added the comment:
It is a recent kernel and does support pipe2().
After some debugging it appears that a pipe handle created in Popen.__init__()
was being leaked to a forked process, preventing Popen.__init__() from
completing before the forked process did.
Previously
Richard Oudkerk added the comment:
Although it is undocumented, in python 3.4 you can control the prefix used by
doing
multiprocessing.current_process()._config['semprefix'] = 'myprefix'
in the main process at the beginning of the program.
Unfortunately, this will make the full prefix
Richard Oudkerk added the comment:
This was fixed for 3.3 in #1692335.
The issue of backporting to 2.7 is discussed in #17296.
--
resolution:  -> duplicate
status: open -> closed
superseder:  -> Cannot unpickle classes derived from 'Exception'
type: crash -> behavior
Richard Oudkerk added the comment:
So hopefully the bug should disappear entirely in future releases of tcl,
but for now you can work around it by building tcl without threads,
calling exec in between the fork and any use of tkinter in the child
process, or not importing tkinter until
Richard Oudkerk added the comment:
Fixed by #11161.
--
resolution:  -> fixed
stage:  -> committed/rejected
status: open -> closed
superseder:  -> futures.ProcessPoolExecutor hangs
___
Python tracker rep...@bugs.python.org
http://bugs.python.org
Richard Oudkerk added the comment:
If you have a pending overlapped operation then the associated buffer should
not be deallocated until that operation is complete, or else you are liable to
get a crash or memory corruption.
Unfortunately WinXP provides no reliable way to cancel a pending
Richard Oudkerk added the comment:
> As close() on regular files, I would prefer to call explicitly cancel()
> to control exactly when the overlapped operation is cancelled.

If you use daemon threads then you have no guarantee that the thread will ever
get a chance to explicitly call cancel
Richard Oudkerk added the comment:
I think the attached patch should fix it. Note that with the patch the
RuntimeError can probably only occur on Windows XP.
Shall I apply it?
--
keywords: +patch
Added file: http://bugs.python.org/file32597/dealloc-runtimeerror.patch
Richard Oudkerk added the comment:
On 13/11/2013 3:07pm, STINNER Victor wrote:
>> On Vista and later, yes, this is done in the deallocator using
>> CancelIoEx(), although there is still a warning.
>
> I don't understand. The warning is emitted because an operation is not done
> nor cancelled. Why
Richard Oudkerk added the comment:
Note that on Windows if you redirect the standard streams then *all*
inheritable handles are inherited by the child process.
Presumably the handle for the f_w file object (and/or a duplicate of it) created
in one thread is accidentally leaked to the other child
Changes by Richard Oudkerk shibt...@gmail.com:
--
resolution:  -> fixed
stage:  -> committed/rejected
status: open -> closed
type: behavior ->
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16998
Richard Oudkerk added the comment:
Thanks for the patches.
Fixed in 7aabbe919f55, 11cafbe6519f.
--
resolution:  -> fixed
stage:  -> committed/rejected
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue19338
Richard Oudkerk added the comment:
Hopefully the applied change will fix the failure (or at least make it much
less likely).
--
resolution:  -> fixed
stage:  -> committed/rejected
status: open -> closed
type:  -> behavior
___
Python tracker rep
Richard Oudkerk added the comment:
I don't think the patch to the _test_multiprocessing will work. It defines
cls._Popen but I don't see how that would be used by cls.Pool to start the
processes.
I will have a think about a fix.
--
___
Python
Richard Oudkerk added the comment:
> If the result of os.read() was stored in a Python daemon thread, the
> memory should be released since the following changeset. Can someone
> check if this issue still exists?

If a daemon thread is killed while it is blocking on os.read() then the bytes
Changes by Richard Oudkerk shibt...@gmail.com:
--
resolution:  -> fixed
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue19599
Richard Oudkerk added the comment:
It would be nice to try this on another Vista machine - the WinXP, Win7,
Windows Server 2003 and Windows Server 2008 buildbots don't seem to show this
failure.
It looks as though the TimerOrWaitFired argument passed to the callback
registered
Richard Oudkerk added the comment:
Could you try this patch?
--
keywords: +patch
Added file: http://bugs.python.org/file32822/wait-for-handle.patch
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue19740
Richard Oudkerk added the comment:
> Possibly related: ...

That looks unrelated since it does not involve wait_for_handle().
Unfortunately test_utils.run_briefly() offers few guarantees when using the
IOCP event loop.
--
___
Python tracker rep
Richard Oudkerk added the comment:
> I've always had an implicit understanding that calls with timeouts may,
> for whatever reason, return sooner than requested (or later!), and the
> most careful approach is to re-check the clock again.

I've always had the implicit understanding that if I use
Richard Oudkerk added the comment:
From what I remember a proxy method will be thread/process-safe if the
referent's corresponding method is thread safe.
It should certainly be documented that the exposed methods of a proxied object
should be thread-safe
Richard Oudkerk added the comment:
I guess this is a case where we should not be trying to import the main module.
The code for determining the path of the main module (if any) is rather crufty.
What is sys.modules['__main__'] and sys.modules['__main__'].__file__ if you run
under nose
Richard Oudkerk added the comment:
So there are really two situations:
1) The __main__ module *should not* be imported. This is the case if you use
__main__.py in a package or if you use nose to call test_main().
This should really be detected in get_preparation_data() in the parent process
Richard Oudkerk added the comment:
> I appear to be somehow getting child processes where __main__.__file__ is
> set, but __main__.__spec__ is not.

That seems to be true for the __main__ module even when multiprocessing is not
involved. Running a file /tmp/foo.py containing

    import sys
Richard Oudkerk added the comment:
Thanks for your hard work Nick!
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue19946
Richard Oudkerk added the comment:
On 19/12/2013 10:00 pm, Nick Coghlan wrote:
> I think that needs to be fixed on the multiprocessing side rather than just
> in the tests - we shouldn't create a concrete context for a start method
> that isn't going to work on that platform. Finding that kind