Adam Olsen [EMAIL PROTECTED] added the comment:
Works for me.
--
nosy: +Rhamphoryncus
Python tracker [EMAIL PROTECTED]
http://bugs.python.org/issue3154
Adam Olsen [EMAIL PROTECTED] added the comment:
That's the same version I'm using. Maybe there's some font size differences?
I'm also on a 64-bit AMD.
Python tracker [EMAIL PROTECTED]
http://bugs.python.org/issue3154
Adam Olsen [EMAIL PROTECTED] added the comment:
I don't see a problem with skipping it, but if chroot is the problem,
maybe the chroot environment should be fixed to include /dev/shm?
Python tracker [EMAIL PROTECTED]
http://bugs.python.org/issue3111
Adam Olsen [EMAIL PROTECTED] added the comment:
I agree with your agreement.
Python tracker [EMAIL PROTECTED]
http://bugs.python.org/issue3111
Changes by Adam Olsen [EMAIL PROTECTED]:
--
nosy: +Rhamphoryncus, jnoller
Python tracker [EMAIL PROTECTED]
http://bugs.python.org/issue3125
Adam Olsen [EMAIL PROTECTED] added the comment:
Jesse, can you be more specific?
Thomas, do you have a specific command to reproduce this? It runs fine
if I do ./python -m test.regrtest -v test_multiprocessing test_ctypes.
That's with amaury's patch from 3100 applied
Adam Olsen [EMAIL PROTECTED] added the comment:
I see no common symbols between #3102 and #3092, so unless I missed
something, they shouldn't be involved.
I second the notion that multiprocessing's use of pickle is the
triggering factor. Registering so many types is ugly, and IMO it
shouldn't
Adam Olsen [EMAIL PROTECTED] added the comment:
Unfortunately, Py_INCREF is sometimes used in an expression (followed by
a comma). I wouldn't expect an assert to be valid there (and I'd want
to check ISO C to make sure it's portable, not just accepted by GCC).
I'd like if Py_INCREF and friends
Changes by Adam Olsen [EMAIL PROTECTED]:
--
nosy: +Rhamphoryncus
Python tracker [EMAIL PROTECTED]
http://bugs.python.org/issue3107
Adam Olsen [EMAIL PROTECTED] added the comment:
Looking good.
Python tracker [EMAIL PROTECTED]
http://bugs.python.org/issue3114
Adam Olsen [EMAIL PROTECTED] added the comment:
This is messy. File descriptors from other threads are leaking into
child processes, and if the write end of a pipe never gets closed in all
of them the read end won't get EOF.
I suspect cat's stdin is getting duplicated like that, but I haven't
Changes by Adam Olsen [EMAIL PROTECTED]:
--
nosy: +Rhamphoryncus
Python tracker [EMAIL PROTECTED]
http://bugs.python.org/issue3088
Adam Olsen [EMAIL PROTECTED] added the comment:
I'm not sure that fix is 100% right - it fixes safety, but not
correctness. Wouldn't it be more correct to move all 3 into
temporaries, assign from tstate, then XDECREF the temporaries?
Otherwise you're going to expose just the value or traceback
Adam Olsen [EMAIL PROTECTED] added the comment:
Patch to add extra sanity checks to Py_INCREF (only if Py_DEBUG is set).
If the refcount is 0 or negative, it calls Py_FatalError. This should
catch revival bugs such as this one a little more clearly.
The patch also adds a little more checking
Adam Olsen [EMAIL PROTECTED] added the comment:
Aww, that's cheating. (Why didn't I think of that?)
Python tracker [EMAIL PROTECTED]
http://bugs.python.org/issue3095
Changes by Adam Olsen [EMAIL PROTECTED]:
--
nosy: +jnoller
Python tracker [EMAIL PROTECTED]
http://bugs.python.org/issue3100
Adam Olsen [EMAIL PROTECTED] added the comment:
Well, my attempt at a patch didn't work, and yours does, so I guess I
have to support yours. ;)
Can you review my python-incref-from-zero patch? It verifies the
invariant that you need, that once an object hits a refcount of 0 it
won't get raised
Adam Olsen [EMAIL PROTECTED] added the comment:
Ahh, it seems gcmodule already considers the weakref to be reachable
when it calls the callbacks, so it shouldn't be a problem.
Python tracker [EMAIL PROTECTED]
http://bugs.python.org/issue3100
Adam Olsen [EMAIL PROTECTED] added the comment:
Another minor nit: if(current->ob_refcnt <= 0) should have a space
after the if. Otherwise it's looking good.
Python tracker [EMAIL PROTECTED]
http://bugs.python.org/issue3100
New submission from Adam Olsen [EMAIL PROTECTED]:
All these in multiprocessing.h are lacking suitable py/_py/Py/_Py/PY/_PY
prefixes:
PyObject *mp_SetError(PyObject *Type, int num);
extern PyObject *pickle_dumps;
extern PyObject *pickle_loads;
extern PyObject *pickle_protocol;
extern PyObject
New submission from Adam Olsen [EMAIL PROTECTED]:
multiprocessing.c currently has code like this:
temp = PyDict_New();
if (!temp)
return;
if (PyModule_AddObject(module, "flags", temp) < 0)
return;
PyModule_AddObject consumes the reference
Adam Olsen [EMAIL PROTECTED] added the comment:
The directory is irrelevant. C typically uses a flat namespace for
symbols. If python loads this library it will conflict with any other
libraries using the same name. This has happened numerous times in the
past, so there's no questioning
Adam Olsen [EMAIL PROTECTED] added the comment:
This doesn't look right. PyDict_SetItemString doesn't steal the
references passed to it, so your reference to flags will be leaked each
time. Besides, I think it's a little cleaner to INCREF it before calling
PyModule_AddObject, then DECREF
New submission from Adam Olsen [EMAIL PROTECTED]:
$ ./python
Python 2.6a3+ (unknown, Jun 12 2008, 20:10:55)
[GCC 4.2.3 (Debian 4.2.3-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import multiprocessing.reduction
[55604 refs]
[55604 refs]
Segmentation fault
Adam Olsen [EMAIL PROTECTED] added the comment:
op is a KeyedRef instance. The instance being cleared from the module
is the multiprocessing.util._afterfork_registry.
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0xb7d626b0 (LWP 2287)]
0x0809a131
Adam Olsen [EMAIL PROTECTED] added the comment:
More specific test case.
--
title: segfault after loading multiprocessing.reduction -> segfault from
multiprocessing.util.register_after_fork
Added file: http://bugs.python.org/file10610/register_after_fork-crash.py
Adam Olsen [EMAIL PROTECTED] added the comment:
Very specific test case, eliminating multiprocessing entirely. It may
be an interaction between having the watched obj as its own key in the
WeakValueDictionary and the order in which the two modules are cleared.
Added file: http
Changes by Adam Olsen [EMAIL PROTECTED]:
Added file: http://bugs.python.org/file10612/inner.py
Python tracker [EMAIL PROTECTED]
http://bugs.python.org/issue3100
Changes by Adam Olsen [EMAIL PROTECTED]:
Removed file: http://bugs.python.org/file10610/register_after_fork-crash.py
Python tracker [EMAIL PROTECTED]
http://bugs.python.org/issue3100
Changes by Adam Olsen [EMAIL PROTECTED]:
--
title: segfault from multiprocessing.util.register_after_fork -> segfault with
WeakValueDictionary and module clearing
Python tracker [EMAIL PROTECTED]
http://bugs.python.org/issue3100
Adam Olsen [EMAIL PROTECTED] added the comment:
Specific enough yet? Seems the WeakValueDictionary and the module
clearing aren't necessary.
A subclass of weakref is created. The target of this weakref is added
as an attribute of the weakref. So long as a callback is present
Changes by Adam Olsen [EMAIL PROTECTED]:
Removed file: http://bugs.python.org/file10612/inner.py
Python tracker [EMAIL PROTECTED]
http://bugs.python.org/issue3100
Changes by Adam Olsen [EMAIL PROTECTED]:
Removed file: http://bugs.python.org/file10611/outer.py
Python tracker [EMAIL PROTECTED]
http://bugs.python.org/issue3100
Adam Olsen [EMAIL PROTECTED] added the comment:
1. MyRef is released from the module as part of shutdown
2. MyRef's subtype_dealloc DECREFs its dictptr (not clearing it, as
MyRef is dead and should be unreachable)
3. the dict DECREFs the Dummy (MyRef's target)
4. Dummy's subtype_dealloc calls
Adam Olsen [EMAIL PROTECTED] added the comment:
Ahh, I missed a detail: when the callback is called the weakref has a
refcount of 0, which is INCREFed to 1 when it gets put in the args, then
drops back down to 0 when the args are DECREFed (causing
_Py_ForgetReference to be called
Adam Olsen [EMAIL PROTECTED] added the comment:
Updated version of roudkerk's patch. Adds the new function to
pythread.h and is based off of current trunk.
Note that Parser/intrcheck.c isn't used on my box, so it's completely
untested.
roudkerk's original analysis is correct. The TLS
Adam Olsen [EMAIL PROTECTED] added the comment:
Incidentally, it doesn't seem necessary to reinitialize the lock. Posix
duplicates the lock, so if you hold it when you fork your child will be
able to unlock it and use it as normal. Maybe there's some non-Posix
behaviour or something even more
Changes by Adam Olsen [EMAIL PROTECTED]:
--
nosy: +Rhamphoryncus
Python tracker [EMAIL PROTECTED]
http://bugs.python.org/issue2320
Adam Olsen [EMAIL PROTECTED] added the comment:
I agree, the argument for a syntax error is weak. It's more instinct
than anything else. I don't think I'd be able to convince you unless
Guido had the same instinct I do. ;)
Adam Olsen [EMAIL PROTECTED] added the comment:
PEP 3134's implicit exception chaining (if accepted) would require your
semantic, and your semantic is simpler anyway (even if the
implementation is non-trivial), so consider my objections to be dropped.
PEP 3134 also proposes implicit chaining
Adam Olsen [EMAIL PROTECTED] added the comment:
PEP 3134 gives reason to change it. __context__ should be set from
whatever exception is active from the try/finally, thus it should be
the inner block, not the outer except block.
This flipping of behaviour, and the general ambiguity, is why I
Adam Olsen [EMAIL PROTECTED] added the comment:
The inplace operators aren't right for weakref proxies. If a new object
is returned there likely won't be another reference to it and the
weakref will promptly be cleared.
This could be fixed with another property like _target, which by default
Changes by Adam Olsen [EMAIL PROTECTED]:
--
nosy: +Rhamphoryncus
Python tracker [EMAIL PROTECTED]
http://bugs.python.org/issue3042
Adam Olsen [EMAIL PROTECTED] added the comment:
Does the PythonInterpreter option create multiple interpreters within a
single process, rather than spawning separate processes?
IMO, that API should be ripped out. They aren't truly isolated
interpreters and nobody I've asked has yet provided
Adam Olsen [EMAIL PROTECTED] added the comment:
Right, so it's only the python modules loaded as part of the app that
need to be isolated. You don't need the stdlib or any other part of the
interpreter to be isolated.
This could be done either by not using the normal import mechanism
(build
Changes by Adam Olsen [EMAIL PROTECTED]:
--
nosy: +Rhamphoryncus
Python tracker [EMAIL PROTECTED]
http://bugs.python.org/issue3021
Changes by Adam Olsen [EMAIL PROTECTED]:
--
nosy: +Rhamphoryncus
Python tracker [EMAIL PROTECTED]
http://bugs.python.org/issue2507
Changes by Adam Olsen [EMAIL PROTECTED]:
--
nosy: +Rhamphoryncus
Python tracker [EMAIL PROTECTED]
http://bugs.python.org/issue3001
Adam Olsen [EMAIL PROTECTED] added the comment:
Surely remote proxies fall under what would be expected for a proxy
mixin? If it's in the stdlib it should be a canonical implementation,
NOT a reference implementation.
At the moment I can think up 3 use cases:
* weakref proxies
* lazy load
Adam Olsen [EMAIL PROTECTED] added the comment:
_deref won't work for remote objects, will it? Nor _unwrap, although
that starts to get fun.
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue643841
Adam Olsen [EMAIL PROTECTED] added the comment:
If it's so specialized then I'm not sure it should be in the stdlib -
maybe as a private API, if there was a user.
Having a reference implementation is noble, but this isn't the right way
to do it. Maybe as an example in Doc or in the cookbook
New submission from Adam Olsen [EMAIL PROTECTED]:
Patch allows any iterable (such as set and frozenset) to be used for
__all__.
I also add some blank lines, making it more readable.
--
files: python-importall.diff
keywords: patch
messages: 67104
nosy: Rhamphoryncus
severity: normal
Adam Olsen [EMAIL PROTECTED] added the comment:
tuples are already allowed for __all__, which breaks attempts to
monkey-patch it.
I did forget to check the return from PyObject_GetIter.
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue2928
Adam Olsen [EMAIL PROTECTED] added the comment:
The patch for issue 1856 should fix the potential crash, so we could
eliminate that scary blurb from the docs.
--
nosy: +Rhamphoryncus
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1720705
Changes by Adam Olsen [EMAIL PROTECTED]:
--
nosy: +Rhamphoryncus
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue689895
Adam Olsen [EMAIL PROTECTED] added the comment:
Revised again. sets are only hashed after PyObject_Hash raises a TypeError.
This also fixes a regression in test_subclass_with_custom_hash. Oddly,
it doesn't show up in trunk, but does when my previous patch is applied
to py3k.
Added file: http
New submission from Adam Olsen [EMAIL PROTECTED]:
sets are based on dicts' code, so they have the same problem as bug
1517. Patch attached.
--
files: python-lookkeycompare.diff
keywords: patch
messages: 66829
nosy: Rhamphoryncus
severity: normal
status: open
title: lookkey should
Adam Olsen [EMAIL PROTECTED] added the comment:
There is no temporary hashability. The hash value is calculated, but
never stored in the set's hash field, so it will never become out of
sync. Modification while __hash__ or __eq__ is running is possible, but
for __eq__ that applies to any
Adam Olsen [EMAIL PROTECTED] added the comment:
Here's another approach to avoiding set_swap_bodies. The existing
semantics are retained. Rather than creating a temporary frozenset and
swapping the contents, I check for a set and call the internal hash
function directly (bypassing
Adam Olsen [EMAIL PROTECTED] added the comment:
new_buffersize returns a size_t. You should use SIZE_MAX instead
(although I don't see it used elsewhere in CPython, so maybe there are
portability problems.)
The call to _PyString_Resize implicitly casts the size_t to Py_ssize_t.
The check
Adam Olsen [EMAIL PROTECTED] added the comment:
The indentation still needs tweaking. You have only one tab where you
should have two, and one line uses a mix of tabs and spaces.
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1174606
Adam Olsen [EMAIL PROTECTED] added the comment:
Nevermind that the current implementation *is* broken, even if you
consider fixing it to be a low priority. Closing the report with a doc
tweak isn't right.
Tracker [EMAIL PROTECTED]
http://bugs.python.org
Adam Olsen [EMAIL PROTECTED] added the comment:
So why doesn't set() in {} work? Why was PEP 351 rejected when it would
do this properly?
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue2778
Changes by Adam Olsen [EMAIL PROTECTED]:
--
nosy: +Rhamphoryncus
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1348
Adam Olsen [EMAIL PROTECTED] added the comment:
Cleaned up version of Amaury's patch. I stop releasing the GIL after
sys.exitfunc is called, which protects threads from the ensuing teardown.
I also grab the import lock (and never release it). This should prevent
the nasty issue with daemon
Adam Olsen [EMAIL PROTECTED] added the comment:
The intended use is unsafe. contains, remove, and discard all use it
for a lookup, which can't be fixed.
Upon further inspection, intersection_update is fine. Only a temporary
set (not frozenset!) is given junk, which I don't see as a problem
New submission from Adam Olsen [EMAIL PROTECTED]:
In 3.0, unittest's output has become line buffered. Instead of printing
the test name when it starts a test, then "ok" when it finishes, the
test name is delayed until the "ok" is printed. This makes it
unnecessarily hard to determine which test
Adam Olsen [EMAIL PROTECTED] added the comment:
Hrm, this behaviour exists in trunk as well. I must be confused about
the cause (but the patch still fixes it.)
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue2787
Adam Olsen [EMAIL PROTECTED] added the comment:
I decided not to wait. Here's a patch.
Several of set's unit tests covered the auto-conversion, so I've
modified them.
--
keywords: +patch
Added file: http://bugs.python.org/file10217/python-setswap.diff
Adam Olsen [EMAIL PROTECTED] added the comment:
PEP 218 explicitly dropped auto-conversion as a feature. Why should
this be an exception?
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue2778
New submission from Adam Olsen [EMAIL PROTECTED]:
set_swap_bodies() is used to cheaply create a frozenset from a set,
which is then used for lookups within a set. It does this by creating a
temporary empty frozenset, swapping its contents with the original set,
doing the lookup using
Adam Olsen [EMAIL PROTECTED] added the comment:
This bug was introduced by r53249, which was fixing bug #1566280.
Fixed by moving the WaitForThreadShutdown call into Py_Finalize, so all
shutdown paths use it. I also tweaked the name to follow local helper
function conventions.
Martin, since
Adam Olsen [EMAIL PROTECTED] added the comment:
Oh, and the patch includes a testcase. The current test_threading.py
doesn't work with older versions, but a freestanding version of this
testcase passes in 2.1 to 2.4, fails in 2.5 and trunk, and passes with
the patch
Adam Olsen [EMAIL PROTECTED] added the comment:
The original bug is not whether or not python reuses int objects, but
rather that an existing optimization disappears under certain
circumstances. Something is breaking our optimization.
The later cases where the optimization is simply gone
Adam Olsen [EMAIL PROTECTED] added the comment:
Unless someone has a legitimate use case for disabling small_int that
doesn't involve debugging (which I really doubt), I'd just assume it's
always in use.
Tracker [EMAIL PROTECTED]
http://bugs.python.org
Adam Olsen added the comment:
Py_Main calls WaitForThreadShutdown before calling Py_Finalize, which
should wait for all these threads to finish shutting down before it
starts wiping their globals.
However, if SystemExit is raised (such as via sys.exit()), Py_Exit is
called, and it directly
Adam Olsen added the comment:
To put it another way: SystemExit turns non-daemon threads into daemon
threads. This is clearly wrong. Brent, could you reopen the bug?
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1722344
Adam Olsen added the comment:
I disagree. sys.exit() attempts to gracefully shutdown the interpreter,
invoking try/finally blocks and the like. If you want to truly force
shutdown you should use os.abort() or os._exit().
Note that, as python doesn't call a main function, you have to use
Adam Olsen added the comment:
Is there a guarantee that the with-statement is safe in the face of
KeyboardInterrupt? PEP 343 seems to imply it's not, using it as a
reason for why we need no special handling if __exit__ fails.
--
nosy: +Rhamphoryncus
Adam Olsen added the comment:
Yes, but there's no guarantee it will even reach the C function.
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1941
Changes by Adam Olsen:
--
nosy: +Rhamphoryncus
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1303614
Adam Olsen added the comment:
Is the bug avoided if you import threading first and use it instead of
thread? I'd like to see thread removed in 3.0 (renamed to _thread or
the like.)
--
nosy: +Rhamphoryncus
Tracker [EMAIL PROTECTED]
Adam Olsen added the comment:
Hrm. It seems you're right. Python needs thread-local data to
determine if the GIL is held by the current thread. Thus, autoTLSkey
and all that need to never be torn down. (The check could be done much
more directly than the current PyThreadState_IsCurrent
Adam Olsen added the comment:
PyGILState_Ensure WOULD block forever if it acquired the GIL before
doing anything else.
The only way to make Py_Initialize callable after Py_Finalize is to make
various bits of the finalization into no-ops. For instance, it's
currently impossible to unload C
Adam Olsen added the comment:
> Adam, did you notice the change on revision 59231 ? the
> PyGILState_Ensure stuff should now remain valid during the
> PyInterpreterState_Clear() call.
That doesn't matter. PyGILState_Ensure needs to remain valid *forever*.
Only once the process is completely gone
Adam Olsen added the comment:
I'm not sure I understand you, Gregory. Are arguing in favour of adding
extra logic to the GIL code, or against it?
I'm attaching a patch that has non-main thread exit, and it seems to fix
the test case. It doesn't fix the PyGILState_Ensure problems though.
Also
Adam Olsen added the comment:
In essence, it's a weakness of the POSIX API that it doesn't distinguish
synchronous from asynchronous signals.
The consequences of either approach seem minor though. I cannot imagine
a sane use case for catching SIGSEGV, but documentation changes should
Adam Olsen added the comment:
The warning in the documentation should be strengthened. Python simply
does not and cannot support synchronously-generated signals.
It is possible to send a normally synchronous signal asynchronously,
such as the os.kill() Ralf mentioned, so it's theoretically
Changes by Adam Olsen:
--
nosy: +rhamphoryncus
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1683
Changes by Adam Olsen:
--
nosy: +rhamphoryncus
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1676
Adam Olsen added the comment:
You have:
#define Py_NAN Py_HUGE_VAL * 0
I think this would be safer as:
#define Py_NAN (Py_HUGE_VAL * 0)
For instance, in code that may do a / Py_NAN.
Those manual string copies (*cp++ = 'n';) are ugly. Can't you use
strcpy() instead?
--
nosy
Adam Olsen added the comment:
Minor typo. Should be IEEE:
Return the sign of an int, long or float. On platforms with full IEE
754\n\
--
nosy: +rhamphoryncus
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1640
Adam Olsen added the comment:
The python API has the advantage that you can test for it at runtime,
avoiding a compile-time check. I don't know if this is significant though.
I don't see the big deal about a C API. All you need to do is call
PyImport_ImportModule(signal
Adam Olsen added the comment:
mwh, my threading patch is extensive enough and has enough overlap that
I'm not intimidated by fixing this. It's low on my list of priorities
though.
So far my tendency is to rip out multiple interpreters, as I haven't
seen what it wants to accomplish. It's
Adam Olsen added the comment:
Thanks georg.
Added file: http://bugs.python.org/file8925/python2.6-set_wakeup_fd3.diff
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1583
Index: Doc/library/signal.rst
New submission from Adam Olsen:
This adds signal.set_wakeup_fd(fd), which allows a single library to be
woken when a signal comes in.
--
files: python2.6-set_wakeup_fd1.diff
messages: 58385
nosy: rhamphoryncus
severity: normal
status: open
title: Patch for signal.set_wakeup_fd
Added
Adam Olsen added the comment:
Guido, it looks like I can't alter the Assigned To field. You get the
Nosy List instead. ;)
--
nosy: +gvanrossum
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1583
Adam Olsen added the comment:
version 2, adds to Doc/library/signal.rst. It also tweaks the
set_wakeup_fd's docstring.
I haven't verified that my formatting in signal.rst is correct.
Specifically, the '\0' should be checked.
Added file: http://bugs.python.org/file8916/python2.6
Adam Olsen added the comment:
The minimal patch doesn't initialize dummy_char or dummy_c. It's
harmless here, but please fix it. ;)
sizeof(dummy_char) will always be 1 (C defines sizeof in multiples of
char.) The convention seems to be hardcoding 1 instead
New submission from Adam Olsen:
(thanks go to my partner in crime, jorendorff, for helping flesh this out.)
lookdict calls PyObject_RichCompareBool without using INCREF/DECREF on
the key passed. It's possible for the comparison to delete the key from
the dict, causing its own argument