Armin Rigo added the comment:
That's why I don't really know which concept is the best: the core of
transactionmodule.c is just about one page in length, so there are only so many
ways to split this code between CPython and the module...
Attached the latest suggestion. I am al
Armin Rigo added the comment:
Antoine: we could take two lines from the current implementation of these hooks
from stm/transactionmodule.c, and move them to the interpreter core. CPython
would end up containing the core logic for transactions. A possible API
would look like this
Armin Rigo added the comment:
Ok, I followed Nick's suggestion, and I finally found out how to write the code
in order to avoid all (or most?) deadlocks without any change in the rest of
CPython. It requires a way to be sure that some callback function is invoked
_at the next cross-byt
Armin Rigo added the comment:
I suppose I'm fine either way, but do you have a reason for not exposing the
variables to the linker? Some Windows-ism where such exposed variables are
slower to access than static ones, maybe? The point is that they are kind of
"internal use only"
Armin Rigo added the comment:
NB. I know that my stmmodule.c contains a gcc-ism: it uses a __thread global
variable. I plan to fix this in future versions :-)
--
___
Python tracker
<http://bugs.python.org/issue12
New submission from Armin Rigo :
Here is (attached) a minimal patch to the core trunk CPython to allow extension
modules to take over control of acquiring and releasing the GIL, as proposed
here:
http://mail.python.org/pipermail/python-dev/2011-August/113248.html
With this patch, it is
Armin Rigo added the comment:
FWIW, this case is tested in PyPy: http://paste.pocoo.org/show/397732/
--
nosy: +arigo
___
Python tracker
<http://bugs.python.org/issue9
Armin Rigo added the comment:
Hi :-) I did not report the two issues I found so far because I didn't finish
the PyPy implementation of CJK yet, and I'm very new to anything related to
codecs; additionally I didn't check Python 3.x, as I was just following the 2.7
source
Changes by Armin Rigo :
--
nosy: -arigo
___
Python tracker
<http://bugs.python.org/issue10399>
___
___
Python-bugs-list mailing list
Unsubscribe:
Changes by Armin Rigo :
--
nosy: -arigo
___
Python tracker
<http://bugs.python.org/issue11477>
___
Armin Rigo added the comment:
Nick: we get a TypeError anyway if we do unsupported things like "lst += None".
It seems to me that you are confusing levels, unless you can point out a
specific place in the documentation that would say "never return NotImplemented
from __iad
Armin Rigo added the comment:
Note that I "fixed" one case in PyPy: if the class C has no __iter__() but only
__radd__(), and we call "somelist += C()". This was done simply by having
somelist.__iadd__(x) return NotImplemented in case x is not iterable, instead
of propa
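The fallback described above is the standard NotImplemented protocol. A minimal sketch with hypothetical classes (not the C-level list case from the report, which goes through slot dispatch):

```python
class Accumulator:
    """Hypothetical class whose __iadd__ only accepts ints."""
    def __init__(self):
        self.value = 0

    def __iadd__(self, other):
        if not isinstance(other, int):
            # Declining lets Python try the other operand's __radd__.
            return NotImplemented
        self.value += other
        return self

class Radd:
    """Hypothetical class that only defines __radd__."""
    def __radd__(self, other):
        return "radd called"

a = Accumulator()
a += 3                      # handled by Accumulator.__iadd__
assert a.value == 3
a += Radd()                 # __iadd__ declines -> Radd.__radd__ runs
assert a == "radd called"   # note: 'a' was rebound by the fallback
```

Returning NotImplemented instead of raising TypeError is what gives the other operand a chance to handle the operation.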
Changes by Armin Rigo :
--
nosy: -arigo
___
Python tracker
<http://bugs.python.org/issue11339>
___
Armin Rigo added the comment:
Eric: that's wrong, it is a magic method. See for example "__oct__" in
Objects/typeobject.c. I'm not sure I understand why you would point this out,
though. A "SystemError: bad argument to internal function" or an "Asse
New submission from Armin Rigo :
The expression '%o' % x, where x is a user-defined instance, usually ignores
a user-defined __oct__() method. I suppose that's fine; assuming this is the
expected behavior, then the present issue is about the "usually" in my previous
New submission from Armin Rigo :
On 32 bits, there is no reason to get a 'long' here:
>>> int(float(sys.maxint))
2147483647L
>>> int(int(float(sys.maxint)))
2147483647
>>> int(float(-sys.maxint-1))
-2147483648L
>>> int(int(float(-sys.maxint-1)
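The session above is Python 2, with its separate int/long types. On Python 3 that distinction is gone, but the float round-trip itself can still be checked; 2**31 - 1 stands in for the old 32-bit sys.maxint and is exactly representable in a double:

```python
maxint32 = 2**31 - 1        # stand-in for the old 32-bit sys.maxint

# Values below 2**53 survive the int -> float -> int round-trip
# exactly, so no widening to a larger representation is needed here.
assert float(maxint32) == maxint32
assert int(float(maxint32)) == maxint32
assert int(float(-maxint32 - 1)) == -maxint32 - 1
```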
New submission from Armin Rigo :
Should I report it here? The issue is with this issue tracker itself. If we
search for "2147483647" using the "search" box on this site (look up to the
right), then it works; but if we search for "2147483648" or a bigger number
Armin Rigo added the comment:
Martin: I kind of agree with you, although I guess that for practical reasons if
you don't have a reasonable sys.getsizeof() implementation then it's better to
raise TypeError than return 0 (like CPython, which may raise "TypeError: Type
%.100s
Armin Rigo added the comment:
> The expectation is that it returns the memory footprint of the given
> object, and only it (not taking into account sharing, caching,
> dependencies or anything else).
It would be nice if this were a well-defined notion, but unfortunately it is
Armin Rigo added the comment:
Not for me (the last example I posted, on 2.7 head). But I will not fight for
this.
--
___
Python tracker
<http://bugs.python.org/issue1170
Armin Rigo added the comment:
Indeed.
--
resolution: -> duplicate
status: open -> closed
___
Python tracker
<http://bugs.python.org/issue10638>
___
___
New submission from Armin Rigo :
There is an issue in PyArg_ParseTuple() when using nested tuple arguments: it
accepts a pure Python tuple-like argument, but it cannot work properly because
PyArg_ParseTuple() is supposed to return borrowed references to the objects.
For example, here is an
Armin Rigo added the comment:
> But this seems to me like a contrived example: how often in real
> code do people pass around these builtins, rather than calling
> them directly?
From experience developing PyPy, every argument that goes "this theoretically
breaks o
New submission from Armin Rigo :
Probably a typo in setobject.c.
The patch attached here does not really change anything but fixes the typo,
leading to slightly clearer code and avoiding one level of recursion. All
tests still pass.
--
components: Interpreter Core
files: diff1
Armin Rigo added the comment:
I propose that we first attempt to fix the crasher; depending on the solution
we might then either fix the doc or the code for _PyInstance_Lookup().
If no-one is willing to fix this bug I am fine to let it go. But somehow I am
sure that there is code *somewhere
New submission from Armin Rigo :
_PyInstance_Lookup() is documented as follows:
The point of this routine is that it never calls arbitrary Python
code, so is always "safe": all it does is dict lookups.
But of course dict lookups can call arbitrary Python code. This functi
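The point that dict lookups can call arbitrary Python code is easy to demonstrate from pure Python, with a hypothetical key class whose __eq__ has side effects:

```python
class Chatty:
    """Hypothetical key whose comparison runs arbitrary Python code."""
    calls = []

    def __hash__(self):
        return 1

    def __eq__(self, other):
        Chatty.calls.append(other)   # arbitrary code runs mid-lookup
        return False

class SameHash:
    def __hash__(self):
        return 1                     # collides with Chatty on purpose

d = {Chatty(): "value"}
d.get(SameHash())                    # hash collision -> Chatty.__eq__ runs
assert len(Chatty.calls) == 1
```

Any lookup that hits a hash collision ends up invoking the stored key's __eq__, which can be user-defined code.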
New submission from Armin Rigo :
The attached example shows a case where the '_sre' module goes into an
instantaneous infinite memory leak. The bug (and probably the fix too) is
related to empty matches in the MIN_UNTIL operator ("+?", "*?"). It looks very
simi
Armin Rigo added the comment:
Added the two tests in Lib/test/leakers as r45389 (in 2006) and r84296 (now).
--
___
Python tracker
<http://bugs.python.org/issue1469
Armin Rigo added the comment:
All the missing type slots I reported can cause incorrect behavior very similar
to the one reported originally. For example (in Python 2.7):
class I(int): pass
i = I(123)
hex(i) => '0x7b'
hex(weakref.proxy(i)) =>
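The int-subclass half of that example still runs unchanged on Python 3; routing the same value through weakref.proxy was what exercised the missing type slots:

```python
class I(int):
    pass

i = I(123)
assert hex(i) == '0x7b'    # the subclass converts like a plain int
```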
Armin Rigo added the comment:
It's pretty trivial to turn my x.py into a unit test, of course.
--
___
Python tracker
<http://bugs.python.org/issue9134>
___
___
New submission from Armin Rigo :
The re module is buggy in rare cases; see attached example script.
The bug is caused by the macros LASTMARK_SAVE and LASTMARK_RESTORE which are
sometimes used without the extra code that does if (state->repeat)
{mark_save()/mark_restore()}.
The bug appears
New submission from Armin Rigo :
PyWeakref_GetObject(wref) returns a borrowed reference, but that's rather
dangerous. The fact that wref stays alive does not prevent the returned object
from being collected (by definition -- wref is a weak reference). That means
that either we
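The C-API hazard has a direct pure-Python analogue: holding the weak reference does not keep the referent alive, so anything "borrowed" through it can disappear at any moment:

```python
import gc
import weakref

class Target:
    pass

obj = Target()
wref = weakref.ref(obj)
assert wref() is obj       # referent still alive

del obj                    # drop the only strong reference
gc.collect()               # explicit, for non-refcounting collectors
assert wref() is None      # referent gone; the wref itself survives
```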
New submission from Armin Rigo :
Lambdas are a bit confused about their docstrings. Usually they don't have
any, but:
>>> (lambda x: "foo"+x).func_doc
'foo'
--
keywords: easy
messages: 101233
nosy: arigo
priority: low
severity: norma
New submission from Armin Rigo :
The __str__ method of some exception classes reads attributes without
typechecking them. Alternatively, the issue could be that the user is
allowed to set the value of these attributes directly, without
typecheck. The typechecking is only done when we create
Changes by Armin Rigo :
--
nosy: -arigo
___
Python tracker
<http://bugs.python.org/issue2459>
___
Armin Rigo added the comment:
...which does not really solve anything, as "if 0: yield" at
module-level is now just ignored instead of raising a SyntaxError (e.g.
like "if False: yield").
___
Python tracker
<http://bu
Armin Rigo added the comment:
Here is a summarizing implementation that accepts this interface:
if check_impl_detail():               # only on CPython (default)
if check_impl_detail(jython=True):    # only on Jython
if check_impl_detail(cpython=False):  # everywhere except on
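A minimal, hypothetical sketch of such a helper (a helper with this name later landed in CPython's test.support; the defaulting rules below are assumptions, not its exact code):

```python
import sys

def check_impl_detail(**guards):
    """Return True if the running interpreter matches the guards.

    With no guards, default to cpython=True.  An implementation not
    named in the guards defaults to True only when every stated guard
    is False (so cpython=False means "everywhere except CPython").
    """
    if not guards:
        guards = {"cpython": True}
    name = sys.implementation.name      # "cpython", "pypy", ...
    return guards.get(name, not any(guards.values()))

assert check_impl_detail() == (sys.implementation.name == "cpython")
assert check_impl_detail(cpython=False) == (sys.implementation.name != "cpython")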
Changes by Armin Rigo :
--
nosy: -arigo
___
Python tracker
<http://bugs.python.org/issue4753>
___
Armin Rigo <[EMAIL PROTECTED]> added the comment:
Attached struct-2.5-fix.diff. The tests still pass (both 32- and 64-bits).
--
keywords: +patch
Added file: http://bugs.python.org/file12326/struct-2.5-fix.diff
___
Python tracker <[EMAIL
Armin Rigo <[EMAIL PROTECTED]> added the comment:
FWIW, struct.pack("I", "whatever") produces "\x00\x00\x00\x00" too.
___
Python tracker <[EMAIL PROTEC
Armin Rigo <[EMAIL PROTECTED]> added the comment:
Brett: in my experience the granularity is usually fine, and not coarse.
A class decorator doesn't look too useful. A function decorator is
useful, but not enough. We also need a flag that can be checked in the
middle of a larger
New submission from Armin Rigo <[EMAIL PROTECTED]>:
This patch contains a first step towards classifying CPython tests into
language tests and implementation-details tests. The patch is against
the 2.7 trunk's test_descr.py. It is derived from a similar patch that
we wrote fo
New submission from Armin Rigo <[EMAIL PROTECTED]>:
The attached example works in the __add__ and __getattribute__ cases on
CPython, but fails in the __getattr__ case. All three cases work as the
semantics say they should on Jython, IronPython and PyPy. It's
admittedly an obscu
Armin Rigo <[EMAIL PROTECTED]> added the comment:
Ah, I should also mention that a fix of zipfile for 2.5 to no longer use
the deprecated feature (and thus no longer cause DeprecationWarnings)
also sounds like a good idea, in addition to the fix to the struct
New submission from Armin Rigo <[EMAIL PROTECTED]>:
struct.pack('L', -1) raises a DeprecationWarning since Python 2.5, as it
should. However, it also returns a different (and nonsensical) result
than Python <= 2.4 used to: it returns '\x00\x00\x00\x00' instead of
New submission from Armin Rigo <[EMAIL PROTECTED]>:
pdb in post-mortem mode is not able to walk the stack through frames
that belong to generators. The "up" command fails with the message
"Oldest frame", making it impossible to inspect the caller (or even know
Armin Rigo <[EMAIL PROTECTED]> added the comment:
Maybe the file 'next-nevernull.patch' is not complete? I don't see any
change in the definition of PyIter_Check().
___
Python tracker <[EMAIL PROTECTED]>
<ht
Armin Rigo <[EMAIL PROTECTED]> added the comment:
Hacking at typeobject.c should always be done extremely carefully. I
don't have time to review this patch as thoroughly as I think it needs
to be. (A quick issue is that it seems to break PyIter_Check() which
will always return tr
Armin Rigo <[EMAIL PROTECTED]> added the comment:
The same approach can be used to segfault many more places. See
http://svn.python.org/projects/python/trunk/Lib/test/crashers/iter.py .
--
nosy: +arigo
___
Python tracker <[EMAIL PROTECTE
Armin Rigo <[EMAIL PROTECTED]> added the comment:
Maybe there is a better solution along the following line: conditionally
define Py_TPFLAGS_DEFAULT so that when compiling the Python core it
includes the Py_TPFLAGS_HAVE_VERSION_TAG, but when compiling extension
modules it does not. This
New submission from Armin Rigo <[EMAIL PROTECTED]>:
There is a bunch of obscure behavior caused by the use of
PyObject_GetAttr() to get special method from objects. This is wrong
because special methods should only be looked up in object types, not on
the objects themselves (i.e
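The type-level lookup rule described here is the behavior CPython converged on, and it is easy to check:

```python
class A:
    def __len__(self):
        return 1

a = A()
a.__len__ = (lambda: 99)   # instance attribute, not used by len()
assert len(a) == 1         # special method looked up on type(a) only
assert a.__len__() == 99   # plain attribute access still finds it
```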
Armin Rigo <[EMAIL PROTECTED]> added the comment:
> (Another note: the C-level errno and the TLS copy should also be
> synchronized when the C code invokes a Python callback.)
What I meant is what should occur when a pure Python function is used
as a callback. At this point ther
Armin Rigo <[EMAIL PROTECTED]> added the comment:
This was actually not a bug because the object being decref'ed
is guaranteed to be exactly a string or None, as told in the comment
about the 'name' field. So no user code could possibly run during
this Py_DECREF() call.
--
Armin Rigo <[EMAIL PROTECTED]> added the comment:
> However, even the TLS copy of errno may change because of this,
> if the finalizer of some object invokes ctypes, right?
Yes, it's annoying, but at least the Python programmer has a way to fix
this problem: he can save and res
Armin Rigo <[EMAIL PROTECTED]> added the comment:
I'm in favor of keeping set_errno(). Not only is it more like C, but
it's also future-proof against a user that will end up really needing
the feature. If we go for always setting errno to zero, we cannot
change that later a
Armin Rigo <[EMAIL PROTECTED]> added the comment:
Alternatively, we can try to make ctypes "feel like" C itself:
ctypes.set_errno(0)
while True:
    dirent = linux_c_lib.readdir(byref(dir))
    if not dirent:
        if ctypes.get_errno() == 0:
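This interface eventually shipped as ctypes.get_errno()/set_errno() (with use_errno=True on the loaded library for real foreign calls). The thread-local errno copy can be exercised without any C library at all:

```python
import ctypes

old = ctypes.set_errno(0)      # set_errno() returns the previous value
assert ctypes.get_errno() == 0
ctypes.set_errno(5)            # pretend a foreign call failed with errno 5
assert ctypes.get_errno() == 5
ctypes.set_errno(old)          # restore the saved value
```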
Changes by Armin Rigo <[EMAIL PROTECTED]>:
--
nosy: -arigo
_
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1481036>
_
___
Armin Rigo <[EMAIL PROTECTED]> added the comment:
This will break many existing applications, no? I can easily think of
examples of reasonable code that would no longer work as intended.
What's even worse, breakage might only show up in exceptional cases and
give obscure r
Armin Rigo <[EMAIL PROTECTED]> added the comment:
Can you see if this simpler patch also gives speed-ups?
(predict_loop.diff)
--
nosy: +arigo
Added file: http://bugs.python.org/file9877/predict_loop.diff
__
Tracker <[EMAIL PROTECTE
Armin Rigo <[EMAIL PROTECTED]> added the comment:
Thanks Antoine for the clarification.
The situation is that by now PyPy has found many many more bugs in
trying to use the compiler package to run the whole stdlib and
real-world applications. What I can propose is to extract what we ha
Armin Rigo <[EMAIL PROTECTED]> added the comment:
Sorry, I don't think many PyPy coders care about microoptimizations in
the bytecode. To be honest we've stopped worrying about the stdlib
compiler package because no one seemed to care. I'm closing the present
patch as in
Armin Rigo <[EMAIL PROTECTED]> added the comment:
Attached patch for python trunk with tests.
It makes all three method types use the identity of their 'self'.
--
keywords: +patch
Added file: http://bugs.python.org/file9666/met
Armin Rigo <[EMAIL PROTECTED]> added the comment:
My mistake, you are right. I forgot about one of the many types that
CPython uses internally for built-in methods. Indeed:
>>> [].index == [].index
False
>>> [].__add__ == [].__add__
True
I can fix th
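After the fix described here, all bound-method flavors compare by the identity of __self__ plus the underlying function, which is observable in current Python 3:

```python
x = []
assert x.index == x.index          # same self -> equal
assert x.__add__ == x.__add__      # slot wrapper: same rule now
assert [].index != [].index        # two distinct lists -> unequal
assert [].__add__ != [].__add__    # no longer the asymmetry shown above
```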
Armin Rigo added the comment:
I view this as a problem with Psyco, not with the user code.
An even deeper reason for which the general optimization would break
code is because it changes the lifetime of objects. For example, Psyco
contains specific, user-requested support to make sure the
Armin Rigo added the comment:
I suppose you are aware that performing this optimization in general
would break a lot of existing code that uses inspect.getstack() or
sys._getframe() to peek at the caller's local variables. I know this
because it's one thing that Psyco doesn't do
Armin Rigo added the comment:
We're finding such bugs because we are trying to reimplement ctypes in PyPy.
I guess your last question was "is it impossible to construct this 'bug'
without *reading* .contents?". The answer is that it doesn't change
much; you
New submission from Armin Rigo:
It's hard to tell for sure, given the lack of precise definition, but I
believe that the attached piece of code "should" work. What it does is
make p1 point to c_long(20). So ctypes should probably keep the
c_long(20) alive as long as p1 is
Armin Rigo added the comment:
Checked in as r60214.
--
resolution: -> accepted
status: open -> closed
_
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.o
Armin Rigo added the comment:
I don't see in general how the patch can be kept compatible with
extension modules that change the tp_dict of arbitrary types. All I can
think of is a way to be safe against extension modules that only change
the tp_dict of their own non-heap types (i.e.
New submission from Armin Rigo:
Can you guess why importing the attached x.py does nothing, without
printing "hello" at all?
The real issue shown in that example is that 'return' and 'yield'
outside a function are ignored instead of giving a SyntaxError if th
Armin Rigo added the comment:
For the "too many arguments" case... Clearly (IMHO) it should also
be a TypeError. I have no clue about backward compatibility issues
though.
__
Tracker <[EMAIL PROTECTED]>
<http://bugs.p
Armin Rigo added the comment:
The patch is missing Py_DECREF(name). Also, I'd raise TypeError
instead of ValueError, just like function calls do in a similar
situation.
__
Tracker <[EMAIL PROTECTED]>
<http://bugs.python
Armin Rigo added the comment:
The pure Python implementation we just wrote in PyPy is:
for name, arg in zip(names, args):
    if name in kwds:
        raise TypeError("duplicate value for argument %r" % (
            name,))
    self.__setattr__(name, arg)
for
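Completing that fragment into a runnable sketch (the class, field names, and the "too many initializers" check are illustrative, not ctypes' actual code):

```python
class Record:
    _names_ = ("x", "y")           # hypothetical field names

    def __init__(self, *args, **kwds):
        if len(args) > len(self._names_):
            raise TypeError("too many initializers")
        for name, arg in zip(self._names_, args):
            if name in kwds:
                raise TypeError("duplicate value for argument %r" % (name,))
            setattr(self, name, arg)
        for name, value in kwds.items():
            setattr(self, name, value)

r = Record(1, y=2)
assert (r.x, r.y) == (1, 2)
try:
    Record(1, x=5)                 # 'x' given both positionally and by keyword
except TypeError:
    pass
else:
    raise AssertionError("expected TypeError")
```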
Changes by Armin Rigo:
--
type: -> behavior
__
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1831>
__
___
New submission from Armin Rigo:
The constructor of ctypes structures should probably not silently accept
the bogus arguments shown in the attached example.
--
components: Extension Modules
files: bogus_args.py
messages: 59964
nosy: arigo, cfbolz, fijal
severity: normal
status: open
Armin Rigo added the comment:
The patch looks ok on 2.6, I recommend checking it there. (Due to line
number changes in socketmodule.c, the patch gives a warning, but it is
still otherwise up-to-date.)
--
nosy: +arigo
_
Tracker <[EMAIL PROTEC
Armin Rigo added the comment:
Obscure but reasonable. (I suspect you meant to say that py3k should
return the *unsigned* value for better compliance with the standard.)
__
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/
Armin Rigo added the comment:
The C reference code in rfc1950 for Adler-32 and in rfc1952 for CRC-32
compute with and return "unsigned long" values. From this point of
view, returning negative values on 32-bit machines from CPython's zlib
module can be considered a bug. That o
Changes by Armin Rigo:
--
resolution: -> invalid
status: open -> closed
_
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1285940>
_
__
New submission from Armin Rigo:
The functions zlib.crc32() and zlib.adler32() return a signed value
in the range(-2147483648, 2147483648) on 32-bit platforms, but an
unsigned value in the range(0, 4294967296) on 64-bit platforms. This
means that half the possible answers are numerically
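Python 3 later settled this by making both functions return unsigned values on every platform; the mask recommended in the docs normalizes results from older versions:

```python
import zlib

assert zlib.crc32(b"") == 0        # CRC-32 of empty input
assert zlib.adler32(b"") == 1      # Adler-32 starts at 1 per RFC 1950
# Masking makes a historical signed result match the unsigned one:
assert zlib.crc32(b"spam") == zlib.crc32(b"spam") & 0xffffffff
assert (-2147483648) & 0xffffffff == 2147483648
```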
Armin Rigo added the comment:
We need to start from PyFile_WriteString() and PyFile_WriteObject()
and PyObject_Print(), which are what the PRINT_* opcodes use, and make
sure there is no single fprintf() or fputs() that occurs with the GIL
held. It is not enough to fix a few places that could
New submission from Armin Rigo:
The tp_print slots, used by 'print' statements, don't release the GIL
while writing into the C-level file. This is not only a missing
opportunity, but can cause unexpected deadlocks, as shown in the
attached file.
--
components: Interpr
Armin Rigo added the comment:
Thanks for the patch. Checked in:
* r58004 (trunk)
* r58005 (release25-maint)
--
nosy: +arigo
resolution: -> accepted
status: open -> closed
_
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.o
Armin Rigo added the comment:
Assigning to Brett instead of me (according to the tracker history).
--
assignee: arigo -> brett.cannon
_
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/