eryksun added the comment:
I think the window to change this is closed. Section 16.16.2.7 in the docs
states that for c_int "no overflow checking is done". I'm sure there's code
that relies on that behavior, just as I'm sure there's code that relies on it
for the default conversion.
That said, it does shine a light on an existing inconsistency. c_int's setfunc
(i_set in cfield.c) truncates values that are larger than a C long. This is due
to its use of PyLong_AsUnsignedLongMask in 3.x and PyInt_AsUnsignedLongMask in
2.x.
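To make the truncation concrete, here is a rough pure-Python sketch of what i_set effectively computes (the constant names and the helper are illustrative, not from cfield.c): mask the value to unsigned long width, cast to C int width, then reinterpret as signed.

```python
import ctypes

# Bit widths of the platform's C types (illustrative names).
ULONG_BITS = ctypes.sizeof(ctypes.c_ulong) * 8
INT_BITS = ctypes.sizeof(ctypes.c_int) * 8

def i_set(value):
    # Sketch of i_set in cfield.c:
    # PyLong_AsUnsignedLongMask -> value mod 2**ULONG_BITS
    x = value & ((1 << ULONG_BITS) - 1)
    # the (int) cast keeps only the low INT_BITS bits
    x &= (1 << INT_BITS) - 1
    # reinterpret the bit pattern as a signed C int
    if x >= 1 << (INT_BITS - 1):
        x -= 1 << INT_BITS
    return x

# Matches c_int's setfunc: 2**64+1 masks down to 1, no overflow error.
assert i_set(2**64 + 1) == ctypes.c_int(2**64 + 1).value == 1
assert i_set(-1) == ctypes.c_int(-1).value == -1
```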
For example:
>>> from ctypes import CDLL, CFUNCTYPE, c_int, c_char_p
>>> gsyms = CDLL(None)  # POSIX
>>> printf = CFUNCTYPE(c_int, c_char_p, c_int)(('printf', gsyms))
>>> n = printf(b'%#x\n', 2**64 + 1)
0x1
OTOH, without a prototype the default C int conversion fails in this case
because it calls PyLong_AsUnsignedLong or PyLong_AsLong.
>>> n = gsyms.printf(b'%#x\n', 2**64+1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ctypes.ArgumentError: argument 2: <class 'OverflowError'>: int too long to convert
Section 16.16.1.3 states that "Python integers are passed as the platforms
default C int type, their value is masked to fit into the C type". I think
either ConvParam should change to use PyLong_AsUnsignedLongMask, or the docs
should state that the number has to be in the inclusive range LONG_MIN to
ULONG_MAX (e.g. -2**63 to 2**64-1 on 64-bit Linux), and that otherwise
ArgumentError is raised.
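A sketch of that acceptance range, computed from the platform's C long width (the helper name is mine, not from callproc.c): the default conversion first tries the value as an unsigned long, then falls back to a signed long, so anything in [LONG_MIN, ULONG_MAX] gets through and everything else raises ArgumentError.

```python
import ctypes

LONG_BITS = ctypes.sizeof(ctypes.c_long) * 8
LONG_MIN = -(1 << (LONG_BITS - 1))
ULONG_MAX = (1 << LONG_BITS) - 1

def conv_param_ok(value):
    # Range accepted by the default int conversion in ConvParam:
    # PyLong_AsUnsignedLong covers [0, ULONG_MAX], PyLong_AsLong
    # covers [LONG_MIN, LONG_MAX]; the union is the range below.
    return LONG_MIN <= value <= ULONG_MAX

assert conv_param_ok(ULONG_MAX)
assert not conv_param_ok(ULONG_MAX + 1)   # -> ArgumentError in ctypes
assert conv_param_ok(LONG_MIN)
assert not conv_param_ok(LONG_MIN - 1)    # -> ArgumentError in ctypes
```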
----------
nosy: +eryksun
_______________________________________
Python tracker <[email protected]>
<http://bugs.python.org/issue24747>
_______________________________________