Question:  should the CPython source compile cleanly and work
correctly on (mostly ancient or hypothetical) machines that use
ones' complement or sign-and-magnitude to represent signed integers?

I'd like to explicitly document and make use of the following assumptions
about the underlying C implementation for CPython.  These assumptions
go beyond what's guaranteed by the various C and C++ standards, but
seem to be almost universally satisfied on modern machines:

 - signed integers are represented using two's complement
 - for signed integers the bit pattern 100...000 is not a trap representation;
   hence INT_MIN = -INT_MAX-1, LONG_MIN = -LONG_MAX-1, etc.
 - conversion from an unsigned type to a signed type wraps modulo
   2**N, where N is the width of the signed (destination) type.
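
To make these assumptions concrete, here's a small self-contained C
snippet (mine, not from any existing patch) showing what each one buys
you; a strictly conforming C implementation is free to fail any of
these checks:

    #include <assert.h>
    #include <limits.h>

    int main(void)
    {
        /* Assumptions 1 and 2: two's complement, and 100...000 is a
           valid value rather than a trap, so MIN == -MAX - 1. */
        assert(INT_MIN == -INT_MAX - 1);
        assert(LONG_MIN == -LONG_MAX - 1);

        /* Assumption 3: unsigned -> signed conversion wraps.  C99
           6.3.1.3p3 makes the result implementation-defined when the
           value doesn't fit, so the standard does *not* guarantee
           these. */
        assert((int)UINT_MAX == -1);
        assert((long)ULONG_MAX == -1L);

        return 0;
    }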

Any thoughts, comments or objections?  This may seem academic
(and perhaps it is), but it affects the possible solutions to, e.g.,

http://bugs.python.org/issue7406

The assumptions listed above are already tacitly used in various bits
of the CPython source (Objects/intobject.c notably makes use of the
first two assumptions in the implementations of int_and, int_or and
int_xor), while other bits of code make a special effort to *not* assume
more than the C standards allow.  Whatever the answer to the initial
question is, it seems worth having an explicitly documented policy.
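
To illustrate the first point, the bitwise operations boil down to
something like the following (a simplified sketch, not the verbatim
intobject.c code):

    /* Roughly what Objects/intobject.c's int_and does, minus the
       PyIntObject plumbing: apply & directly to the signed values. */
    static long
    and_longs(long a, long b)
    {
        /* The numeric result depends on how negative numbers are
           represented.  On a two's complement machine -1 is all-ones,
           so -1 & x == x; on a ones' complement machine -1 is
           111...110, so -1 & x would incorrectly clear the low bit
           of x. */
        return a & b;
    }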

If we want Python's core source to remain 100% standards-compliant,
then int_and, int_or, etc. should be rewritten with ones' complement
and sign-and-magnitude machines in mind.  That's certainly feasible,
but it seems silly to do such a rewrite without a good reason.
Python-dev agreement that ones' complement machines should be
supported would, of course, be a good reason.
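
For the record, such a rewrite needn't be complicated.  One
representation-independent approach (a sketch only, with names of my
own invention, assuming long and unsigned long have the same width
and no padding bits) is to do the operation on unsigned values, where
signed-to-unsigned conversion is fully defined by the standard, and
then map the result back into the signed range by hand:

    #include <limits.h>

    static long
    portable_and(long a, long b)
    {
        /* signed -> unsigned conversion wraps modulo 2**N on *any*
           representation, so this step is fully defined. */
        unsigned long ur = (unsigned long)a & (unsigned long)b;

        if (ur <= LONG_MAX)
            return (long)ur;
        /* ur represents the negative value ur - 2**N; recover it
           without an implementation-defined unsigned -> signed cast.
           (On a machine where LONG_MIN == -LONG_MAX this still
           overflows when ur == (unsigned long)LONG_MAX + 1, so a
           real patch would need an extra range check there.) */
        return -(long)(ULONG_MAX - ur) - 1;
    }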

Mark