The current PyLong implementation represents arbitrary-precision integers
in digits of 15 or 30 bits. I presume the purpose is to avoid overflow in
addition, subtraction, and multiplication. But compilers these days offer
intrinsics that let one check the overflow flag and obtain the result of a
64-bit multiplication as a 128-bit number, at least on x86-64, which is the
dominant platform. Is there any reason why this is not done? If it is only
because no one has bothered, I may be able to do it.
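
For concreteness, here is a minimal sketch of the kind of operations I mean,
assuming GCC or Clang on a 64-bit target (the helper names add_overflows and
mul_full are just illustrative; MSVC would need its own intrinsics such as
_addcarry_u64 and _umul128):

#include <stdint.h>
#include <stdio.h>

/* Overflow-checked 64-bit addition via the GCC/Clang builtin;
   returns nonzero if the true sum does not fit in int64_t. */
static int add_overflows(int64_t a, int64_t b, int64_t *out)
{
    return __builtin_add_overflow(a, b, out);
}

/* Full 64x64 -> 128-bit product using the compilers' unsigned
   __int128 type (available on x86-64 and other 64-bit targets). */
static void mul_full(uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo)
{
    unsigned __int128 p = (unsigned __int128)a * b;
    *hi = (uint64_t)(p >> 64);
    *lo = (uint64_t)p;
}

int main(void)
{
    int64_t sum;
    if (add_overflows(INT64_MAX, 1, &sum))
        puts("addition overflowed");

    uint64_t hi, lo;
    mul_full(UINT64_MAX, UINT64_MAX, &hi, &lo);
    printf("product = 0x%016llx%016llx\n",
           (unsigned long long)hi, (unsigned long long)lo);
    return 0;
}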