Tim Peters <t...@python.org> added the comment:

A practical caution about this in comb_with_side_limits.py:

    Pmax = 25           # 25         41
    Fmax = Pmax

It's true then that 25! == F[25] << S[25], but that's only so in Python: the 
result has 84 bits, so 64-bit C arithmetic isn't enough.
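
A quick Python check of that bit count (a sketch; it assumes F[25] is the odd 
part of 25! and S[25] its power-of-2 count, which is how the identity reads):

    from math import factorial

    f25 = factorial(25)                   # exact, thanks to Python's bigints
    s25 = (f25 & -f25).bit_length() - 1   # trailing zero bits, i.e. the power of 2 in 25!
    assert f25 == (f25 >> s25) << s25     # the 25! == F[25] << S[25] identity, per the assumption above
    print(f25.bit_length())               # 84 - doesn't fit in 64 bits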

That's apparently why mathmodule.c's static SmallFactorials[] table ends at 20 
(the largest n such that n! fits in 64 bits).

(Well, actually, it ends at 12 on Windows, where sizeof(long) is 4, even on 
"modern" 64-bit boxes)

I would, of course, use uint64_t for these things - "long" and "long long" are 
nuisances, and not even attractive ones ;-) While support (both software and 
HW) for 64-bit ints progressed steadily in the 32-bit era, the same does not 
appear to be true of 128-bit ints in the 64-bit era. Looks to me like 64-bit 
ints will become as much a universal HW truth as, say, 2's-complement and byte 
addressing have become.

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue37295>
_______________________________________