Raymond Hettinger added the comment:

Since the downstream calls to PyMem_Malloc and _PyLong_FromByteArray both 
accept size_t for their sizing, there isn't a problem there.

That said, I think the current limitation nicely protects us from harm.  If you 
were to run getrandbits(2**60) it would take a long time, eat all your memory, 
trigger swapping until your hard drive was full, and you wouldn't be able to 
break out of the tight loop with a keyboard interrupt.

Even with the current limit, the resultant int object is ridiculously big in a 
way that is awkward to manipulate after it is created (don't bother trying to 
print it, jsonify it, or do any interesting math with it).

Also, if a person wants a lot of bits, it is effortless to make repeated calls 
to getrandbits() using the current API.  Doing so would likely improve their 
code and be a better design (consuming bits as generated rather than creating 
them all at once and extracting them later).
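For example, a loop over the existing API can assemble an arbitrarily wide 
value in bounded-size pieces (the helper name and chunk size below are just 
illustrative, not anything in the random module):

```python
import random

def getrandbits_chunked(k, chunk=8192):
    """Build a k-bit random integer from repeated getrandbits() calls.

    Sketch only: each call allocates at most `chunk` bits at a time,
    so no single oversized request is ever made.
    """
    result = 0
    filled = 0
    while filled < k:
        take = min(chunk, k - filled)
        # OR the new chunk into place above the bits gathered so far.
        result |= random.getrandbits(take) << filled
        filled += take
    return result
```

In most real code the better shape is the converse: consume each chunk as it 
is generated instead of accumulating one giant integer at all.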

In short, just because we can do it doesn't mean we should.

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue27072>
_______________________________________