Stefan Krah <[email protected]> added the comment:
I agree that caching the hash would be useful for 3.2, but the
request comes at an unfortunate time: 3.2.3 is about to be released,
and there is no way the change can go into it.
So let's focus on the C version in 3.3. These are the timings on a
64-bit machine with the current C version:
int: 0.537806510925293
CachingDecimal: 2.2549374103546143
Decimal: 1.8158345222473145
These are the timings with a hacked C version that caches the hash:
int: 0.5755119323730469
CachingDecimal: 2.3034861087799072
Decimal: 0.4364290237426758
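For reference, here is a minimal sketch of what a hash-caching subclass and
a benchmark loop in the spirit of the numbers above could look like. This is
not the code attached to the issue or the hacked C version; the class body,
the _cached_hash attribute and the benchmarked values are assumptions for
illustration only, and the exact timings will of course differ by machine:

    import timeit
    from decimal import Decimal

    class CachingDecimal(Decimal):
        """Decimal subclass that memoizes its hash on first use (sketch)."""
        def __hash__(self):
            try:
                return self._cached_hash
            except AttributeError:
                h = super().__hash__()
                self._cached_hash = h  # Python-level subclasses may add attributes
                return h

    def bench(value, n=1000000):
        # Time n calls of hash() on the given value.
        return timeit.timeit(lambda: hash(value), number=n)

    print('int:           ', bench(10**100))
    print('CachingDecimal:', bench(CachingDecimal(1e100)))
    print('Decimal:       ', bench(Decimal(1e100)))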
The hash calculation time depends on the size of the coefficient and
the exponent of the Decimal. Note that the context is not applied when
using the Decimal constructor:
>>> Decimal(1e100)
Decimal('10000000000000000159028911097599180468360808563945281389781327557747838772170381060813469985856815104')
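To make the size of that coefficient concrete, here is a small illustrative
snippet (the exact digit count depends on the float, but it is far above the
default context precision of 28):

    from decimal import Decimal, getcontext

    d = Decimal(1e100)
    print(len(d.as_tuple().digits))  # digits in the coefficient: well over 100
    print(getcontext().prec)         # default context precision: 28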
So the numbers you are using have an unusually high precision for
regular decimal floating point arithmetic.
If you want well-defined limits, I suggest using either:
>>> Decimal('1e100')
Decimal('1E+100')
Or, if the input really must be a float:
>>> c = getcontext()
>>> c.create_decimal(1e100)
Decimal('1.000000000000000015902891110E+100')
In the latter case the conversion is of course inexact and rounded
(but hashing will be faster).
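A rough way to see that effect yourself (machine-dependent numbers, just an
illustration, not measurements from the issue): hashing the context-rounded
value is cheaper because its coefficient has only 28 digits instead of 100+.

    import timeit
    from decimal import Decimal, getcontext

    full = Decimal(1e100)                          # full float precision
    rounded = getcontext().create_decimal(1e100)   # rounded to 28 digits

    for name, d in (('full', full), ('rounded', rounded)):
        t = timeit.timeit(lambda: hash(d), number=1000000)
        print(name, t)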
----------
_______________________________________
Python tracker <[email protected]>
<http://bugs.python.org/issue14478>
_______________________________________