Tim Peters <[email protected]> added the comment:
Vedran, as Mark said, the result is defined to have no trailing zeroes. In
general the module strives to return results "as if" infinite precision were
used internally, not to actually _use_ infinite precision internally. ;-) Given
the same setup as in the original report (an unrounded MAX_PREC context), e.g.,
>>> i * decimal.Decimal(0.5)
Decimal('2.0')
works fine.
This isn't purely academic. The `decimal` docs, at the end:
"""
Q. Is the CPython implementation fast for large numbers?
A. Yes. ...
However, to realize this performance gain, the context needs to be set for
unrounded calculations.
>>> c = getcontext()
>>> c.prec = MAX_PREC
>>> c.Emax = MAX_EMAX
>>> c.Emin = MIN_EMIN
"""
I suggested this approach to someone on StackOverflow who was trying to
compute and write out the result of a multi-hundred-million-digit integer
exponentiation. That worked fine, and was enormously faster than using
CPython's bigints.
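A minimal sketch of the kind of thing I mean (the numbers are toy-sized
stand-ins; the StackOverflow case involved a vastly larger power), e.g.,
>>> import decimal
>>> c = decimal.getcontext()
>>> c.prec = decimal.MAX_PREC
>>> c.Emax = decimal.MAX_EMAX
>>> c.Emin = decimal.MIN_EMIN
>>> x = decimal.Decimal(3) ** 100000   # exact under this unrounded context
>>> len(str(x))   # Decimal -> str is cheap, unlike huge int -> str
47713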
But then I noticed "trivial" calculations - like the one here - blowing up with
MemoryError too. That made sense for, e.g., 1/7, but not for 1/2.
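For concreteness, under that same unrounded context both of these die the
same way (tracebacks elided):
>>> decimal.Decimal(1) / 7   # inexact; needing ~MAX_PREC digits makes sense
...
MemoryError
>>> decimal.Decimal(1) / 2   # exactly 0.5, yet it fails just the same
...
MemoryError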
I haven't looked at the implementation. I assume it's trying in advance to
reserve space for a result with MAX_PREC digits.
It's not limited to division; e.g.,
>>> c.sqrt(decimal.Decimal(4))
...
MemoryError
is also surprising.
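For contrast, dial the precision back to anything merely finite (an arbitrary
500 digits here, just for illustration) and the same kinds of operations
behave as expected:
>>> c.prec = 500
>>> decimal.Decimal(4) / 2
Decimal('2')
>>> c.sqrt(decimal.Decimal(4))
Decimal('2')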
Perhaps the only thing to be done is to add words to the part of the docs
_recommending_ MAX_PREC, warning about some "unintended consequences" of doing
so.
----------
_______________________________________
Python tracker <[email protected]>
<https://bugs.python.org/issue39576>
_______________________________________