On Aug 9, 2019, at 19:09, Greg Ewing <greg.ew...@canterbury.ac.nz> wrote:
> 
> Andrew Barnert wrote:
>> Except that it doesn’t allow that. Using Decimal doesn’t preserve the
>> difference between 1.0000E+3 and 1000.0, or between +12 and 12.
> 
> That's true. But it does preserve everything that's important for
> interpreting it as a numerical value without losing any precision,
> which I think is enough of an improvement to recommend having it
> as an option.

I don’t see why 00.12 vs. 0.12 counts as a problem (even if only a minor one, since 
it only affects very weird code) while 0.12 vs. 0.012E1 isn’t a problem at all. If 
there’s any extra information in the extra 0 in the first case, surely there’s the 
same extra information in the second: in both cases, it’s the difference between one 
leading zero and two. I don’t think this is very important, because 00.12 is not 
something people usually expect to carry any extra meaning (and not something that 
any of the relevant specs give any meaning to), but you did bring it up.
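
(For what it’s worth, Decimal drops both distinctions equally; a quick check at 
the interpreter:)

    >>> from decimal import Decimal
    >>> str(Decimal("00.12")), str(Decimal("0.12")), str(Decimal("0.012E1"))
    ('0.12', '0.12', '0.12')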

More importantly, the OP isn’t asking for preserving mathematical precision. In 
fact, float already gives him 100% mathematical precision for his example. The 
two strings that he’s complaining about are different string representations of 
the same float value. It’s not the value he’s complaining about, but the fact 
that Python won’t give him the same string representation for that float that 
his C++ library does, and therefore dumps(loads(…)) isn’t perfectly 
round-tripping his JSON.
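
(His exact numbers aren’t quoted here, so take these literals as stand-ins, but 
the shape of the problem is:)

    >>> import json
    >>> [json.dumps(json.loads(text)) for text in ("100.0", "1e2", "1.0000E+2")]
    ['100.0', '100.0', '100.0']

All three spellings parse to the same double, and dumps() always picks the one 
spelling Python’s repr would pick, so dumps(loads(text)) can only ever give back 
one of them.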

It would be exactly the same story if the text were +12 or 1.0000E+3, and 
switching from float to Decimal wouldn’t do anything to fix that.
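
(Concretely, these are the strings Decimal would hand back for those two inputs:)

    >>> from decimal import Decimal
    >>> str(Decimal("+12")), str(Decimal("1.0000E+3"))
    ('12', '1000.0')

The explicit sign and the exponent notation are both gone on the way back out, so 
a Decimal-based loads/dumps pair still wouldn’t reproduce the original text byte 
for byte.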

If every library in the world used the same algorithm for float representations 
as Python, identical in every way except that some of them round half away from 
zero instead of rounding to even, then using Decimal would solve that problem. 
(But so would making it easy to specify a different rounding mode…) But that’s 
not the case. (And, even if it were, it still wouldn’t solve any of the other 
problems with JSON not being canonicalizable.)

There are good uses for use_decimal, like when you actually want to pass around 
numbers with more precision than float and know both your producer and consumer 
can handle them. That’s presumably why simplejson offers it.
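
(From memory of simplejson’s API, the legitimate use looks roughly like this; the 
point being that the Decimals survive with their full precision, not that the 
bytes match:)

    >>> import simplejson, decimal
    >>> # assuming simplejson's use_decimal keyword, which is what its docs call it
    >>> s = simplejson.dumps({"price": decimal.Decimal("19.9900")}, use_decimal=True)
    >>> s
    '{"price": 19.9900}'
    >>> simplejson.loads(s, use_decimal=True)
    {'price': Decimal('19.9900')}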

But in this thread, nobody is suggesting any good uses. They’re either suggesting 
we should have it so people can be fooled into believing they can get perfect 
round-tripping of JSON, or so we can solve a mathematical precision problem that 
doesn’t actually exist. If someone had come up with a good way to port use_decimal 
into the stdlib without needing json to import decimal, I would have said it was a 
great idea, but after this thread, I’m not so sure. If everyone who wants it, or 
wants other people to have it, is wrong about what it would do, that just screams 
attractive nuisance rather than useful feature.