On Thu, Jan 12, 2017 at 11:07 PM, Nick Coghlan <ncogh...@gmail.com> wrote:
> As far as I know the main barrier to that approach is simply the lack
> of folks with the time, interest, and expertise needed to implement,
> review, and document it, rather than it being an objectionable
> proposal at the language design level. (There would be some concerns
> around potential confusion between when to use the default binary
> literals and when to use the decimal literals, but those concerns
> arise anyway - the discrepancies between binary and decimal arithmetic
> are just one of those unfortunate facts of computing at this point)
Most of the time, when one of my students talks to me about decimal vs binary, they're thinking that a decimal literal (or making the default non-integer literal decimal) would be a panacea for the "0.1 + 0.2 != 0.3" problem. Perhaps the real solution is a written-up explanation of why binary floating point is actually a good thing, and not just a backward-compatibility requirement?

ChrisA
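
As a minimal illustration of that point (using the stdlib decimal module; the exact digits shown depend on the default context precision of 28 significant figures): a decimal type does make the headline example come out as expected, but the rounding error doesn't vanish, it just moves to fractions like 1/3 that have no finite decimal representation:

>>> 0.1 + 0.2
0.30000000000000004
>>> from decimal import Decimal
>>> Decimal('0.1') + Decimal('0.2')
Decimal('0.3')
>>> Decimal(1) / Decimal(3) * 3
Decimal('0.9999999999999999999999999999')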