Matt del Valle writes:

> Fully agreed on the sentiment that we shouldn't treat compile-time
> literals differently from runtime operations.

But as you just pointed out, we do.  Literals are evaluated at compile time, operations at runtime.  "This" and f"This" generate very different code!  There's nothing in the concept of a literal that prevents us from treating the sequence of tokens 1 / 3 (spaces optional) as a *single* literal and mapping it to Fraction(1, 3).

The question is purely one of UI.  1 / 3 "looks like" an expression involving two int objects and a division operator.  Do we force treatment as a literal (allowing it to be a Fraction), or do we treat it as an expression?  This only matters because the type is different.  It doesn't bother me any more than it bothers Martin, but since it bothers a lot of Pythonistas, that kills it for me.  I admit to being surprised at the vehemence of the pushback, especially from people who clearly haven't understood the proposal.  (Guido's response is another matter, as he thought very carefully about this decades ago.)

> It has no precedent in Python and adds a significant mental burden
> to keep track of.

Only if you want it to.  To me it's much like multiple value returns in Common Lisp: if you don't use a special multiple-values expression to capture the extra values as a list, all you'll see is the principal value.  The analogy is that if you don't do something to capture the Fraction-ness of Martin's ratiofloats, it will (more or less) quickly disappear from the rest of the computation.  I agree that the "more or less" part is problematic in Python: the ratiofloat object itself could persist indefinitely, which could raise issues at any time.

So AFAICS you can basically treat ratiofloats as infinitely precise floats, which lose their precision as soon as they come into contact with finite-precision floats (a toy sketch of what I mean is at the end of this message).  Since most people think of floats as "approximations" (which is itself problematic!), I don't see that it adds much cognitive burden -- unless you need it, as SymPy users do.

> That said, I also really like the idea of better Python support for
> symbolic and decimal math.
>
> How about this as a compromise:
>
> `from __feature__ import decimal_math, fraction_math`

Followed by "import numpy", what should happen?  Should numpy respect those directives?  Should floats received from numpy be converted?  Which takes precedence if both decimal_math and fraction_math are imported?  I don't think this can work very well.  Martin's approach works at all *because* the ratiofloats are ephemeral, while computations involving floats are everywhere and induce "float propagation".

> Downsides:
> - the complexity of adding this new '__feature__' interpreter directive,
>   although it *should* be possible to reuse the existing __future__
>   machinery for it

This is basically a pragma.  "from __future__" was acceptable because it was intended to be temporary (with a few exceptional Easter eggs).  But in general pragmas were considered un-Pythonic.  Aside from __future__ imports, we have PEP 263 coding "cookies", and I think that's it.  I'm pretty sure these features are not important enough to overcome that tradition.

> I don't know.  It was just an idea off the top of my head.  On second
> thought, maybe it's needlessly contrived.

It's an idea.  I think it highly unlikely to pass *in Python*, but that doesn't make it necessarily a bad idea.  There are other languages, and other ways of thinking about these issues.  Reminding ourselves of that occasionally is good!
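
P.S.  To make the "ephemeral exactness" point concrete, here's a toy sketch -- mine, not Martin's actual ratiofloat implementation; the class name RatioFloat and its .exact attribute are invented purely for illustration.  It just shows the propagation behaviour I described: exactness survives while you stay among ratiofloats and silently becomes ordinary float arithmetic the moment an ordinary float gets involved.

    # Toy sketch only, NOT Martin's proposal: a float subclass that
    # remembers the exact Fraction it was built from.
    from fractions import Fraction

    class RatioFloat(float):
        """A float that also carries the exact ratio it came from."""

        def __new__(cls, numerator, denominator):
            self = super().__new__(cls, numerator / denominator)
            self.exact = Fraction(numerator, denominator)  # capture the Fraction-ness
            return self

        def __truediv__(self, other):
            # Exactness survives only while both operands are exact.
            if isinstance(other, RatioFloat):
                exact = self.exact / other.exact
                return RatioFloat(exact.numerator, exact.denominator)
            # Contact with an ordinary float: back to plain float arithmetic.
            return float(self) / other

    third = RatioFloat(1, 3)               # what a "1 / 3" literal might produce
    print(float(third))                    # 0.3333333333333333
    print(third.exact)                     # 1/3 -- recoverable if you ask for it
    print(type(third / RatioFloat(2, 1)))  # still a RatioFloat, still exact
    print(type(third / 0.5))               # plain float; the exactness is gone

A real implementation would have to cover the other arithmetic operations and decide the degradation rules case by case; the only point here is how quickly the Fraction-ness disappears once ordinary floats enter the computation.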

Steve