On Wed, Aug 28, 2019 at 6:03 AM Andrew Barnert <abarn...@yahoo.com> wrote:
>
> On Tuesday, August 27, 2019, 11:12:51 AM PDT, Chris Angelico 
> <ros...@gmail.com> wrote:
> > If your conclusion here were "and that's why Python needs a proper
> > syntax for Decimal literals", then I would be inclined to agree with
> > you - a Decimal literal would be lossless (as it can entirely encode
> > whatever was in the source file), and you could then create the
> > float32 values from those.
>
> I think builtin Decimal literals are a non-starter. The type isn't even 
> builtin.
>

Not sure that's a total blocker, but in any case, I'm not arguing for
that - I'm just saying that everything up to that point in your
argument would be better served by a Decimal literal than by any
notion of "custom literals".

> But they're not. You didn't even attempt to answer the comparison with 
> complex that you quoted. The problem that `j` solves is not that there's no 
> way to create complex values losslessly out of floats, but that there's no 
> way to create them _readably_, in a way that's consistent with the way you 
> read and write them in every other context. Which is exactly the problem that 
> `f` solves. Adding a Decimal literal would not help that at all—letting me 
> write `f(1.23d)` instead of `f('1.23')` does not let me write `1.23f`.
>

TBH I don't quite understand the problem. Is it only an issue with
negative zero? If so, maybe you should say so, because in every other
way, building a complex out of a float added to an imaginary is
perfectly lossless.
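
(For concreteness, negative zero is the one corner case I know of. A quick sketch in plain CPython, relying only on IEEE-754 addition rules:)

```python
import math

# A complex built from a float plus an imaginary literal round-trips
# exactly for every ordinary value...
z1 = 1.5 + 2.5j
assert z1 == complex(1.5, 2.5)

# ...except for a negative-zero real part: under IEEE-754 rules,
# -0.0 + 0.0 is +0.0, so the sign of the zero is lost by the addition.
z2 = -0.0 + 1j
print(math.copysign(1.0, z2.real))                  # 1.0: sign lost
print(math.copysign(1.0, complex(-0.0, 1.0).real))  # -1.0: sign kept
```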

> Also, I think you're the one who brought up performance earlier? `%timeit 
> np.float32('1.23')` is 671ns, while `%timeit np.float32(d)` with a 
> pre-constructed `Decimal(1.23)` is 2.56us on my laptop, so adding a Decimal 
> literal instead of custom literals actually encourages _slower_ code, not 
> faster.
>

No, I didn't say that. I have no idea why numpy would take longer to
work with a Decimal than a string, and that's the sort of thing that
could easily change from one version to another. But the main argument
here is about readability, not performance.

> Also, as the OP has pointed out repeatedly and nobody has yet answered, if I 
> want to write `f(1.23d)` or `f('1.23')`, I have to pollute the global 
> namespace with a function named `f` (a very commonly-used name); if I want to 
> write `1.23f`, I don't, since the converter gets stored in some 
> out-of-the-way place like `__user_literals_registry__['f']` rather than `f`. 
> That seems like a serious benefit to me.
>

Maybe. But far worse is the very confusing situation where the
registered meaning could differ from one program to another. In
contrast, f(1.23d) would have the same meaning everywhere: call a
function 'f' with one argument, the Decimal value 1.23. Allowing
language syntax to vary between programs is a mess that needs a LOT
more justification than anything I've seen so far.

> > But you haven't made the case for generic string prefixes or any sort
> > of "arbitrary literal" that would let you import something that
> > registers something to make your float32 literals.
>
> Sure I did; you just cut off the rest of the email that had other cases.

Which said basically the same as the parts I quoted.

> And ignored most of what you quoted about the float32 case.

What did I ignore?

> And ignored the previous emails by both me and the OP that had other cases. 
> Or can you explain to me how a builtin Decimal literal could solve the 
> problem of Windows paths?

All the examples about Windows paths fall into one of two problematic boxes:

1) Proposals that allow an arbitrary prefix to redefine the entire
parser - basically impossible for anything sane

2) Proposals that do not allow the prefix to redefine the parser, and
are utterly useless, because the rest of the string still has to be
valid.

So no, you still haven't made a case for arbitrary literals.

> Here's a few more: Numeric types that can't be losslessly converted to and 
> from Decimal, like Fraction.

If you want to push for Fraction literals as well, then sure. But
that's still very very different from *arbitrary literal types*.

> Something more similar to complex (e.g., `quat = 1.0x + 0.0y + 0.1z + 1.0w`). 
> What would Decimal literals do for me there?
>

Quaternions are sufficiently niche that it should be possible to
represent them with multiplication.

quat = 1.0 + 0.0*i + 0.1*j + 1.0*k

With appropriate objects i, j, k, it should be possible to craft
something that implements quaternion arithmetic using this syntax.
Yes, it's not quite as easy as 4+3j is, but it's also far FAR rarer.
(And remember, even regular complex numbers are more advanced than a
lot of languages have syntactic support for.)
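
Something along these lines would do it (a minimal hypothetical
Quaternion class, not any real library; only scalar multiplication and
addition are implemented, which is all the i/j/k syntax needs):

```python
class Quaternion:
    """Hypothetical quaternion type; just enough for w + x*i + y*j + z*k."""
    def __init__(self, w=0.0, x=0.0, y=0.0, z=0.0):
        self.w, self.x, self.y, self.z = w, x, y, z

    @staticmethod
    def _coerce(val):
        # Treat a bare int/float as a pure-real quaternion.
        if isinstance(val, (int, float)):
            return Quaternion(w=float(val))
        return val

    def __add__(self, other):
        other = self._coerce(other)
        return Quaternion(self.w + other.w, self.x + other.x,
                          self.y + other.y, self.z + other.z)
    __radd__ = __add__  # addition is commutative here

    def __mul__(self, other):
        # Scalar multiplication only; full Hamilton products omitted.
        if isinstance(other, (int, float)):
            return Quaternion(self.w * other, self.x * other,
                              self.y * other, self.z * other)
        return NotImplemented
    __rmul__ = __mul__

    def __repr__(self):
        return f"({self.w} + {self.x}i + {self.y}j + {self.z}k)"

# The three imaginary units as module-level constants.
i = Quaternion(x=1.0)
j = Quaternion(y=1.0)
k = Quaternion(z=1.0)

quat = 1.0 + 0.0*i + 0.1*j + 1.0*k
print(quat)
```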

> I think your reluctance and the OP's excitement here both come from the same 
> source: Any feature that gives you a more convenient way to write and read 
> something is good, because it lets you write things in a way that's 
> consistent with your actual domain, and also bad, because it lets you write 
> things in a way that's not readable to people who aren't steeped in your 
> domain. Those are _always_ both true, so just arguing from first principles 
> is pointless. The question is whether, for this specific feature, there are 
> good uses where the benefit outweighs the cost. And I think there are.
>

That line of argument is valid for anything that is specifically
defined by the language. Creating a way to represent matrix
multiplication benefits people who do matrix multiplication. Those of
us who don't work with matrix multiplication on a daily basis,
however, can at least read some Python code and go "ah, a @ b means
matrix multiplication". The creation of custom literals means we can't
do that any more. For instance, you want this:

x = path"C:\"

but that means that it's equally possible for me to create this:

y = tree"  \"    "  \  "

Now, what does that mean? Can you even parse the rest of the script
without knowing what my 'tree' type does?

> In fact, if you're already convinced that we need Decimal literals, unless 
> you can come up with a more feasible way to add builtin Decimal literals to 
> Python, Decimal on its own seems like a sufficient use case for the feature.
>

There are valid use cases for Decimal literals and Fraction literals,
but not, IMO, for custom literals. Look at some of the worst abuses of
#define in C to get an idea of what syntax customization can do to
readability.

ChrisA
_______________________________________________
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/UGFSVY4XPHEH32OA7NCU5CMHS4RF5BBG/
Code of Conduct: http://python.org/psf/codeofconduct/