>
> > I guess '>=' also looks "confusable", but it's far less common in
> > signatures, and the meaning is further away.
>
> It's no less valid than your other examples, nor less common (why
> would you have "==" in a function signature, for instance?).
>

I guess, on reflection, I probably use `==` more often than `>=` in function
calls and signatures.  In a call, I use `==` quite often to pass some boolean
switch value, and `>=` much less often.  Obviously, I am aware that `>=` also
produces a boolean result, and YMMV on how often comparing for equality versus
inequality expresses the flag you want.

In a signature, I reckon I'd really only use it as a "default default." E.g.

    def frobnicate(data, verbose=os.environ.get('LEVEL') == 'DEBUG'):
        ...

This supposes I have an environment variable controlling verbosity that I
usually want to use, but might override on a particular call.
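
So, for example (just a sketch of the calling side):

    frobnicate(data)                # picks up the environment-derived default
    frobnicate(data, verbose=True)  # overrides it for this one call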


> > I think the cognitive complexity of a line with sigils is somewhere
> > around quadratic or cubic on the number of distinct sigils.  But when
> > several look similar, it quickly tends toward the higher end.  And when
> > several have related meanings, it's harder still to read.
>
> It shouldn't need to be. Once you know how expressions are built up, it
> should give linear complexity.
>

I'm not talking about the big-O running time of some particular engine that
parses a BNF grammar here.  I'm talking about actual human brains, which
work differently.

I don't have data for my quadratic and cubic guesses.  Just 40 years of my
own programming experience, and about the same amount of time watching
other programmers.

It would be relatively easy to measure if one wanted to.  But it's a
cognitive psychology experiment.  You need to get a bunch of people in
rooms, and show them lots of code lines.  Then measure error rates and
response times in their answers.  That sort of thing.  The protocol for
this experiment would need to be specified more carefully, of course.  But
it *is* the kind of thing that can be measured in human beings.

So my (very strong) belief is that a human being parsing a line with 5
sigils in it will require MUCH MORE than 25% more effort than parsing a
line with 4 sigils in it.  As in, going from 4 to 5 distinct sigils in the
same line roughly DOUBLES cognitive load.  Here the distinctness is
important; it's not at all hard to read:

    a + b + c + d + e + f + g + h + i


And in the weeds, the particular sigils involved (and even the font they're
rendered in) will make a difference too.  The "semantic proximity" of the
various operators matters as well.  And I'm sure there are other factors too.

> Did the introduction of the @ (matrix multiplication) operator to
> Python increase the language's complexity multiplicatively, or
> additively? Be honest now: are your programs more confusing to read
> because @ could have been *, because @ could have been +, because @
> could have been ==, etc etc etc, or is it only that @ is one new
> operator, additively with the other operators?
>

I'm not sure how much you know about the background of this in the NumPy
world.  While other libraries have since adopted that operator as well, NumPy
was the driving force behind it.

In the old days, if I wanted to do a matrix multiply, I would either do:

    A_matrix = np.matrix(A)
    B_matrix = np.matrix(B)

    result = A_matrix * B_matrix

Or alternatively:

    result = np.dot(A, B)


Neither of those approaches is terrible, but in more complex expressions,
where the dot product is only one part, `A @ B` indeed reads better.
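
For instance (a made-up sketch, nothing more), a normal-equations style
expression reads left to right with `@`, where the old spelling nests
inside out:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((20, 3))   # made-up example arrays
    y = rng.standard_normal(20)

    # Old style: nested np.dot calls read inside-out
    beta_old = np.linalg.solve(np.dot(X.T, X), np.dot(X.T, y))

    # With @, the expression reads like the underlying mathematics
    beta_new = np.linalg.solve(X.T @ X, X.T @ y)

    assert np.allclose(beta_old, beta_new)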

And yes, expressions on NumPy arrays will often use a number of those
arithmetic operators I learned in grade school, as well as `@`.  But
generally, the complexity of the mathematics expressed in NumPy code is
irreducible.  It's not necessarily easy to parse visually, but it *IS* the
underlying mathematics.


> > When I write an expression like 'a - b * c / d**e + f' that also has a
> > bunch of symbols.  But they are symbols that:
> >
> > - look strongly distinct
> > - have meanings familiar from childhood
> > - have strongly different meanings (albeit all related to arithmetic)
>
> The double asterisk wasn't one that I used in my childhood, yet in
> programming, I simply learned it and started using it. What happens is
> that known concepts are made use of to teach others.


I didn't learn the double asterisk in school either.  That I had to learn
in programming languages.  In that one respect (not overall), I actually
prefer the programming languages that use `^` for exponentiation to Python,
because `^` is more reminiscent of a superscript.

> Is that simply because you already are familiar with those operators,
> or is there something inherently different about them? Would it really
> be any different?
>

It's a mixture of familiarity and actual visual distinctness.  `/` and `+`
really do just *look different*.  In contrast, `:=` and `=` really do look
similar.

> Then don't use it in a signature. That's fine. Personally, I've never
> used the "def f(x=_sentinel:=object())" trick, because it has very
> little value


I agree with you here. I am pretty sure I've never used it either.  But
most of the code I read isn't code I wrote myself.
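
For anyone following along, the trick being discussed looks roughly like
this (a sketch; the helper names are made up, and note the walrus needs
parentheses in that position), compared with the plain two-line spelling:

    # Walrus-in-signature: bind _sentinel in the enclosing scope *and*
    # use it as the default, all in the function header.
    def f(x=(_sentinel := object())):
        if x is _sentinel:
            x = []              # late-bound default, computed per call
        return x

    # The ordinary spelling: one extra line, same behavior.
    _missing = object()

    def g(x=_missing):
        if x is _missing:
            x = []
        return x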

In the case of the walrus, I'm not even saying that I think it should have
been prohibited in that context.  Just discouraged in style guides.  While
I understand the handful of cases where walrus-in-signature has a certain
utility, I would be happy enough to forgo those.  But my concern has more
to do with not limiting expressions/symbols/keywords to special one-off
contexts.  That's a relative thing, obviously; for example, `@deco` can
really only appear in one specific place, and I like decorators quite a
lot.  But where possible, symbols or words that can occur in expressions
should be available to all kinds of program contexts, and have *pretty
much* the same meaning in all of them.  And yes, you can find other
exceptions to this principle in Python.

This actually circles back to why I would greatly prefer `def
myfunc(a=later some_expression())` as a way to express late binding of a
default argument.  Even though you don't like a more generalized deferred
computation, and even though a version of PEP 671 that used a soft keyword
would not automatically create such broader use, in my mind that approach
leaves the option of later, more general use open.


> - it makes the function header carry information that
> actually isn't part of the function signature (that the object is also
> in the surrounding context as "_sentinel" - the function's caller
> can't use that information), and doesn't have any real advantages over
> just putting it a line above.
>

I do think some of this comes down to a need I find somewhat mythical.
99%+ of the time that I want to use a sentinel, `None` is a great one.  Yes,
I understand that a different one is required occasionally.  But basically,
`arg=None` means "late binding" in almost all cases.  So that information
is ALREADY in the header of almost all the functions I deal with.
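
That is, the everyday idiom (a minimal sketch) already telegraphs "this
default is computed at call time" right in the header:

    def append_to(item, target=None):
        # `target=None` in the header effectively means "late binding":
        # the real default is created fresh on each call, not once at
        # def time.
        if target is None:
            target = []
        target.append(item)
        return target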

--
Keeping medicines from the bloodstreams of the sick; food
from the bellies of the hungry; books from the hands of the
uneducated; technology from the underdeveloped; and putting
advocates of freedom in prisons.  Intellectual property is
to the 21st century what the slave trade was to the 16th.