Kyle Lahnakoski writes:

 > Correct me if I am wrong: Yanghao would like an elegant way to build
 > graphs.  Simply using << to declare the connections in the graph [1] is
 > not an option because << is already needed for legitimate left-shift
 > operation.

This is his claim, yes.

 > The problem is not assignment;

In fact, it is a problem (see Caleb's post, and ISTR Yanghao saying
similar things).

If signals are graph components (links or nodes), they can't be
modeled as Python "variables" (names that can be bound to objects).  A
Python assignment (including augmented assignment) is (in principle)
always the substitution of one object for another in a binding for a
name.  (The old object is dropped on the floor and DECREF'd for GC.)
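
For instance (a toy illustration, nothing HDL-specific), in CPython the
old object is collected as soon as the name is rebound, because its
last reference goes away at that point:

    class Probe:
        def __del__(self):
            print("old Probe collected")

    x = Probe()
    x = 42      # rebind the name; the Probe's last reference is gone,
                # so CPython prints "old Probe collected" right here
    print(x)    # 42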

The trick in augmented assignment is that for objects with mutable state
it's possible to intervene before the binding to arrange that the
object newly bound is the LHS object rather than a newly constructed
object.  (I'm not sure the compiler can do anything special here: given
duck-typing, the interpreter has to check at runtime whether
type(lhsobj) defines __iadd__, invoke it on lhsobj if it does, and
otherwise fall back to __add__ or __radd__ to get a new object.)  For
*regular* assignment, there is no such intervention.  So
you can't "assign to a signal" (whatever that means in Yanghao's HDL)
because it would change the graph.
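
The list/tuple pair in the stdlib shows the dispatch difference
directly:

    lst = [1, 2]
    alias = lst
    lst += [3]              # list defines __iadd__: mutates in place
    print(lst is alias)     # True  -- alias also sees [1, 2, 3]

    tup = (1, 2)
    alias_t = tup
    tup += (3,)             # tuple has no __iadd__: falls back to __add__
    print(tup is alias_t)   # False -- the name was rebound to a new tuple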

For example, suppose you have two transistors connected emitter to
base:

    mysignal = Signal(T1.emitter, T2.base)

and you want to inject the signal 42:

    mysignal = 42

mysignal is no longer a signal, it is now an integer, and the Signal
is on the floor.  If variables were actual objects (a la symbols in
Lisp), in theory assignment could look up an __assign__ dunder on the
object currently bound to the name.  But in Python they're not, and
there is no assign dunder.  Of course this
is an optimization in a sense, but it's fundamental to the way Python
works and to its efficiency.
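
A toy stand-in for Signal (strings standing in for the transistor pins;
not Yanghao's actual class) makes the point that plain assignment never
consults the bound object at all:

    class Signal:
        def __init__(self, *pins):
            self.pins = pins

    mysignal = Signal("T1.emitter", "T2.base")
    mysignal = 42     # no hook on Signal is consulted; the name just moves
    print(mysignal)   # 42 -- the Signal object is dropped (if unreferenced)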

 > rather, Yanghao's HDL requires more operators than Python has in
 > total

That's his claim, yes.  Thing is, he keeps saying that "signals are
just integers", and we have strictly more than enough operator symbols
to model the conventional set of integer operations.

 > (or at least the remaining operators look terrible).

If it's "just" ugly, "get used to it." ;-)  I do have sympathy for
Yanghao's claim (which I suppose is what you mean by that) that

    x @= (y @ z)

would be somewhat confusing, where the "@" indicates matrix
multiplication of signals (eh?) and the "@=" indicates injection of
the signal on the RHS to the signal object x.  It's not obvious to me
that this would occur frequently, and not obvious to me that there
would be confusion since only in special cases (square matrices) does
y @= z even make sense (except maybe as a space optimization if y's
rows are fewer than y's columns).  Naively, I expect that the more
common "signal matrix" operations would be element-wise (expressing
parallel transmission "on the tick").

Note that signal matrices will almost certainly be a completely
different type from signals, so as far as the compiler is concerned
there's no conflict between "@=" for signal injection and "@=" for
signal matrix multiplication.  The "confusion" here is entirely a
readability issue.  (That's not to deprecate it, just to separate it
from the "not enough operators" issue.)

But I don't use HDLs, so maybe frequent occurrence would be obvious if
I saw typical use cases.

 > Here is a half baked idea:

Very much not new.  Be warned: user-defined operators come up every
couple of years, and never get much traction.  There's a basic
philosophical problem: a user-defined operator could be defined
anywhere and be used almost anywhere else.  This means that draconian
discipline (such as restriction to one compact and intuitive DSL!) is
required or readability will be dramatically adversely impacted.  My
experience with Lisp, the C preprocessor, and even to some extent with
Python, suggests this is a real danger.

Granted, it's not one presented by DSLs themselves because (almost by
definition) they are compact and intuitive (some better than others,
of course, but unless your goal is to !!!! brains nobody sets out to
define Brainf!ck :-).

 > "User-defined operators" are a limited set of operators, with no
 > preconceived definition, and can be defined by writing the appropriate
 > dunder method.  User-defined operators match /[!@$%^-+=&*:<>?/|]{2,4}/
 > and necessarily exclude all the existing python-defined operator
 > combinations.  Maybe 50K operators is easier to add to Python than a few
 > pre-defined operators?

Hard to say.  First, you have the problems of precedence and
associativity.  The easy thing to do is to insert all user-defined
operators at a fixed place in the precedence hierarchy and make them
non-associative (effectively requiring parentheses whenever they occur
in a larger expression).  This is unlikely to make users very happy,
especially users of DSLs who (almost by definition ;-) are quite picky
about syntax.

It wouldn't be hard to notate user-specified precedence and
associativity.  Instead of

 > To add a user defined operator to a class, you add it as a method,
 > exactly the name of the operator:
 > 
 > > setattr(A, "<==", A.assign)

something like

    setattr(A, "<==", (A.assign, 'right associative', '+='))

might be usable.  But it should raise a SyntaxAlreadyDefinedError :-)
if conflicting precedence or associativity were to be defined
elsewhere.  This is not solved by setting precedence and associativity at
the module level, of course -- the user wants the expression to "look
clean" for all types that use that operator symbol, but duck-typing
means the compiler can't be sure which type is meant, and so can't
parse unless all types agree on those attributes of the operator
symbol.  Note also that unless the class says otherwise, the A.<==
attribute is mutable.  I guess enforcing immutability could be part of
your @operator decorator, but I can imagine users who want
initialization and assignment to have different semantics while using
the same operator symbol for both (as is traditional).  So the compiler
can't
rely on it, which means interpreting the expression at runtime.  In
general this might mean interpreting *all* expressions at runtime.  I
don't think that's acceptable.
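
For concreteness, here is a sketch (all names hypothetical, including
SyntaxAlreadyDefinedError) of the kind of global registry such a
mechanism would need, with conflicting re-registration rejected:

    _OPERATORS = {}   # token -> (dunder name, associativity, precedence peer)

    class SyntaxAlreadyDefinedError(Exception):
        pass

    def register_operator(token, dunder, assoc, precedence_like):
        entry = (dunder, assoc, precedence_like)
        if token in _OPERATORS and _OPERATORS[token] != entry:
            raise SyntaxAlreadyDefinedError(
                f"{token!r} already registered as {_OPERATORS[token]}")
        _OPERATORS[token] = entry

    register_operator("<==", "__assign__", "right", "+=")
    register_operator("<==", "__assign__", "right", "+=")  # harmless repeat
    try:
        register_operator("<==", "__assign__", "left", "+=")
    except SyntaxAlreadyDefinedError as exc:
        print(exc)    # conflicting associativity is rejected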

You also have the problem that you now need all types that use that
operator symbol to agree on the name and signature of the dunder, or
the compiler is stuck interpreting the subexpression at runtime,
slowing the program down.  (Normally an operator compiles to a single
bytecode that dispatches through the type's slot at runtime, with no
name-based attribute lookup.)
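
That type-level dispatch is easy to see: per-instance dunder attributes
are simply ignored by the operator machinery (a toy demonstration):

    class A:
        def __add__(self, other):
            return "type-level __add__"

    a = A()
    a.__add__ = lambda other: "instance-level"  # ignored by the + operator
    print(a + a)    # "type-level __add__"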

All of these problems are solvable, of course, in particular by
draconian restrictions on the properties of operators that are
user-definable.  Those restrictions might nonetheless be acceptable to
most DSL authors.  Or useful relaxation might be achieved at the
expense of only a bit more complexity.  But we really need a PEP and a
proof-of-concept implementation to see whether the costs (in
complexity and maintainability going forward) would be justified by
the benefits to what is IMO unlikely to be a very large or growing
audience.  Neither PEP nor PoC has been forthcoming in the past, that
I remember anyway.

Steve