grauzone wrote:
Andrei Alexandrescu wrote:
If you have any more thoughts, please make them known. Again, this is the
ideal time to contribute. But "meh, it's a hack" is difficult to discuss.
Well, that's just as if Bjarne Stroustrup asked you: "What would you have
done in my place? It looked like the right thing to do at the time!" And
now we're working on a language that's supposed to replace C++.
I'm not sure what you mean here.
Partly, I don't really know how opBinary is supposed to solve most of the
operator overloading problems (listed in DIP7). It just looks like a
stupid dispatch mechanism. It could be implemented with CTFE and mixins,
without compiler changes: just let a CTFE function generate a dispatcher
from each opSomething to opBinary. Of course, if you think of operators as
something that's forwarded to something else, it'd be nicer if dmd did
this itself, because the code gets shorter. So opSomething gets ditched in
favor of opBinary. But actually, the functionality of opBinary can be
provided as a template mixin or a CTFE function in Phobos. At least then
the user has a choice of what to use.
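For concreteness, a rough sketch of the template-mixin version of that
idea (the names are purely illustrative, not an existing Phobos facility):

// The type defines opBinary; the mixin generates the old-style
// opSomething methods that forward to it, so no compiler change is needed.
mixin template NamedOperatorDispatch() {
    auto opAdd(R)(R rhs) { return this.opBinary!"+"(rhs); }
    auto opSub(R)(R rhs) { return this.opBinary!"-"(rhs); }
    auto opMul(R)(R rhs) { return this.opBinary!"*"(rhs); }
    // ... one forwarder per named operator
}

struct Vec {
    double x, y;
    // one opBinary body covers all the forwarded operators
    Vec opBinary(string op)(Vec rhs) {
        return Vec(mixin("x " ~ op ~ " rhs.x"), mixin("y " ~ op ~ " rhs.y"));
    }
    mixin NamedOperatorDispatch!();
}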
Things could indeed be generated with a CTFE mixin, but first a hecatomb
of names would have to be added: all the floating-point comparison
operators and all the index-assign operators. With the proposed approach
there is no longer a need to add all those names or to have users consult
tables to learn what each one is called.
(About the issue of having to remember names for operator symbols: you
know, C++ has a fabulous idea for getting around this...)
I think that's not as flexible a solution as passing a compile-time
string, because there is no way to actually use that token.
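For example, because the operator arrives as a compile-time string, a
single template body can splice the token right back into an expression.
A minimal sketch, assuming a + b is rewritten to a.opBinary!"+"(b) as
proposed (Wrapper is just an illustrative name):

struct Wrapper(T) {
    T value;
    // one body covers +, -, *, /, ... by mixing the token back in
    auto opBinary(string op)(T rhs) {
        return mixin("value " ~ op ~ " rhs");
    }
}

unittest {
    auto w = Wrapper!int(5);
    assert(w + 3 == 8);   // rewritten to w.opBinary!"+"(3)
    assert(w * 2 == 10);  // rewritten to w.opBinary!"*"(2)
}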
Now what about unary operators?
opUnary.
Or very specific stuff like opApply?
opApply stays as it is.
What about opSomethingAssign (or "expr1[expr2] @= expr3" in general)?
opBinary doesn't seem to solve any of those.
opBinary does solve the opIndex* morass, because it adds only one function
per category, not one function per operator. For example:
struct T {
    // op can be "=", "+=", "-=" etc.
    E opAssign(string op)(E rhs) { ... }
    // op can be "=", "+=", "-=" etc.
    E opIndexAssign(string op)(size_t i, E rhs) { ... }
}
This was one motivation: instead of defining a lot of small functions,
each with its own specific name, define one function for each category of
operations and pass the operator itself as a compile-time token. I don't
understand exactly what the problem is with that.
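For instance, with one such opIndexAssign in hand, a read-modify-write on
an indexed element could be rewritten along these lines (a sketch of the
proposed lowering, keeping the parameter order above; the exact rewrite
rules are for the DIP to pin down):

struct Accumulator {
    double[] data;
    // op is "=", "+=", "-=", "*=", ... -- one body replaces the whole
    // opIndexAssign/opIndexAddAssign/opIndexSubAssign/... family
    void opIndexAssign(string op)(size_t i, double rhs) {
        mixin("data[i] " ~ op ~ " rhs;");
    }
}

// Intended use under the proposed rewrite:
//     a[3]  = 1.5;    // a.opIndexAssign!"="(3, 1.5)
//     a[3] += 2.0;    // a.opIndexAssign!"+="(3, 2.0)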
Also, all this just boils down to a generic
"opOperator(char[] expression, T...)(T args)", where expression is an
arbitrary expression involving the object. It feels like this leads to
nothing.
We need something to work with. "looks like a stupid mechanism" and
"feels" are not things that foster further dialog.
And opBinary is just a rather arbitrary stop on that road to nothing. Just
a hack to make code shorter for some use cases.
opBinary covers the binary operators, hardly a category someone would pull
out of a hat. I'm not sure what you mean to say here.
One way of solving this "extended operator overloading" issue would be to
introduce proper AST macros. An AST macro could match on a leaf of an
expression and replace it with custom code, and that mechanism could deal
with stuff like "expr1[expr2] @= expr3" (and I don't see how opBinary
would solve this... encode the expression as a string? fall back to naive
code if opBinary fails to match?). At least that's what I thought AST
macros would be capable of doing.
See above on how the proposed approach addresses read-modify-write
operations on indexed elements.
Anyway, AST macros got ditched in favor of const/immutable, so that's
not an option.
That I agree with.
Andrei