On Monday, 4 September 2017 at 03:08:50 UTC, EntangledQuanta
wrote:
On Monday, 4 September 2017 at 01:50:48 UTC, Moritz Maxeiner
wrote:
On Sunday, 3 September 2017 at 23:25:47 UTC, EntangledQuanta
wrote:
On Sunday, 3 September 2017 at 11:48:38 UTC, Moritz Maxeiner
wrote:
On Sunday, 3 September 2017 at 04:18:03 UTC, EntangledQuanta
wrote:
On Sunday, 3 September 2017 at 02:39:19 UTC, Moritz
Maxeiner wrote:
On Saturday, 2 September 2017 at 23:12:35 UTC,
EntangledQuanta wrote:
[...]
The contexts being independent of each other doesn't
change that we would still be overloading the same keyword
with three vastly different meanings. Two is already bad
enough imho (and if I had a good idea of what to replace
the `in` for AAs with, I'd propose removing that meaning).
Why? Don't you realize that the context matters and [...]
Because instead of seeing the keyword and knowing its one
meaning, you also have to consider the context it appears in.
That is intrinsically more work (though the difference may
be very small) and thus harder.
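For reference, a minimal sketch of two meanings `in` already carries in D today (the names here are illustrative):

import std.stdio;

// Meaning 1: `in` as a parameter storage class (historically const scope).
void describe(in string key, int[string] table)
{
    // Meaning 2: `in` as a binary operator; yields a pointer to the
    // value if the key is present in the associative array, else null.
    if (auto p = key in table)
        writeln(key, " -> ", *p);
    else
        writeln(key, " not found");
}

void main()
{
    describe("one", ["one": 1]);
}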
...
Yes, in an absolute sense it will take more time to have to
parse the context. But that sounds like a case of
"pre-optimization".
I don't agree, because once something is in the language
syntax, removing it is a long deprecation process (years), so
these things have to be considered well beforehand.
That's true. But I don't see how it matters too much in the
current argument. Remember, I'm not advocating using 'in' ;)
[...]
It matters, because that makes it not be _early_ optimization.
If we are worried about saving time, then what about the
tooling? Compiler speed? IDE startup time? Etc.?
All of these take time too, and optimizing one single aspect,
as you know, won't necessarily save much time.
Their speed generally does not affect the time one has to
spend to understand a piece of code.
Yes, but you are picking and choosing. [...]
I'm not (in this case), as the picking is implied by discussing
PL syntax.
So, in this case I have to take the practical view of saying
that it may be theoretically slower, but it is such an
insignificant cost that worrying about it is over-optimization.
I think you would agree, at least in this case.
Which is why I stated I'm opposing overloading `in` here as a
matter of principle, because even small costs add up in the
long run if we get into the habit of just overloading.
I know, you just haven't convinced me enough to change my
opinion that it really matters at the end of the day. It's
going to be hard to convince me, since I really don't feel as
strongly as you do about it. That might seem like a
contradiction, but...
I'm not trying to convince you of anything.
Again, the exact syntax is not important to me. If you really
think it matters that much to you and it does (you are not
tricking yourself), then use a different keyword.
My proposal remains to not use a keyword and just upgrade
existing template specialization.
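For reference, the existing specialization mechanism this refers to (how exactly an "upgrade" would extend it is left open here) picks the most specialized match at instantiation time:

import std.stdio;

// `T : Pattern` specializes a template parameter; the most
// specialized matching overload wins at instantiation time.
void info(T)(T x)          { writeln("generic: ", x); }
void info(T : int)(T x)    { writeln("int: ", x); }
void info(T : U[], U)(T x) { writeln("array of ", U.stringof); }

void main()
{
    info(3.14);      // generic: 3.14
    info(42);        // int: 42
    info([1, 2, 3]); // array of int
}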
[...]
You just really hadn't stated that principle in any clear way
for me to understand what you meant until now. I.e., stating
something like "... as a matter of principle" without stating
which principle is ambiguous, because some principles are not
real. Some base their principles on fictitious things, some on
abstract ideals, etc. Basing something on a firmly established
principle is meaningful.
I've stated the principle several times in varied forms of
"syntax changes need to be worth the cost".
I have a logical argument against your absolute restriction
though... in that it causes one to have to use more
symbols. I would imagine you are against stuff like using
"in1", "in2", etc., because they are visibly too close to each
other.
It's not an absolute restriction, it's an absolute position
from which I argue against including such overloading on
principle.
If it can be overcome by demonstrating that it can't
sensibly be done without more overloading and that it adds
enough value to be worth the increased overloading, I'd be
fine with inclusion.
[...]
To simplify it down: do you have the same problems with all
the ambiguities that already exist in almost all programming
languages, which everyone is ok with on a practical level on a
daily basis?
Again, you seem to mix ambiguity and context sensitivity.
W.r.t. the latter: I have a problem with those occurrences
where I don't think the costs I associate with it are
outweighed by its benefits (e.g. the `in` keyword's
overloaded meaning for AAs).
Not mixing; I exclude real ambiguities because they have no
real meaning. I thought I mentioned something about that way
back when, but who knows... Although, I'd be curious whether
any programming languages exist whose grammar is ambiguous
and could actually be realized?
Sure, see the dangling else problem I mentioned. It's just that
people basically all agree on one of the choices and all stick
with it (despite the grammar being formally ambiguous).
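A minimal sketch of that classic case:

import std.stdio;

void main()
{
    bool a = true, b = false;
    // The grammar alone is ambiguous: the `else` could attach to
    // either `if`. By the convention virtually everyone has settled
    // on (D included), it binds to the nearest unmatched `if`.
    if (a)
        if (b)
            writeln("a and b");
        else
            writeln("a and not b"); // pairs with `if (b)`
}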
[...]
Why do you think that? Less than ten people have
participated in this thread so far.
I am not talking about just this thread, I am talking about
all threads and all things in which humans attempt to
determine the use of something. [...]
Fair enough, though personally I'd need to see empirical
proof of those general claims about human behaviour before I
could share that position.
Lol, you should have plenty of proof. Just look around. [...]
Anecdotes/generalizing from personal experience do not equate
to proof (which is why they're usually accompanied by things
like "in my experience").
There is no such thing as proof in life. [...]
There is a significant difference between generalizing from one
person's point of view and following the scientific method in
order to reach reproducible results (even in soft sciences).
I'd like to see such a feature implemented in D one day, but
I doubt it will be, for whatever reasons. Luckily D is powerful
enough to still get at a solid solution... unlike some
languages, and I think that is what most of us here realize
about D and why we even bother with it.
Well, so far the (singular) reason is that nobody who wants
it in the language has invested the time to write a DIP :p
Yep.
I guess the problem with the D community is that there are no
real "champions" of it.
There are, for the specific points that interest them. Walter
is currently pushing escape analysis (see his DConf 2017 talk
about how much DIP1000 improves the situation there).
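A minimal sketch of what that checking rejects (assuming the -dip1000 preview switch; the exact error message will differ):

int* leaked;

// With `dmd -dip1000`, a `scope` parameter may not escape the
// function in @safe code, so this assignment is rejected:
void capture(scope int* p) @safe
{
    leaked = p; // error under -dip1000: `p` escapes via a global
}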
Andrei pushed std.experimental.allocator, which is still being
improved to reach maturity.
We also have quite a few people who have championed DIPs in
recent months (one I especially care about is DIP1009, btw).
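For the curious, the expression-based contract syntax DIP1009 proposes looks roughly like this (a sketch based on the draft; details may still change):

// Preconditions and postconditions as single expressions instead
// of the current `in { ... }` / `out { ... }` blocks:
int sqrtFloor(int x)
in (x >= 0)             // precondition
out (r; r * r <= x)     // postcondition, `r` names the return value
{
    import std.math : sqrt;
    return cast(int) sqrt(cast(double) x);
}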
Many of the contributors here do not make money off of D in any
significant way and hence do it more as a hobby. So the
practical side of things prevents D from really accomplishing
great heights (at this point).
I actually disagree with the conclusion. In my experience, things
primarily done for money (especially in the software business)
are pretty much always done at the worst quality one can
get away with.
What we know for sure is that if D only progresses at a certain
"rate", it will be overtaken by other languages and eventually
die out.
I don't see this happening anytime soon, as all other native
system PLs are so far behind D in terms of readability and
maintainability that it's not even funny anymore.
Regardless, should that unlikely scenario happen, that's okay,
too, because in order for them to actually overtake D, they'll
have to incorporate the things from D I like (otherwise they
won't actually have overtaken it in terms of PL design).
This is a fact, as it will happen (everything that lives dies,
another "overgeneralization" born of circumstantial evidence,
but one that everyone should be able to agree on...). D has to
keep up with the Kardashians if it wants to be cool...
unfortunately.
I can't speak for anyone else, but I'm not using D because I
think D wants to be cool (I don't think it does); I use it
because more often than not it's the best tool available, and I
believe the people who designed it actually cared about its
quality.