On Wednesday, 29 August 2018 at 21:14:59 UTC, Paul Backus wrote:
On Wednesday, 29 August 2018 at 19:56:31 UTC, Everlast wrote:
One of the things that makes a good language is its internal syntactic consistency. This makes a language easier to learn and easier to remember. Determinism is a very useful tool, as is abstraction consistency. To say "Just accept D the way it is" is only out of necessity, since that is the way D is, not because it is correct. (There are a lot of incorrect things in the world, such as me "learning" D... I've been programming in D on and off for 10 years; I just never used a specific type for variadics since I've always used a variadic type parameter.)

To justify a poor design choice as necessary is precisely why the poor design choice exists in the first place. These are blemishes on the language, not proper design choices. For example, it is necessary for me to pay taxes, but that does not mean taxes are necessary.

The syntax *is* consistent. In `foo(int[] a...)`, `int[]` is the type of the parameter, and `a` is its name. This is consistent with how all other function parameters are declared. The only difference is in how arguments are bound to that parameter. That's what the `...` signifies: that a single parameter will accept multiple arguments. It's really quite straightforward and orthogonal.

No, it is not! You have simply accepted it as fact, which doesn't make it consistent.

If you take 100 non-programmers (say, mathematicians) and ask them what the natural extension for allowing an arbitrary number of parameters is, knowing that A is a type, [] means array, and ... means an arbitrary number of, they will NOT think A[]... makes sense.

... itself already includes the concept of an array (a list or sequence), so having both [] and ... is either redundant or implies a different meaning.


To prove you are wrong in one fell swoop:

Suppose you want to create a function that takes an arbitrary number of arrays of ints, e.g.,

foo([1,2,3], [4,2], [4,5,6,7])
foo([1,2,3])
foo([4,56,64], [4324,43,43], [4,2,2], [4,4,2,4,4,3,4], ...) // where ... has a specific mathematical meaning (one that everyone familiar with mathematics understands as "continues in the same manner").

How is foo defined in the most direct, type-specified way?

Of course, you will give me the D way, because you assume the D way is the correct, natural way... simply because you start with your conclusion that what D does is correct and natural.

The real natural way is:

foo(int[] a...)

but in D we have to do

foo(int[][] a...)

by your logic.
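For reference, the int[][] form is indeed what compiles in D today; a minimal sketch (using the typesafe variadic syntax from the language spec):

```d
import std.stdio;

// Each argument must be an int[]; they are collected into an int[][].
void foo(int[][] a...)
{
    writeln(a.length, " arrays received");
}

void main()
{
    foo([1, 2, 3], [4, 2], [4, 5, 6, 7]); // 3 arrays
    foo([1, 2, 3]);                       // 1 array
}
```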

Now, the natural way is

foo(int[] a...)

Why?

because bar(int a...) is the natural way to create a function bar that accepts an arbitrary number (the ... tells us this) of ints.

Why? Because bar(int a) would take one int, bar(int a, int b) takes two, and bar(int a, int b, int c) takes three; therefore bar(int a...) takes an arbitrary number of ints.

Having to add [] only adds an extra set of symbols that would mean nothing if ... were correctly interpreted.
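For comparison, what D actually requires today is the bracketed form, bar(int[] a...); a minimal sketch of its current behavior:

```d
import std.stdio;

// D's typesafe variadic: any number of int arguments (or a single
// explicit int[]) are bound to the parameter a.
void bar(int[] a...)
{
    writeln(a);
}

void main()
{
    bar(1);          // one int
    bar(1, 2);       // two ints
    bar(1, 2, 3);    // three ints
    bar([1, 2, 3]);  // an explicit array also works
}
```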

If I'm wrong, then you have to show why a syntax such as bar(int a...) cannot be interpreted unambiguously in the way I have specified.

Again, just because D does it this way doesn't mean it is the best way.

If you want to start from your conclusion that everything that D does is perfectly correct then there is little point in debating this...
