Raul Miller-4 wrote:
>
>
> I am not sure why you would think that supporting operations like
> 1 + 2 + 3 + 4
> increases the complexity of a notation.
>
> More specifically, I would argue that J's parser is not significantly
> more complex than LISP's.
>
The argument was that if the parser handles parentheses,
it doesn't need to parse dyads, forks, trains, etc.
(Grammatically speaking, I could ask you exactly the same
question that I asked while reading Iverson's Appendix:
"increases relative to what?" In both cases the answer is:
relative to a LISP-like, fully parenthesized notation.)
This is quite easy to show, as long as your
notation includes parsing of parentheses:
1+2+3+4 === (1+(2+(3+4))) === (+ 1 (+ 2 (+ 3 4)))
To parse the last, LISP-like expression we don't need any rules
for parsing dyads, as we do in the first. Since both parsers
parse tokens and parentheses, the LISP-like parser has the lower
Kolmogorov complexity: it needs no code at all for parsing dyadic
operations.
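To make the point concrete, here is a minimal sketch (my own illustration, not anyone's production parser) of a LISP-like prefix reader: it needs exactly two rules, one for atoms and one for parentheses, and no dyad or precedence machinery at all.

```python
def tokenize(s):
    # Split "(" and ")" off from atoms into a flat token list.
    return s.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    # Recursive descent: one rule for "(", one rule for atoms.
    tok = tokens.pop(0)
    if tok == "(":
        lst = []
        while tokens[0] != ")":
            lst.append(parse(tokens))
        tokens.pop(0)  # drop the closing ")"
        return lst
    return int(tok) if tok.isdigit() else tok

def evaluate(tree):
    # Apply the operator at the head to the evaluated arguments.
    if isinstance(tree, list):
        op, *args = tree
        vals = [evaluate(a) for a in args]
        if op == "+":
            return sum(vals)
        raise ValueError("unknown operator: " + str(op))
    return tree

print(evaluate(parse(tokenize("(+ 1 (+ 2 (+ 3 4)))"))))  # -> 10
```

Any grammar that additionally supports 1+2+3+4 must contain all of the above plus extra rules for infix dyads, which is the complexity comparison being made.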
Now let's compare ALGOL-, LISP-, and APL-like notations:
1) ALGOL-like: a*x^p + b*y^q
complex parser, programmer must know precedence rules,
but the notation is "just like in math".
2) LISP-like: (+ (* a (^ x p)) (* b (^ y q)))
complex parser, programmer must know precedence rules,
very simple parser, unambiguous meaning, but many parentheses.
3) APL-like: (a*x^p)+ b*y^q
simple parser; only the "right-to-left rule" to learn.
Expressions may be tricky for a beginner to read and write
because their algebra differs, as in a*b+c versus c+a*b,
but this algebra also has powerful abbreviating features:
a*(b+c) from math is written more briefly as a*b+c.
(J further reduces the need for parentheses through, e.g., the
capped fork: F @:(f g h) === [: F f g h etc.)
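The "right-to-left rule" for case 3 can also be sketched in a few lines (again my own illustration, assuming single-character tokens): every operator takes everything to its right as one operand, so no precedence table is needed.

```python
def eval_rl(tokens):
    # APL/J-style evaluation: the rightmost operator binds first,
    # i.e. each operator's right operand is the whole rest of the
    # expression. Only + and * are supported in this sketch.
    left = int(tokens.pop(0))
    if not tokens:
        return left
    op = tokens.pop(0)
    right = eval_rl(tokens)  # recurse over everything to the right
    return left + right if op == "+" else left * right

print(eval_rl(list("2*3+4")))    # right-to-left: 2*(3+4) = 14
print(eval_rl(list("1+2+3+4")))  # 1+(2+(3+4)) = 10
```

Note how 2*3+4 yields 14 here, not the 10 that ALGOL-style precedence rules would give; that is exactly the "different algebra" a beginner has to absorb.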
--
View this message in context:
http://www.nabble.com/right-to-of-left----what-about-left-to-right--tp18155817s24193p18893815.html
Sent from the J Chat mailing list archive at Nabble.com.
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm