(Long response.)

>  How clear is it to you what is going on [in]
>     3435 = +/ ^~ 3 4 3 5

Clear as day.  Immediately, and without any information or supporting
material, I read this line as "a number is equal to the sum of its digits
raised to their own powers". 
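For readers who don't run J, the equality itself is trivial to check in
any language.  A throwaway Python sketch of the same arithmetic, with the
digit list simply written out by hand:

```python
# 3^3 + 4^4 + 3^3 + 5^5
digits = [3, 4, 3, 5]
total = sum(d ** d for d in digits)
print(total)  # 3435, so the equality holds
```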

J has the clearest expression of a Munchausen equality of any programming
language I know (or knew).  I would argue that the J is even clearer than
the (inconsistent) standard mathematical notation, though that notation is
certainly clearer than the other programming languages.

>  *how* is  it clear to you?

Well, for one thing, the J description was incredibly concise: three
operations and a total of 4 non-digit characters.  Not too much space to get
lost in.  J's brevity is important and should not be discounted.  However,
in this case, I think J has an even more important property:  it is
functional.

This benefit is hard to describe for someone entrenched in a Von Neumann
programming language.  The very absence of the scaffolding that teaches
the compiler *how* to calculate a Munchausen equality allows me to focus
entirely on the Munchausen equality itself.  J handles all the
housekeeping of actually doing the calculations.  But the kicker is I
don't need the calculations done!  I could understand (if perhaps not
verify) the assertion just as clearly if J were just a notation, and no
interpreter had ever been invented that actually calculated it [1].  

And this is not an academic question.  I hit this hurdle all the time when
I'm trying to read someone else's loopy code.  For example, when I'm
trying to implement a J version of some RosettaCode task, I often find
the English of the description a bit ambiguous or underspecified.  Or I'll
understand the English but won't feel like inventing an algorithm to
implement it.

So, in cases like that, I'll try to read the (e.g.) Java or C
implementation.  But I don't want to read the code in detail.  What I want
is a high-level understanding of what the code does; I don't care *how* it
does it.  But I'm constantly stymied by these languages, because *what*
their programs do and *how* they do it are inextricable.  Even if I can
abstract away from the Java-ness or C-ness to a high-level pseudo-code, I
am still frustrated: for example, I cannot understand what a loop does by
just looking at its heart, I must also understand its perimeter [2].

I'm sorry if that's not clear.  It's a hard feeling to express.  The short
story is I often resort to the mathematical description of the task
because it's a clearer, shorter expression of the goal than any of the
actual working programs.


>   isMunchausen =: = +/ o ^~ o digits

But here's the wrinkle.  I actually find the noun phrase easier to read and
understand than the verbal abstraction.  This may just be an instance of
the rule that a concrete example is often easier to understand than an
abstract (general) rule.

But I'm not convinced.  Because even though I am fluent in tacit J, I
find I can grok:

>   isMunchX =: verb :'y = +/ ^~ digits y'

faster than the tacit equivalent.  Not by much (maybe a heartbeat), but
noticeable.  But probably not for the reasons Ron Jeffries expects.  The
reason is that I find the composition conjunctions intrusive
(whether it's spelled  o  or  @:  or even  [:  is irrelevant).
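To make the comparison concrete for non-Jers, here is a rough Python
transliteration of the explicit form.  The helper  digits  is hand-rolled
here via the decimal string (the same trick as the  ".0@:":  spelling
mentioned in footnote [4]); the names are mine, not part of the original:

```python
def digits(n):
    # Stand-in for the J helper: decimal digits read off the string form.
    return [int(c) for c in str(n)]

def is_munch(n):
    # The explicit body  y = +/ ^~ digits y  read right to left:
    # digits, each raised to its own power, summed, compared with y.
    return n == sum(d ** d for d in digits(n))

print(is_munch(3435))  # True
print(is_munch(3436))  # False
```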

If J's trains had an implicit composition rule, rather than an implicit
hook/fork rule, such that 

     = +/ ^~ digits  <==>  3 :'y = +/ ^~ digits y' : (4 :'x = +/ ^~ digits y')

then I could grok the tacit faster (again, only slightly).  But the fork
rule allows us to express even more complex things briefly, so I think that
heartbeat is a fine tradeoff (though I wouldn't mind a special set of
parens that interpreted the enclosed train as a pipe [3]).
  
In either case, the phrase is easy to understand because it contains so few
ideas, and because the details of those ideas are hidden from us,
thankfully.  But what about when the details aren't hidden?

For example, take:

>   digits =: 10&#.^:_1

To Ron Jeffries, this may indeed look like a meaningless jumble of ASCII. 
But for a Jer, there is no problem either lexically or syntactically. 
Even at a glance I can say it's 10 bonded to the inverse of  #.  .  The
first question for a Jer would be "what is  #.  , and what is its
inverse?".

Now, in this case,  10&#.^:_1  happens to be an idiom that anyone would
recognize.  It means the decimal digits of a number.  But if I didn't know
what the word  #.  or its inverse meant, I'd have to look it up in the
Dictionary (how surprising!).  After that I'd just have to memorize its
meaning, and then I'd have a useful tool to subordinate detail.  Just like
in English.  Or, for that matter, any other programming language [4].
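(The arithmetic behind the idiom can be sketched in Python with divmod;
this is only an illustration of what  10&#.^:_1  means, not of how J
computes it:)

```python
def digits(n):
    # Decimal digits of a non-negative integer, most significant first,
    # peeled off by repeated division by the base (here 10).
    if n == 0:
        return [0]
    out = []
    while n:
        n, d = divmod(n, 10)
        out.append(d)
    return out[::-1]

print(digits(3435))  # [3, 4, 3, 5]
```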

-Dan

[1]  Of course, whether I would be as fluent in J as I am today, if there
were never an interpreter invented, is a different, but relevant, question.


[2]  Compare:

     int result = 0;
     for (int i = 0; i < array.size; i++)  //  Even the phrase array.size is optimistic!
     {
        result += array[i];
     }

vs
 
     +/ array                             NB.  Even handles integer overflow!

If the first seems easy enough to you, then how about:

    int result = 0;
    for (int i = 0; i < array.size; i++)
    {
        result *= array[i]
    }

vs
     */  array                             NB.  Even handles integer overflow!


Which of the two is easier to compare?  Did you spot the change in the
second pseudocode?  How about the 2 errors?  J didn't leave much room for
errors (some might claim the errors would be easier to spot with a better
naming scheme; my response is that the J didn't even need a naming scheme).

Also, J can (and does) optimize these functions so that the programmer
doesn't have to.  If there's a fancy new machine instruction for "add a
bunch of ints", I get the upgrade for free.  Everywhere I sum.  Of course,
if Roger doesn't bother to use the fancy new instruction, then I'm stuck
with the older, slower one.  But that's a tradeoff I'm willing to make so
that I don't even have to think about it.
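The same division of labor exists outside J, of course: any language with
built-in reductions hands you the library's best implementation for free.
A Python rendering of the two reductions above (where, as a bonus, the
overflow question doesn't even arise, since Python ints are unbounded):

```python
import math

array = [3, 1, 4, 1, 5]
print(sum(array))        # 14 -- the analogue of  +/ array
print(math.prod(array))  # 60 -- the analogue of  */ array
```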

Still not convinced?  Still think the examples above allow you to focus on
the heart of the loop and not its skin?  OK, how about:

     
   int[] result = new int[math.ceiling(((float) array.size) / 2.0)];
   for (int i = 0; i < array.size; i += 2)
   {
      result[i/2] = array[i] + array[i+1];  // breaks when array.size is odd
   }

vs
   
   _2 +/\ array

... I only added 3 characters to the J function, and two of them form a
parameter (the pseudocode is fixed at a width of 2).  Bonus:  with the J,
all the corner cases are handled for me.  Including higher-rank arrays.  
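For comparison, the non-overlapping pairwise sum the loop is reaching for,
in Python; slicing quietly absorbs the odd-length tail, which is exactly
the kind of corner case the pseudocode above trips on:

```python
array = [1, 2, 3, 4, 5]
pairs = [sum(array[i:i + 2]) for i in range(0, len(array), 2)]
print(pairs)  # [3, 7, 5] -- the lone trailing 5 passes through unharmed
```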

As you can see, the Von Neumann loops get messy and verbose quickly.

[3]  See http://www.jsoftware.com/pipermail/general/2005-August/023867.html
.  There's also an interesting thread about just making this the "default"
definition of a train, see
http://www.jsoftware.com/pipermail/general/2007-December/031329.html and
the responses.  Of course, we could do this today with some utility
operators.  See
http://www.jsoftware.com/jwiki/System/Interpreter/Requests#verbpipelines
(for a request for the interpreter to have a native implementation of one
such operator) and
http://www.jsoftware.com/pipermail/programming/2009-July/015565.html which
could also be used for the purpose.


[4] You were able to give this idiom a name so that the "top level"
function was kept clear and simple; I could read the word "digits",
understand what it implied, and not even care how it was implemented.  You
could've used  "."0@:":  for example.  This concept should be familiar to
any programmer.

----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
