Dan Bron <[EMAIL PROTECTED]> wrote:
> I don't think  [:  is considered a design mistake.  At least, I don't see
> it that way.  Certainly  [: g h  is a grammatical anomaly, but it's limited,
> and covered by a single extra sentence in the DoJ (and a couple of diagrams).
> For that small cost, it has great utility.

I think it's inelegant, because:
1) [: behaves semantically unlike any other J verb
  (in most cases [:y and x[:y return a domain error (formerly valence error),
   but it doesn't do so as 'f' in 'f g h')
2) It is impossible to write a user-defined version of [:
   In particular, if you write:
      cap =: [:
      fg =: cap f g
   You get the 'normal' behavior of [: (domain error)
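To make the anomaly concrete, here is a small session sketch (the cap case behaves as described above in the J versions of the time; exact behavior may vary by version):

```j
   ([: *: +:) 3     NB. capped fork: *: +: 3
36
   [: 3             NB. [: applied as an ordinary monad
|domain error
   cap =: [:
   (cap *: +:) 3    NB. the copy is not special-cased; domain error, as noted above
```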

This problem would disappear if, say,
1) J added a new data type (say, 'void'),
2) [: was a constant verb that always returned void
3) Dyads with x=void are treated as monads
4) Verbs with y=void generally return domain errors
   (except perhaps [y x[y ]y x]y <y x#y x$y etc.)


> Here are some I can think of:

>    *  The rank of  @  .  Because  @  is shorter than  @:

Yes, it would have made more sense if these were reversed.
This would be particularly useful in the case of &
   f & g        f composed with g (infinite rank)
   f &: g       f composed with g (rank of g)
   m & g        m curried on left of g (infinite rank)
   m &: g       m curried on left of g (rank of g; 1&+ would have rank 0)
etc.

>   *  Contrary to the above, I wish the adverse conjunction  ::  
>      produced a verb with the rank of the nominal (i.e. not 
>      infinite rank).  That is, I want (f :: g b. -: f b.) 0 to hold.

Many of these could have been done with the operator calculus
deprecated after J 4:
   @: =: @ " ].
   adverse =: :: " [.

> I do not agree with  30!:  or Lisp syntax, etc.

Actually, most foreigns currently seem to work like m!:n -> (n+120*m)!: ,
but such a mechanism restricts future expansion; some foreigns (currently
11!:n) require a much larger range for n, which makes this model too
restrictive.

However, it would be nice if the constant mechanism were expanded from
a fixed list of 20 constant verbs to allow any constant to be implicitly
converted into a constant verb.

The language syntax and semantics are compatible with allowing things like
42: 1x1: etc.
One slight change would need to be made to the rhematic rules to allow 'a':
but this would not break any code, since 'a' : v/n is always a domain error.
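For comparison, a sketch of the built-ins and today's workaround (the 42: line is the hypothetical proposal, not current J of the time):

```j
   9: i. 5       NB. one of the 20 built-in constant verbs
9
   42"_ i. 5     NB. today's spelling of an arbitrary constant verb
42
   NB. under the proposal:  42: i. 5  ->  42
```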

>       With the hindsight from 17 years of usage, I would
>       define 2-verb trains differently, viz.
>
>           (f g) y   <->   f    g y
>         x (f g) y   <->   f (x g y)
>
>       And the ranks would be _ _ _

This just removes the need for [: (which I could very well live with, see
above).
However, I think that the semantics of the APL bond operator would be better:
            (f g) y   <->     f g y
          x (f g) y   <->   x f g y
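For contrast, the current hook semantics in a session sketch (the bond semantics proposed above are shown only as a comment, since they are hypothetical):

```j
   (- %:) 16      NB. current hook:  y - %: y  =  16 - 4
12
   3 (- %:) 16    NB. current dyad hook:  x - %: y  =  3 - 4
_1
   NB. under the APL-bond proposal, (- %:) 16 would mean  - %: 16  ->  _4
```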
     
>   *  Roger also said he might reconsider indirect assignment:
>
>      http://www.jsoftware.com/pipermail/general/2006-May/027041.html
> 
>      I personally would not support removing it.

There are certain things that just CANNOT be done without indirect assignment,
so for this reason alone it should be kept in at least some form.
(One can emulate it via ". , but just TRY to assign local operators that way.)
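A sketch of both forms, for reference:

```j
   NB. indirect (multiple) assignment: names computed at run time
   (;:'a b c') =: 1;2;3
   b
2
   NB. partial emulation via execute -- but local operator assignment
   NB. inside an explicit definition cannot be emulated this way
   ". 'd =: 4'
4
```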

>   *  I similarly disagree with his sentiment that 
>      complementary indexing ( (<<<x){y  )  be removed:  
>
>      http://www.jsoftware.com/pipermail/general/2002-January/010896.html

One reason why I think this needs to be there is to allow selection of the
entirety of an axis by using a: to select it; this is analogous to using an
empty index within APL axis bracket notation.
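Both idioms in a short session sketch:

```j
   ] m =: i. 3 4
 0  1  2  3
 4  5  6  7
 8  9 10 11
   (<<<1) { m     NB. complementary indexing: all items EXCEPT item 1
0 1  2  3
8 9 10 11
   (<a:;2) { m    NB. a: selects an entire axis: all rows, column 2
2 6 10
```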



> >  What if we had a hook conjunction?  Would that not make hooks
> >  clearer and easier to recognize than currently?
>
> That possibility is discussed in some of the links I provided.  I personally
> don't find:
>
>     pos =. #~ h. (0 <])
>
> prettier than the current:
>
>     pos =. #~ 0 <]
>
> The only situation I can think of that  h.  would be superior would be in
> long trains where the rightmost is a hook:
>
>     f0 f1 (f2 f3)
>
> Right now, as you can see, I have to enclose my hook in parens, which makes
> it longer and more jarring.  With  h.  I could avoid those:
>
>     f0 f1 f2 h. f3
>
> Subjectively, I find this less jarring.  But it's certainly no shorter.  And
> it has absolutely no advantages over its fork equivalent:
>
>     f0 f1 ] f2 f3
>
> which is the least jarring and shortest of our options.  

I think that this only matters if you define 'superior' in terms of an
'obsessive need to eliminate parentheses'.
As it is, the irregular syntax of hooks (unlike capped forks) means they can
appear unparenthesized only at the left end of a train; anywhere else they
must be parenthesized.

I have also long thought there ought to be a "left hook" mechanism that
is symmetrical to the current "right hook":
   h. =: 2 : '[ u v@]'  NB. current hook (right hook)
   h: =: 2 : 'u@[ v ]'  NB. left hook
I often use these myself (as explicit conjunctions) and find that they
can often remove parentheses as well.
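A usage sketch of both conjunctions (definitions repeated so the session stands alone):

```j
   h. =: 2 : '[ u v@]'   NB. right hook: x u (v y)
   h: =: 2 : 'u@[ v ]'   NB. left hook: (u x) v y
   3 (- h. *:) 4         NB. 3 - *: 4  =  3 - 16
_13
   3 (*: h: -) 4         NB. (*: 3) - 4  =  9 - 4
5
```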

> If we were to eliminate hook, I'd say forgo the conjunction and just force
> people to write the fork  ] g h  .

Actually, this only works for the monad.
If you want it to be dyadic or ambivalent, you have to write
  [ g h@]

> Which goes to the reason I often prefer hooks:  I personally don't like
> seeing too many  [  or  ]  .  One of the reasons I like tacit programming is
> that it doesn't even name its arguments; it is a pure definition.  With
> avg=:+/%#  we simply define  avg  to be the average.  We don't say what it
> is "the average OF".  This is a hard concept to express (do you know any
> good articles on declarative vs imperative languages, or point-free coding?)

Do you find implicit [ and ] within the semantics of hooks less objectionable
than explicit ones? If so, why, since the behavior is identical?
In any case where more than one parameter comes in, and there are multiple
verbs that take multiple parameters, there MUST be a mechanism to specify
which arguments go to which verbs.

> Anyway, a few [] is fine; because then they're just "identity functions",
> equal in standing to  -  or  #  .  But once you start peppering your verbs
> with them, they start becoming stand-ins for argument names; essentially
> replacements for  x y  .  I see no reason to "go tacit" if you're going to
> write your verbs this way.

Well, for one, tacit should be more efficient, since it doesn't end up creating
an entire new stack frame and local name space the way explicit definition does.

> I do concede that the contortions required to avoid  []  completely make
> tacit code hard to follow, read, and maintain.  Lots of  ~  switching things
> around is confusing, and lots of  ()  interrupt the flow (hey, that's why we
> have  [:  ).  But there is a middle ground, and hooks help you get there.

I think that moderation is the key. Any extreme position (such as "no gotos",
"no parentheses", "no hooks", "no conjunctions", "no passives", "no explicit",
etc.) produces code that is contorted away from the optimal to fit some
artificial (and probably sub-optimal) aesthetic.

> Anyway, this entire discussion is predicated upon having a better definition
> for the 2-train (verb bident).  If you don't like hook, don't use it.  But,
> unless you have a better use for its syntax, let me keep it.  The only
> current contender is  f@:g  .  This is of limited utility;
> (f0 (f1 (f2 (f3))))  is no better than  f0@:(f1)@:(f2)@:(f3)  .  In fact,
> I like it less.

Yes. Things with too many parentheses (end (up (looking (too (much (like
(Lisp))))))).

> What if trains (of any length) merely composed their verbs?  What if
> (f0 f1)y <==> f0 f1 y  *and*  (f0 f1 f2)y <==> f0 f1 f2 y  ?  And
> similarly for the dyads?

While this makes certain notations easier (f0@:f1@:f2 y), it makes other
notations (notably forks) virtually impossible to express.

Actually, you COULD define a new adverb, say f:, and have f0`f1`f2 f: be a
fork.
Then again, we already have something like that in `:
You could have two different subfunctions, say `:6 to define trains the old
way, and `:7 to define them the new way.
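For reference, a sketch using `:6 as it exists today (assuming the Dictionary's gerund-to-train behavior; the train is formed from the gerund's verbs):

```j
   avg =: (+/`%`#)`:6    NB. builds the train  +/ % #  from a gerund
   avg 1 2 3 4
2.5
```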

Certainly forks can be beautiful.  We've all seen them solve real life problems
in startlingly beautiful ways.  Witness the clear, stark  +/%#  .

> But more often than a train, I need a pipe.  I have often wished that trains
> were declared some other way, say with  (:f g h):  , leaving  (f g h)y  to
> mean  f g h y .

You can do this now, with little fuss, via [:f [:g h y
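For instance, a three-verb pipe as a capped train (a sketch):

```j
   ([: *: [: +: -) 5     NB. *: +: - 5 : negate, double, square
100
```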

> Was this design being considered before Ken woke from his nap with the idea
> for fork?

This design, while concise, is 100% syntactic sugar - it adds nothing to the
language that could not be done before. (Hooks are currently in the same boat).

On the other hand, forks provide a totally new mechanism to the language,
one that could not be easily emulated without lots of contortions and explicit
definitions, and which greatly extends the power of the language.

One thing Ken seemed to have was a great deal of insight into which things
were truly important vs. which merely seemed "cool" but were not important
at all.

(Something else that people have asked for, but that would be redundant,
is a left-associative insert and insert-scan, that would give K-like behavior:
  +/ a,b,c,d -> ((a+b)+c)+d
  +/\ a,b,c,d -> a, (a+b), ((a+b)+c), (((a+b)+c)+d)
While very useful, these can be easily defined by judicious use of ~ \. and |.,
so creating new primitives to handle them would be redundant.)
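(A sketch of such definitions; note the scan reapplies the fold to every
suffix, so it is O(n^2), unlike a built-in left scan would be:

```j
   NB. left-associative insert: ((a u b) u c) u d, via ~ and |.
   foldl =: 1 : 'u~/@|.'
   - foldl 1 2 3 4         NB. ((1-2)-3)-4
_8
   NB. left-associative scan: fold each prefix, via suffixes of the reversal
   scanl =: 1 : '|.@(u~/\.)@|.'
   - scanl 1 2 3 4
1 _1 _4 _8
```
)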

-- Mark D. Niemiec <[EMAIL PROTECTED]>

----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
