An idea that may be bad, but still seems interesting:

(1 + S. count) -: (1 +&({.@,) count)

so: S. =: &({.@,)


S. is an adverb that guarantees scalar arguments to u, regardless of the
shapes of the original arguments.  This would achieve the optimization you
are seeking (though #@$ is probably fast and easy), does so with documented
user intention, and gives a shortcut to the head atom.
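A quick session sketch of how the proposed adverb would behave (S. is this proposal's name, not an existing J primitive):

```j
   S. =: &({.@,)          NB. proposed adverb: u S. applies u to the head atoms
   count =: 2 3 $ i. 6    NB. a non-scalar argument, for illustration
   1 + S. count           NB. equivalent to 1 + {. , count
1
   (1 + S. count) -: (1 +&({.@,) count)
1
```

Since &({.@,) ravels each argument and takes its head, u always sees atoms, whatever the frames of the originals.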





----- Original Message -----
From: chris burke <[email protected]>
To: Source forum <[email protected]>
Sent: Wednesday, May 11, 2016 10:00 PM
Subject: [Jsource] How common are scalar operations?

From: *Henry Rich* <[email protected]>
Date: 3 May 2016 at 16:36
To: [email protected]


Looking at jtva2, I see that operations on singletons, like

count =: count+1

could be enormously sped up, especially if recognized as in-place.  I'm
keeping this in the back of my mind for future work.

Do we have any idea how common scalar operations are, especially on
problems that would be described as 'bad for J'? I remember long ago
Bernecky said they were pretty common.


Henry


----------
From: *Roger Hui* <[email protected]>
Date: 3 May 2016 at 16:43
To: Henry Rich <[email protected]>
Cc: [email protected]


Very common.  There was a study done in the 1980s on the average length of
an APL vector.  Answer: 1.2 (something like that).  I'll try to dig out the
reference.

Ken and I talked about it once.  We think (without proof) that the average
length in J may be longer, because of things like the LHS of the following:

>:x    x+1
<:x    x-1
{.x    0{x
}.x    1}.x  etc.

I am not sure that you want to spend a lot of resources on this in an
interpreter.  It's the purview of compilers to address this kind of thing,
because in a compiler you can afford to spend more time to speed up
operations on small amounts of data.  In array languages it's SMALL arrays
that are hard, because you don't have many elements over which to
amortize fixed costs.



----------
From: *Henry Rich* <[email protected]>
Date: 3 May 2016 at 17:31
To:
Cc: [email protected]


I won't spend a lot of computer resources on it - I was thinking that right
at the top of jtva2 there would be a line

if((AN(a)|AN(w))<=1) {

and then just vector off to a routine (or maybe have one huge switch
statement) that does the one arithmetic operation and stores the result.
If an in-place operation is detected then not even a single memory
allocation would be needed.  You have to get the type and rank right, but
it would save half a page of code setting up the loops for the general
case.  There seems to be a big difference between one atom and more than
one, in the matter of how lean the code can get.

I will keep this in mind for later.

Henry

----------
From: *Roger Hui* <[email protected]>
Date: 3 May 2016 at 18:11
To: Henry Rich <[email protected]>
Cc: [email protected]


> I'll try to dig out the reference.

Kevin Jordan, "Is APL really processing arrays?" APL Quote Quad, volume 10,
number 1, 1979, http://dl.acm.org/citation.cfm?id=602314
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm 
----------------------------------------------------------------------