APVs are only used in the result of 5!:5 and not more
generally.  Previous experience with APL interpreters showed
that APVs (as a datatype) were not worth the extra complexity.
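For context, an APV stores the vector start, start+step, ..., start+step*(length-1) as just three numbers instead of length elements, so some operations cost O(1). A minimal Python sketch of the idea (the class and method names are hypothetical illustrations, not J's or any APL's implementation):

```python
# Hypothetical sketch of an arithmetic progression vector (APV):
# start, start+step, ..., start+step*(length-1) is stored as
# three numbers instead of `length` elements.
class APV:
    def __init__(self, start, step, length):
        self.start, self.step, self.length = start, step, length

    def add_scalar(self, k):
        # Adding a scalar only shifts the start; O(1) work.
        return APV(self.start + k, self.step, self.length)

    def add(self, other):
        # The sum of two equal-length APVs is again an APV; O(1).
        assert self.length == other.length
        return APV(self.start + other.start, self.step + other.step,
                   self.length)

    def to_list(self):
        # Expand to an ordinary vector (the cost an APV avoids).
        return [self.start + i * self.step for i in range(self.length)]

# i.5 in J is 0 1 2 3 4, i.e. APV(0, 1, 5).
i5 = APV(0, 1, 5)
print(i5.add(i5).to_list())         # (i.5) + i.5  -> [0, 2, 4, 6, 8]
print(i5.add_scalar(10).to_list())  # 10 + i.5     -> [10, 11, 12, 13, 14]
```

The complexity cost referred to above is that every primitive in the interpreter would need an APV case (or a conversion) on top of the ordinary array case.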



----- Original Message -----
From: Devon McCormick <[email protected]>
Date: Monday, February 15, 2010 17:24
Subject: Re: [Jchat] [Jgeneral] Upgrade to a quad core
To: Chat forum <[email protected]>

> Since, on my "Core(TM)2 Duo CPU T6400 @ 2.00 GHz",
> 
>    6!:2 '(i.1e6)+i.1e6'
> 0.016344815
> 
> it's unclear even this would benefit sufficiently from parallel
> execution to make it worthwhile.
> 
> For any arbitrary function on arbitrary arguments, it's even
> less clear that fine-grained parallelism is worth the trouble.
> I'm not arguing that it would never be worthwhile, just that
> it's unclear and requires a lot of effort to clarify, even for
> a specific function, much less for the general case.
> 
> "i." is probably a particularly unworthy basis for deciding to
> go parallel given Roger's recent comments indicating that J's
> already doing something with APVs (arithmetic progression
> vectors) for lengthy vectors.
> 
> On Mon, Feb 15, 2010 at 2:06 PM, Matthew Brand
> <[email protected]> wrote:
> > Recognising that 1 2 + 2 4 is a loop with 2 iterations, the
> > decision could be made to do it in serial.
> >
> > Recognising that (i.1000000) + (i.1000000) is a loop with many
> > iterations, the decision could be made to explore doing "+" in
> > parallel.
> >
> > Isn't this the type of decision that i. does for algorithm
> > selection?
> >
> > On 15 February 2010 18:49, Devon McCormick
> > <[email protected]> wrote:
> >
> > > Raul - yes - there's always been a lot of hand-waving magic
> > > about the benefits of parallel processing but many a pretty
> > > ship of theory has foundered on the cold hard rocks of fact.
> > > Until you consider a specific problem and do the work, you
> > > can't make any claims about the benefits.
> > >
> > > In fact, it's easy to offer a "dis-proof of concept": parallelize
> > >
> > >   1 2+3 4
> > >
> > > I bet any parallel version of this will lose to the
> > > non-parallel version - there's no way the overhead cost of
> > > breaking up and re-assembling the results of this small
> > > piece of work is less than simply doing it.
> > >
> > > We talked about this at the last NYCJUG and I'm glad to see
> > > it's still a pressing topic as this will motivate me to
> > > update the wiki for February's meeting.
> > >
> > > Regards,
> > >
> > > Devon
> > >
> > > On Mon, Feb 15, 2010 at 12:33 PM, Raul Miller
> > > <[email protected]> wrote:
> > >
> > > > On Mon, Feb 15, 2010 at 11:00 AM, bill lam
> > > > <[email protected]> wrote:
> > > > > Apart from the startup time and memory cost, the biggest
> > > > > problems are the need for the programmer to tailor the
> > > > > algorithm for each of its applications, synchronisation
> > > > > of sessions, and the memory bandwidth it takes to
> > > > > transfer data between sessions. OTOH the low-level
> > > > > solution is transparent; J programmers will not even need
> > > > > to be aware of its existence. Henry Rich had also
> > > > > suggested this approach, if memory serves me.
> > > >
> > > > One problem with the low-level approach seems to be that,
> > > > so far, no one wants to fund the investigative costs
> > > > associated with this change.
> > > >
> > > > To my knowledge no one has even shown a "proof of concept"
> > > > implementation for any primitive.
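
Matthew's size-threshold idea above (pick serial or parallel from the iteration count, the way i. picks an algorithm) can be sketched as follows. The names and the cutoff are hypothetical illustrations, not anything J implements:

```python
# Hypothetical sketch: dispatch elementwise "+" on argument size,
# as an interpreter might choose serial vs. parallel execution.
from concurrent.futures import ThreadPoolExecutor

PARALLEL_THRESHOLD = 100_000  # hypothetical cutoff, not J's

def add_serial(xs, ys):
    return [x + y for x, y in zip(xs, ys)]

def add_parallel(xs, ys, workers=4):
    # Split into chunks, add each chunk in a worker, reassemble.
    n = len(xs)
    step = (n + workers - 1) // workers
    chunks = [(xs[i:i + step], ys[i:i + step])
              for i in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda c: add_serial(*c), chunks)
    return [v for part in parts for v in part]

def add(xs, ys):
    # The size check is the whole point: small arguments like
    # 1 2 + 3 4 never pay the thread-pool overhead.
    if len(xs) < PARALLEL_THRESHOLD:
        return add_serial(xs, ys)
    return add_parallel(xs, ys)

print(add([1, 2], [3, 4]))  # small: serial path -> [4, 6]
```

Note that this sketch only illustrates the dispatch; in CPython the threads would not actually speed up the arithmetic, which rather underscores Devon's point that the overhead of splitting and reassembling can easily swamp the work.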
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
