>
> Does the problem go away if
> you replace the definition of uf with the following?
> uf =. {{u t. (y;'worker') ''}}"0


It does, and in a way that is illuminating. It appears each call is
serialized onto a single thread in pool 0, but the threads compute in
parallel within the pool.
This is visible in the example below (and in watching live CPU utilization):
total time meets expectations for compute-bound parallelization. Said
differently, the ' "0 ' is running in parallel, not the ' +/ . * '.
The runtime no longer reports a cycle once the 'worker' keyword is added.

setpools 4

4 0 4

'A B' =: fmf 2 # 1024

f =. {{(20) 6!:2 'A +/ . * B'}} NB. Matrix multiplication timing

uf =. {{u t. (y;'worker') ''}}"0 NB. Run u in threadpool y

totaltime =. 6!:2 'res =. > ((f uf)@0:)"0 (96 # 0)'  NB. Run in pool 0, 96 times

(+/%#) res

0.117002

totaltime % 96 * 20

0.0293179 NB. wall time per multiply; the ~4x speedup over 0.117 shows parallelization across the pool
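
One way to confirm where each call lands (a sketch, not carefully tested;
I'm assuming 3 T.'' reports the number of the thread currently executing,
per the T. page on nuvoc):

g  =. {{ 3 T. '' [ 6!:2 'A +/ . * B' }}  NB. do the multiply, return the executing thread
gf =. {{ g t. (y;'worker') '' }}"0       NB. launch g as a 'worker' task in pool y

~. > gf 8 # 0  NB. distinct pool-0 threads used across 8 calls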


Does 'worker' mean specifically 'only one worker,' or any worker
available in that pool? If it means one worker, does it also constrain
parallelized primitives to a single worker in pool 0, as the results above
seem to imply?
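
A sketch of how I plan to check, reusing g and gf from above (and assuming
a bare boxed number in the t. option list simply names the threadpool,
with no 'worker' constraint):

gnw =. {{ g t. (<y) '' }}"0  NB. like gf, but without 'worker'

# ~. > gf  8 # 0   NB. how many distinct threads service the 'worker' tasks
# ~. > gnw 8 # 0   NB. how many distinct threads without 'worker'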


I agree with Raul's intuition on pool 0: sharing pool 0 seems to undermine
the task-to-dedicated-resource model that pooling implies, notwithstanding
OS scheduling, API limitations, and the pvk.ca article. Perhaps there is
some physical-hardware reason for it, but so far it defies intuition.


I will next explore compute-bound, parallelized primitives called from
outside pool 0.
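
Roughly along these lines (a sketch; assuming 0 T. 1 adds a worker thread
to threadpool 1):

0 T. 1                            NB. give pool 1 at least one worker thread
f1 =. {{ f t. (1;'worker') '' }}  NB. f from above, launched as a task in pool 1

(> f1 0) % f 0  NB. ratio of pool-1 task time to a direct call; near 1 would
                NB. suggest the internal +/ . * work still lands in pool 0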


John




On Sun, Jan 22, 2023 at 1:46 AM Raul Miller <[email protected]> wrote:

> Ok... if I understand correctly, instead of "parallelizable primitive"
> being executed only in pool 0, nuvoc should instead state that
> "parallelized primitives" will only be executed in pool 0.
>
> And, these primitives should be explicitly enumerated in nuvoc in that
> context. (Either in that part of that page
> (Vocabulary/tcapdot#Threadpools) or on some other page which is
> referenced from that part of that page.)
>
> I suppose I can probably figure this out for myself, so that I can
> update nuvoc. Probably I need to look for primitives whose
> implementation routine's call tree includes a routine whose
> implementation might use jtjobrun. But I'll have to think a bit about
> this issue before I could come up with a reasonably reliable approach
> for catching these calltrees. (If you could list the primitives, that
> would be great.)
>
> Thanks,
>
> --
> Raul
>
>
> On Sun, Jan 22, 2023 at 4:22 AM Elijah Stone <[email protected]> wrote:
> >
> > It's 'complicated' as usual, but in general parallelising things like +
> and *
> > is a waste of resources.  Currently parallelised are +/ .* and a few of
> the
> > 128!: ops.
> >
> > On Sun, 22 Jan 2023, Raul Miller wrote:
> >
> > > On Sun, Jan 22, 2023 at 2:46 AM Elijah Stone <[email protected]>
> wrote:
> > >> And parallelisable primitives will
> > >> always be run by threads in pool 0 (for now); never by threads in the
> pool
> > >> where they were kicked off (unless that happens to be pool 0).
> > >
> > > I see mention of this in nuvoc:
> > >
> > > https://code.jsoftware.com/wiki/Vocabulary/tcapdot#Threadpools
> > >
> > > But I do not understand which primitives are "parallelizable".
> > >
> > > Intuitively, I imagine that this would be all primitive operations
> > > (either monadic or dyadic) with a non-infinite rank and no required
> > > side effects.
> > >
> > > But expecting that all addition and multiplication happens in pool 0
> > > for a task running in pool 1 baffles me, as a design decision. If my
> > > interpretation was correct, what would be the current advantage for
> > > using this approach? Or, if my interpretation is wrong, what does
> > > "parallelizable primitive" mean?
> > >
> > > Thanks,
> > >
> > >
> > > --
> > > Raul
>
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
