Miller, Raul D wrote:
Skip Cave wrote:
For example, the function "parallel" could be used to place the function in its
right argument onto a specific processor, and then continue execution with the next
statement in the script. So the script:
parallel A
parallel B
parallel C
would result in the functions A, B, and C being run on separate threads, with
each thread placed on a separate processor.
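The proposed "parallel" semantics (start each function, return immediately, keep
executing the script) can be sketched in Python with OS threads. This is only an
analogy, not J: the functions A, B, and C and the results dictionary are
hypothetical stand-ins, and actual processor placement is left to the OS scheduler.

```python
import threading

# Hypothetical stand-ins for the J functions A, B, and C.
results = {}

def A(): results['A'] = 1
def B(): results['B'] = 2
def C(): results['C'] = 3

# "parallel A", "parallel B", "parallel C": start each function on its own
# thread and fall through to the next statement without waiting.
threads = [threading.Thread(target=f) for f in (A, B, C)]
for t in threads:
    t.start()          # script execution continues immediately

# ... later, join only at the point where the results are actually needed.
for t in threads:
    t.join()

print(sorted(results))   # ['A', 'B', 'C']
```

Note that which processor each thread lands on is the runtime's decision here;
the post's proposal of explicit per-processor placement would need OS-level
affinity control on top of this.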
Raul said:
But what does that mean?
Let's say that we've got several arrays, and the functions A, B and C
correspond to the three blocks generated by:
var=: ([EMAIL PROTECTED] { ])bind'abcdefghijklmnopqrstuvwxyz'"0 bind i.
op=: ([EMAIL PROTECTED] { ])bind'+*%,-'"0 bind i.
exp=: var,.'=',.':',.var,.op,.var
exp"0(4 4 4)
What happens when the same variable is referenced in
multiple threads?
Skip says:
I believe that you are proposing the test script:
A =: var=: ([EMAIL PROTECTED] { ])bind'abcdefghijklmnopqrstuvwxyz'"0 bind i.
B =: op=: ([EMAIL PROTECTED] { ])bind'+*%,-'"0 bind i.
C =: exp=: var,.'=',.':',.var,.op,.var
parallel A
parallel B
parallel C
exp"0(4 4 4)
The priority of execution of the parallel functions is defined by the
order of the parallel commands in the script. If two or more of the parallel
functions access the same variable, the first parallel function started
in the script takes precedence for that variable, so any other parallel
function that touches it will block until the first has completed. So "op"
will block until "var" is done, and "exp" will block until "op" is done.
This particular set of functions would therefore not show much parallelism,
because of the data dependence. That doesn't mean the parallel mechanism is
bad, just that the example you picked doesn't lend itself to parallel
execution. It could possibly be rewritten for more parallelism, however.
Parallelism, by its nature, doesn't deal well with multiple processes
mucking with the same data at the same time. Any "parallel" construct in
the language would have to prevent that by blocking access to data
being modified by a higher-priority process, whenever such access is attempted.
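The blocking rule described above (op waits for var, exp waits for op, regardless
of the order the threads actually start) can be sketched with threading.Event
objects standing in for the interpreter's dependency tracking. The function names
and the use of events are my illustration, not anything from the post.

```python
import threading

# Events stand in for the interpreter noticing "this function reads a
# variable that an earlier parallel function writes".
var_done = threading.Event()
op_done = threading.Event()
order = []

def run_var():
    order.append('var')        # writes the shared variable "var"
    var_done.set()

def run_op():
    var_done.wait()            # "op" blocks until "var" is done
    order.append('op')
    op_done.set()

def run_exp():
    op_done.wait()             # "exp" blocks until "op" is done
    order.append('exp')

# Deliberately start the threads in the *wrong* order to show that the
# dependency chain, not the start order, determines execution order.
threads = [threading.Thread(target=f) for f in (run_exp, run_op, run_var)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(order)   # ['var', 'op', 'exp']
```

As the post notes, with a dependence chain like this the three "parallel"
functions end up running serially anyway.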
Random results and errors? Modify-in-place gets replaced
by a slower technique?
Not sure what you mean here. Perhaps an example would help me understand.
(Would it be acceptable to make J substantially slower in contexts where it's currently fast so that it can run multiple threads?)
No. The interpreter should be designed to attempt parallelization only
when it will significantly speed up the execution of a specific primitive.
The initial test for whether parallelization is appropriate should be
trivial enough to have no significant impact on non-parallel execution. So
a "parallel J" should be no slower than a non-parallel J in any case,
and much faster in most cases.
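The dispatch rule Skip describes (a trivial up-front check, then a serial fast
path or a parallel path) can be sketched as follows. The threshold value, the
chunking strategy, and the addition primitive are all illustrative assumptions,
not anything specified in the thread.

```python
from concurrent.futures import ThreadPoolExecutor

# Assumed tuning constant: below this operand size, parallel overhead
# would swamp any gain, so stay on the serial path.
PARALLEL_THRESHOLD = 100_000

def add_serial(x, y):
    return [a + b for a, b in zip(x, y)]

def add_parallel(x, y, workers=4):
    # Split the operands into contiguous chunks and sum them concurrently.
    step = (len(x) + workers - 1) // workers
    chunks = [(x[i:i + step], y[i:i + step]) for i in range(0, len(x), step)]
    with ThreadPoolExecutor(workers) as pool:
        parts = pool.map(lambda c: add_serial(*c), chunks)
    return [v for part in parts for v in part]

def add(x, y):
    # The only cost imposed on the non-parallel path is this one length test,
    # so small arguments run exactly as fast as in a "non-parallel" build.
    if len(x) < PARALLEL_THRESHOLD:
        return add_serial(x, y)
    return add_parallel(x, y)

print(add([1, 2, 3], [10, 20, 30]))   # [11, 22, 33]
```

The design point is that the check itself is O(1), so the serial case pays
essentially nothing for the existence of the parallel machinery.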
Skip
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm