Hi Mikael,

> > In this example, image 1, i.e., for
> > OpenCoarrays, a thread on image 1 takes the data from the executing
> > image and writes it into the memory of image 1.
> When you say it takes data, do you mean it takes the assignment right 
> hand side (named "data"), or do you mean that it takes all required data 
> (right hand side "data" and index value initialized with the result of 
> "get_val()") from the executing image?

Both! Always keep in mind that an expression like 

res(this_image())[1] = 42

executed on image 2 manipulates the memory of process/image 1. When those
images are not running on the same machine (which is possible with MPI), the
(evaluated) index, here this_image(), and the evaluated rhs need to be
sent to image 1, as in this example. On image 1 a routine is then called
that looks like this (pseudo C, abbreviated):

void caf_accessor_res_1(struct array_integer_t *res, void *rhs,
                        struct add_data_t *add_data) {

  /* rhs arrives as raw bytes; reinterpret it as the expected type.  */
  int *int_rhs = (int *)rhs;

  /* Store the received rhs at the received, already evaluated index.  */
  res->data[add_data->this_image_val] = *int_rhs;
}
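
To make the data flow concrete, here is a minimal self-contained sketch
of how such an accessor gets driven once the transferred bytes have
arrived on image 1. The struct layouts and the main() driver are my
illustration only, not the actual OpenCoarrays ABI, and the accessor is
repeated for completeness:

#include <stdio.h>

/* Illustrative layouts, not the real OpenCoarrays structures.  */
struct array_integer_t { int data[8]; };
struct add_data_t { int this_image_val; };

void caf_accessor_res_1(struct array_integer_t *res, void *rhs,
                        struct add_data_t *add_data) {
  int *int_rhs = (int *)rhs;
  res->data[add_data->this_image_val] = *int_rhs;
}

int main(void) {
  struct array_integer_t res = {{0}};
  int rhs = 42;                  /* evaluated rhs, as received          */
  struct add_data_t add = {2};   /* evaluated this_image(), as received */

  /* On image 1 the runtime would call the accessor with the received
     buffers; the Fortran-to-C index base adjustment is elided here,
     as in the pseudo code above.  */
  caf_accessor_res_1(&res, &rhs, &add);
  printf("res slot 2 on image 1 = %d\n", res.data[2]);
  return 0;
}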

The above routine is generated by the Fortran compiler from a gfc_code
structure that models it in Fortran. I went that way to get exactly
Fortran's assignment semantics. This way, assigning res(1:N)[...] = rhs(1:N)
does not trigger N communications of single scalars; instead the vector is
sent as one block and the loop that modifies the data in res runs inside
the accessor (significantly faster).
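
A hedged sketch of what the generated accessor for such an array
assignment could look like, reusing the layouts from the sketch above
and assuming a hypothetical element count n in add_data_t:

void caf_accessor_res_2(struct array_integer_t *res, void *rhs,
                        struct add_data_t *add_data) {
  /* The whole vector arrived in one transfer...  */
  int *int_rhs = (int *)rhs;

  /* ...and the element loop runs locally on the remote image,
     instead of one communication per scalar.  */
  for (int i = 0; i < add_data->n; ++i)
    res->data[i] = int_rhs[i];
}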

This routine is executed on the remote image, here image 1. Note that it
no longer contains the coindex, because the routine itself is the
implementation of the coindexing. For brevity I left out all the
boilerplate that is implemented in OpenCoarrays.

> 
> > For caf_shmem there is no additional thread, because every image can 
> > write directly to the remote image's memory.
> > 
> > Did that clear a bit of the confusion?  
> For me it did a bit, but I wouldn't say I'm completely out of the fog.
> Obviously I should at some point take the time to understand the general 
> architecture of the coarray implementation.

Then let's continue with small but steady steps.

Regards,
        Andre
-- 
Andre Vehreschild * Email: vehre ad gmx dot de 
