Ziaul -

You're right, I totally misread the code, sorry about that.  What version
of Open MPI are you using and over what network?

As an aside, there's no point in using MPI_ALLOC_MEM for the displacement
arrays.  MPI_ALLOC_MEM is really only advantageous for allocating
communication buffers.  Since you know the displacement arrays aren't
going to be used for communication, you're just tying up (potentially
scarce) network resources by using MPI_ALLOC_MEM there.
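
Concretely, plain malloc/calloc/free is all those scratch arrays need.
Here's a minimal sketch (a hypothetical helper, alloc_scratch, mirroring
the allocation section of your data_transfer; the MPI calls stay exactly
as they are):

#include <stdlib.h>

/* Sketch only: the index/flag scratch arrays are never handed to a
 * communication call, so ordinary heap memory is fine.  Keep MPI_Alloc_mem
 * for buffers MPI actually transfers, such as the window memory and the
 * origin data buffer. */
static int alloc_scratch(int size,
                         int **source_disp, int **target_disp, int **flag)
{
    *source_disp = malloc(size * sizeof(int));
    *target_disp = malloc(size * sizeof(int));
    *flag        = calloc(size, sizeof(int));   /* zeroed, replaces memset */
    return (*source_disp && *target_disp && *flag) ? 0 : -1;
}

Then free(source_disp), free(target_disp) and free(flag) at the end of
data_transfer() instead of MPI_Free_mem().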

Brian

On 6/6/12 11:24 AM, "Ziaul Haque Olive" <mzh.ol...@gmail.com> wrote:

>Hello Brian,
>
>Actually, I am not modifying the local communication buffer that contains
>the data. I am modifying the buffer that contains the indices into the
>data buffer (source_disp and target_disp).
>
>In MPICH2 this is not a problem; I am not sure about Open MPI.
>
>Thanks,
>Ziaul
>
>On Wed, Jun 6, 2012 at 1:05 PM, Barrett, Brian W <bwba...@sandia.gov>
>wrote:
>
>Ziaul -
>
>Your program is erroneous; you cannot modify the local communication
>buffer of an MPI_ACCUMULATE call until after the next synchronization call
>(Section 11.3 of MPI 2.2).  In your example, that would be after the
>MPI_WIN_FENCE call following the call to MPI_ACCUMULATE.
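>
>To make the rule concrete, here is a minimal, self-contained sketch (a
>hypothetical example with made-up buffer names, not your code): the origin
>buffer passed to MPI_ACCUMULATE may only be reused after the fence that
>closes the epoch.
>
>#include <mpi.h>
>
>int main(int argc, char **argv)
>{
>    int win_buf[4] = {0, 0, 0, 0};   /* window (target) memory */
>    int data[4]    = {1, 2, 3, 4};   /* local origin buffer */
>    int rank, nprocs;
>    MPI_Win win;
>
>    MPI_Init(&argc, &argv);
>    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
>
>    MPI_Win_create(win_buf, 4 * sizeof(int), sizeof(int), MPI_INFO_NULL,
>                   MPI_COMM_WORLD, &win);
>
>    MPI_Win_fence(MPI_MODE_NOPRECEDE, win);
>    MPI_Accumulate(data, 4, MPI_INT, (rank + 1) % nprocs, 0, 4, MPI_INT,
>                   MPI_SUM, win);
>    /* 'data' must NOT be modified here; the accumulate may still read it. */
>    MPI_Win_fence(MPI_MODE_NOSUCCEED, win);
>    data[0] = 0;   /* only now is it safe to reuse the origin buffer */
>
>    MPI_Win_free(&win);
>    MPI_Finalize();
>    return 0;
>}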
>
>Brian
>
>On 6/6/12 9:44 AM, "Ziaul Haque Olive" <mzh.ol...@gmail.com> wrote:
>
>>Hello,
>>
>>I am not sure if my code is correct according to Open MPI (v1.6). The
>>code is given below. I am doing MPI one-sided communication inside a
>>function, data_transfer, which is called inside a fence epoch. Inside
>>data_transfer, I allocate memory for non-contiguous data, create a
>>derived datatype, use this datatype in MPI_Accumulate, and after calling
>>MPI_Accumulate, free the indexed datatype and also free the memory
>>containing the indices for the indexed datatype. Is it okay that I free
>>the memory for the derived datatype before the closing fence?
>>
>>I am getting a segmentation fault with this code. If I comment out the
>>MPI_Accumulate call, no seg-fault occurs.
>>
>>
>>
>>void data_transfer(void *data, int *sources_disp, int *targets_disp,
>>                   int *target, int size, int *blength, int func,
>>                   MPI_Op op, MPI_Win win, MPI_Datatype dtype){
>>
>>    int i,j, index;
>>    int tmp_target;
>>    int *flag;          /* marks entries already handled */
>>    int *source_disp;   /* displacements grouped per target rank */
>>    int *target_disp;
>>    MPI_Datatype source_type, target_type;
>>
>>
>>    MPI_Alloc_mem( size*sizeof(int), MPI_INFO_NULL, &source_disp);
>>    MPI_Alloc_mem( size*sizeof(int), MPI_INFO_NULL, &target_disp);
>>    MPI_Alloc_mem( size*sizeof(int), MPI_INFO_NULL, &flag );
>>
>>    memset(flag, 0, size*sizeof(int));
>>
>>    /* Group all remaining entries that go to the same target rank. */
>>    for(i=0;i<size;i++){
>>        if(flag[i]==0){
>>            tmp_target = target[i];
>>
>>            index = 0;
>>            for(j=i; j<size; j++){
>>                if(flag[j]==0 && tmp_target == target[j] ){
>>                    source_disp[index] = sources_disp[j];
>>                    target_disp[index] = targets_disp[j];
>>                    //printf("src, target disp %d  %d\n", j, disp[j]);
>>                    index++;
>>                    flag[j] = 1;
>>                 }
>>            }
>>
>>            MPI_Type_indexed(index, blength, source_disp, dtype,
>>                             &source_type);
>>            MPI_Type_commit(&source_type);
>>            MPI_Type_indexed(index, blength, target_disp, dtype,
>>                             &target_type);
>>            MPI_Type_commit(&target_type);
>>
>>
>>            MPI_Accumulate(data, 1, source_type, tmp_target, 0, 1,
>>                           target_type, op, win);
>>
>>            MPI_Type_free(&source_type);
>>            MPI_Type_free(&target_type);
>>        }
>>    }
>>    MPI_Free_mem(source_disp);
>>    MPI_Free_mem(target_disp);
>>    MPI_Free_mem(flag);
>>
>>}
>>
>>void main(){
>>    int i;
>>    while(i<N){   /* schematic: i, N and queue2_win are set up elsewhere */
>>             MPI_Win_fence(MPI_MODE_NOPRECEDE, queue2_win);
>>
>>             data_transfer();   /* arguments omitted in this sketch */
>>
>>             MPI_Win_fence(MPI_MODE_NOSUCCEED, queue2_win);
>>    }
>>}
>>
>>thanks,
>>Ziaul


-- 
  Brian W. Barrett
  Dept. 1423: Scalable System Software
  Sandia National Laboratories