Joseph,

I also noticed that the MPI_Info key "alloc_shared_noncontig" is unused.
I do not know whether it is necessary, but if you do want to use it,
it should be passed once, to MPI_Win_create_dynamic.
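Something like this (just a sketch; I have not checked whether the key
has any effect on a dynamic window in Open MPI):

MPI_Info info;
MPI_Win  win;

MPI_Info_create(&info);
/* the hint must be set before the window is created */
MPI_Info_set(info, "alloc_shared_noncontig", "true");
MPI_Win_create_dynamic(info, MPI_COMM_WORLD, &win);
MPI_Info_free(&info); /* the window keeps its own copy of the hints */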

Cheers,

Gilles

On Thursday, August 25, 2016, Gilles Gouaillardet <
gilles.gouaillar...@gmail.com> wrote:

> Joseph,
>
> at first glance, there is memory corruption (!):
> the first printf should read 0 -> 100, not 0 -> 3200.
>
> this is very odd, because nelems is const, so the compiler might not
> even allocate this variable.
>
> I also noted some counterintuitive things in your test program
> (which still looks valid to me)
>
> neighbor = (rank + 1) / size;
> should it be
> neighbor = (rank + 1) % size;
> instead ?
>
> the first loop is
> for (elem = 0; elem < nelems - 1; elem++) ...
> it could be
> for (elem = 0; elem < nelems; elem++) ...
> so that all nelems elements are covered.
>
> the second loop uses disp_set, and I guess you meant to use disp_set2.
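>
> putting those three suspected fixes together, the relevant lines would
> look like this (a sketch based on my reading of your program, using
> your variable names):
>
> neighbor = (rank + 1) % size;              /* modulo, so the last rank wraps to 0 */
> for (elem = 0; elem < nelems; elem++) ...  /* covers all nelems elements */
> /* and in the second loop, index with disp_set2 instead of disp_set */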
>
> I will try to reproduce this crash.
> which compiler (vendor and version) are you using ?
> which compiler options do you pass to mpicc ?
>
>
> Cheers,
>
> Gilles
>
> On Thursday, August 25, 2016, Joseph Schuchart <schuch...@hlrs.de> wrote:
>
>> All,
>>
>> It seems there is a regression in the handling of dynamic windows between
>> Open MPI 1.10.3 and 2.0.0. I am attaching a test case that works fine with
>> Open MPI 1.10.3 but fails with version 2.0.0, producing the following output:
>>
>> ===
>> [0] MPI_Get 0 -> 3200 on first memory region
>> [cl3fr1:7342] *** An error occurred in MPI_Get
>> [cl3fr1:7342] *** reported by process [908197889,0]
>> [cl3fr1:7342] *** on win rdma window 3
>> [cl3fr1:7342] *** MPI_ERR_RMA_RANGE: invalid RMA address range
>> [cl3fr1:7342] *** MPI_ERRORS_ARE_FATAL (processes in this win will now
>> abort,
>> [cl3fr1:7342] ***    and potentially your MPI job)
>> ===
>>
>> Expected output is:
>> ===
>> [0] MPI_Get 0 -> 100 on first memory region:
>> [0] Done.
>> [0] MPI_Get 0 -> 100 on second memory region:
>> [0] Done.
>> ===
>>
>> The code allocates a dynamic window and attaches two memory regions to it
>> before accessing both memory regions using MPI_Get. With Open MPI 2.0.0,
>> access to both memory regions fails. Access to the first memory region
>> succeeds only if the second memory region is not attached. With Open
>> MPI 1.10.3, all MPI operations succeed.
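>>
>> For reference, the access pattern boils down to the following condensed
>> sketch (not the attached test case itself; buffer sizes and names are
>> illustrative):
>>
>> #include <mpi.h>
>>
>> int main(int argc, char **argv) {
>>     int rank, size, neighbor, prev;
>>     int buf1[100], buf2[100], result[100];
>>     MPI_Aint disp1, disp2, tdisp1, tdisp2;
>>     MPI_Win win;
>>
>>     MPI_Init(&argc, &argv);
>>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>     MPI_Comm_size(MPI_COMM_WORLD, &size);
>>     neighbor = (rank + 1) % size;
>>     prev = (rank + size - 1) % size;
>>
>>     /* dynamic window with two attached memory regions */
>>     MPI_Win_create_dynamic(MPI_INFO_NULL, MPI_COMM_WORLD, &win);
>>     MPI_Win_attach(win, buf1, sizeof(buf1));
>>     MPI_Win_attach(win, buf2, sizeof(buf2));
>>     MPI_Get_address(buf1, &disp1);
>>     MPI_Get_address(buf2, &disp2);
>>
>>     /* each rank passes its displacements to the rank that reads from it */
>>     MPI_Sendrecv(&disp1, 1, MPI_AINT, prev, 0, &tdisp1, 1, MPI_AINT,
>>                  neighbor, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
>>     MPI_Sendrecv(&disp2, 1, MPI_AINT, prev, 0, &tdisp2, 1, MPI_AINT,
>>                  neighbor, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
>>
>>     MPI_Win_lock(MPI_LOCK_SHARED, neighbor, 0, win);
>>     MPI_Get(result, 100, MPI_INT, neighbor, tdisp1,
>>             100, MPI_INT, win); /* first region: fails with 2.0.0 */
>>     MPI_Win_unlock(neighbor, win);
>>
>>     MPI_Win_lock(MPI_LOCK_SHARED, neighbor, 0, win);
>>     MPI_Get(result, 100, MPI_INT, neighbor, tdisp2,
>>             100, MPI_INT, win); /* second region */
>>     MPI_Win_unlock(neighbor, win);
>>
>>     MPI_Win_detach(win, buf1);
>>     MPI_Win_detach(win, buf2);
>>     MPI_Win_free(&win);
>>     MPI_Finalize();
>>     return 0;
>> }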
>>
>> Please let me know if you need any additional information or think that
>> my code example is not standard compliant.
>>
>> Best regards
>> Joseph
>>
>>
>> --
>> Dipl.-Inf. Joseph Schuchart
>> High Performance Computing Center Stuttgart (HLRS)
>> Nobelstr. 19
>> D-70569 Stuttgart
>>
>> Tel.: +49(0)711-68565890
>> Fax: +49(0)711-6856832
>> E-Mail: schuch...@hlrs.de
>>
>>
_______________________________________________
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users
