Well, that is embarrassing! Thank you so much for figuring this out and
providing a detailed answer (also thanks to everyone else who tried to
reproduce it). I guess I assumed some synchronization in lock_all even
though I know that it is not collective. With an additional barrier
between
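
For anyone finding this thread in the archives, here is a minimal sketch of
that kind of fix, assuming the barrier goes between the local buffer
initialization and the accumulate phase; the window setup, counts, and names
other than baseptr are assumptions for illustration, not the actual test code:

#include <mpi.h>
#include <stdint.h>

#define N 8

int main(int argc, char **argv)
{
    MPI_Win win;
    uint64_t *baseptr;
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Shared-memory window, N elements per rank; all ranks on one node. */
    MPI_Win_allocate_shared(N * sizeof(uint64_t), sizeof(uint64_t),
                            MPI_INFO_NULL, MPI_COMM_WORLD, &baseptr, &win);

    MPI_Win_lock_all(0, win);

    /* Buffer initialization: plain stores to the locally exposed memory. */
    for (int i = 0; i < N; i++)
        baseptr[i] = 1000 + i;

    /* MPI_Win_lock_all is not collective and does not synchronize, so make
       sure every rank has finished its initialization before any rank
       accumulates into a remote segment. */
    MPI_Barrier(MPI_COMM_WORLD);

    /* Accumulate into the right neighbour's first element. */
    uint64_t one = 1;
    MPI_Accumulate(&one, 1, MPI_UINT64_T, (rank + 1) % size, 0, 1,
                   MPI_UINT64_T, MPI_SUM, win);
    MPI_Win_flush_all(win);

    MPI_Win_unlock_all(win);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}

With the barrier in place, no rank starts accumulating into a remote segment
until every rank has finished initializing its own, which is the ordering
that lock_all alone does not provide.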
On 03/09/2017 03:10 PM, Steffen Christgau wrote:
>
> Since you are using
> the unified model, you can omit the proposed exclusive lock (see above)
> as well.
To be fair, you have to be cautious when doing that - even in the
unified model. See example 11.7 in the MPI-3.1 standard. In that
Hi Joseph,
in your code, you are updating the local buffer, which is also exposed
via the window, right after the lock_all call, but the stores
(baseptr[i] = 1000 + loffs++, let's call those the buffer
initialization) may overwrite the outcome of other concurrent
operations, i.e. the
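
To spell that out for the archive, the racy pattern looks roughly like the
sketch below; the window setup, counts, and the second rank's accumulate are
assumptions for illustration, only the baseptr initialization store comes
from the original test:

#include <mpi.h>
#include <stdint.h>

#define N 8

int main(int argc, char **argv)
{
    MPI_Win win;
    uint64_t *baseptr;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Shared-memory window, N elements per rank; run with 2 ranks on one node. */
    MPI_Win_allocate_shared(N * sizeof(uint64_t), sizeof(uint64_t),
                            MPI_INFO_NULL, MPI_COMM_WORLD, &baseptr, &win);

    MPI_Win_lock_all(0, win);

    if (rank == 0) {
        /* Buffer initialization: plain stores into the exposed segment. */
        for (int i = 0; i < N; i++)
            baseptr[i] = 1000 + i;
    } else if (rank == 1) {
        /* lock_all did not synchronize anything, so this accumulate can
           arrive before, during, or after rank 0's stores above; if it
           arrives first, the stores silently overwrite its result. */
        uint64_t one = 1;
        MPI_Accumulate(&one, 1, MPI_UINT64_T, 0 /* target rank */, 0, 1,
                       MPI_UINT64_T, MPI_SUM, win);
        MPI_Win_flush(0, win);
    }

    MPI_Win_unlock_all(win);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}

Putting an MPI_Barrier between the initialization and the accumulate, as in
the sketch further up the thread, removes the race.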
Best
Christoph
- Original Message -
From: "Howard Pritchard" <hpprit...@gmail.com>
To: "Open MPI Users" <users@lists.open-mpi.org>
Sent: Friday, March 3, 2017 9:02:22 PM
Subject: Re: [OMPI users] Shared Windows and MPI_Accumulate
Hello Joseph,
I'm still unable to reproduce this issue on my SLES12 x86_64 node.
Are you building with CFLAGS=-O3?
If so, could you build without CFLAGS set and see if you still see the
failure?
Howard
2017-03-02 2:34 GMT-07:00 Joseph Schuchart:
> Hi Howard,
>
> Thanks
Hi Joseph,
I built this test with cray-mpich (Cray MPI) and it passed. I also tried
with Open MPI master and the test passed. I also tried with Open MPI 2.0.2
and can't seem to reproduce the failure on my system.
Could you post the output of config.log?
Also, how intermittent is the problem?
Thanks,
Howard