"allowing us to weakly synchronize two threads" concerns me if the
synchronization is important or must be reliable. I do not understand how
volatile alone provides reliable synchronization without a mechanism to order
visible changes to memory. If the flag(s) in question are supposed to
indicate that some state has changed in this weakly synchronized scheme, then
without proper memory barriers there is no guarantee that memory changes will
be seen by the two threads in the same order they were issued. It is quite
possible that the updated state that is flagged as being "good" or "done" or
whatever will not yet be visible across multiple cores, even though the updated
flag indicator may have become visible. It seems to me this can only work if
the flag itself is the data. If the flag signals that something else has been
completed, volatile is not sufficient to guarantee the corresponding changes in
state will be visible. I have had experience with code that used volatile as a
proxy for memory barriers. I was told "it has never been a problem".
Rare events can, and do, occur. In my case, one did, after the code had run
without interruption for over 3 years. I doubt anyone had ever run the code for
such a long sample interval. We found out because we missed recording an important
earthquake a week after the race condition was tripped. Murphy's law triumphs
again. :)
Larry Baker
US Geological Survey
650-329-5608
ba...@usgs.gov
> On 12 Nov 2019, at 1:05:31 PM, George Bosilca via devel wrote:
>
> If the issue were some kind of memory consistency between threads, then
> printing that variable in the context of the debugger would show the value of
> debugger_event_active being false.
>
> volatile is not a memory barrier, it simply forces a load for each access of
> the data, allowing us to weakly synchronize two threads, as long as we don't
> expect the synchronization to be immediate.
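>
> A minimal sketch of that pattern (hypothetical names, not the actual Open
> MPI code): volatile only keeps the compiler from caching the flag in a
> register, so the waiter eventually observes the peer's store, but nothing
> bounds how soon:
>
>     static volatile int done = 0;
>
>     void wait_for_peer(void) {
>         /* Without volatile, the compiler may legally read 'done' once,
>          * keep it in a register, and spin forever. With volatile, every
>          * iteration reloads from memory -- weak synchronization, with no
>          * guarantee about when the other thread's store becomes visible. */
>         while (!done) {
>             /* yield or make progress here */
>         }
>     }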
>
> Anyway, good to see that the issue has been solved.
>
> George.
>
>
> On Tue, Nov 12, 2019 at 2:25 PM John DelSignore via devel
> <devel@lists.open-mpi.org> wrote:
> Hi Austen,
>
> Thanks for the reply. What I am seeing is consistent with your thought, in
> that when I see the hang, one or more processes did not have a flag updated.
> I don't understand how the Open MPI code works well enough to say whether it
> is a memory barrier problem or not. It almost looks like an event-delivery or
> dropped-event problem to me.
> The place in the MPI_Init() code where the MPI processes hang and the number
> of "hung" processes seem to vary from run to run. In some cases the
> processes are waiting for an event or waiting for a fence (whatever that is).
> I did the following run today, which shows that it can hang waiting for an
> event that apparently was not generated or was dropped:
>
> 1. Started TV on mpirun: totalview -args mpirun -np 4 ./cpi
> 2. Ran the mpirun process until it hit the MPIR_Breakpoint() event.
> 3. TV attached to all four of the MPI processes and left all five processes
>    stopped.
> 4. Continued all of the processes/threads and let them run freely for about
>    60 seconds. They should have run to completion in that amount of time.
> 5. Halted all of the processes. I included an aggregated backtrace of all of
>    the processes below.
> In this particular run, all four MPI processes were waiting in
> ompi_rte_wait_for_debugger() in rte_orte_module.c at line 196, which is:
>
> /* let the MPI progress engine run while we wait for debugger release
> */
> OMPI_WAIT_FOR_COMPLETION(debugger_event_active);
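>
> (For reference, a macro like that typically expands to a spin loop that
> drives the progress engine while the flag stays set; a hedged sketch, not
> the exact Open MPI definition:
>
>     #define WAIT_FOR_COMPLETION_SKETCH(flg)                  \
>         do {                                                 \
>             while ((flg)) {                                  \
>                 opal_progress(); /* drive the event loop */  \
>             }                                                \
>         } while (0)
>
> so the loop can only exit if some event handler clears the flag.)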
>
> I don't know how that is supposed to work, but I can clearly see that
> debugger_event_active was true in all of the processes, even though TV set
> MPIR_debug_gate to 1:
> d1.<> f {2.1 3.1 4.1 5.1} p debugger_event_active
> Thread 2.1:
> debugger_event_active = true (1)
> Thread 3.1:
> debugger_event_active = true (1)
> Thread 4.1:
> debugger_event_active = true (1)
> Thread 5.1:
> debugger_event_active = true (1)
> d1.<> f {2.1 3.1 4.1 5.1} p MPIR_debug_gate
> Thread 2.1:
> MPIR_debug_gate = 0x0001 (1)
> Thread 3.1:
> MPIR_debug_gate = 0x0001 (1)
> Thread 4.1:
> MPIR_debug_gate = 0x0001 (1)
> Thread 5.1:
> MPIR_debug_gate = 0x0001 (1)
> d1.<>
>
> I think the _release_fn() function in rte_orte_module.c is supposed to set
> debugger_event_active to false, but that apparently did not happen in this
> case. So, AFAICT, the reason debugger_event_active would not be set to false
> is that the event was never delivered, so the _release_fn() function was
> never called. If that's the case, then the lack of a memory barrier is
> probably a moot point, and the problem is likely related to event generation
> or dropped events.
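>
> A hedged sketch of the flow I'm inferring (hypothetical signature; the real
> _release_fn() in rte_orte_module.c takes RTE callback arguments):
>
>     #include <stdbool.h>
>
>     static volatile bool debugger_event_active = true;
>
>     /* Callback fired when the debugger-release event is delivered. */
>     static void _release_fn(void *cbdata) {
>         /* Clearing the flag lets OMPI_WAIT_FOR_COMPLETION() fall through.
>          * If the event is never delivered, this never runs and the spin
>          * loop above waits forever. */
>         debugger_event_active = false;
>     }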
> Cheers, John D.
>
> FWIW: Here's the aggregated backtrace after the whole job was allowed to run
> freely for about 60 seconds, and then stopped:
>
> d1.<> f g w -g f+l
> +/
> +__clone : 5:12[0-3.2-3, p1.2-5]
>