Excessively enthusiastic refactoring. I expect to rewrite most of this as
part of the work I'm starting now for GCC13 stage1.

On Wed, Feb 9, 2022 at 2:43 AM Jonathan Wakely <jwak...@redhat.com> wrote:

> On Wed, 9 Feb 2022 at 00:57, Thomas Rodgers via Libstdc++
> <libstd...@gcc.gnu.org> wrote:
> >
> > This issue was observed as a deadlock in
> > 29_atomics/atomic/wait_notify/100334.cc on vxworks. When a wait is
> > "laundered" (e.g. type T* does not suffice as a waitable address for the
> > platform's native waiting primitive), the address waited on is that of
> > the _M_ver member of __waiter_pool_base, so several threads may wait on
> > the same address for unrelated atomic<T>s. As noted in the PR, the
> > implementation correctly exits the wait for the thread whose data
> > changed, but not for any other threads waiting on the same address.
> >
> > As noted in the PR, the __waiter::_M_do_wait_v member was correctly
> > exiting, but the other waiters were not reloading the value of _M_ver
> > before re-entering the wait.
> >
> > Moving the spin call inside the loop accomplishes this, and is
> > consistent with the predicate-accepting version of __waiter::_M_do_wait.
>
> There is a change to the memory order in _S_do_spin_v which is not
> described in the commit msg or the changelog. Is that unintentional?
>
> (Aside: why do we even have _S_do_spin_v? It's called in exactly one
> place, so it could just be inlined into _M_do_spin_v, couldn't it?)
>
>
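
The fix described above amounts to reloading the shared counter on every
pass through the wait loop. For anyone skimming the thread, here is a
minimal standalone sketch of that loop shape - not the actual libstdc++
internals; ver, platform_wait, and wait_for_change are illustrative
stand-ins for _M_ver, the platform's native wait primitive, and
__waiter::_M_do_wait_v:

#include <atomic>

std::atomic<unsigned> ver{0};  // shared generation counter, like _M_ver

// Stand-in for the platform's native wait (e.g. a futex): blocks while
// *addr still holds 'old'. Here C++20 atomic wait serves as the primitive.
void platform_wait(std::atomic<unsigned>* addr, unsigned old)
{ addr->wait(old, std::memory_order_acquire); }

void wait_for_change(unsigned old)
{
  for (;;)
    {
      // Reload the counter *inside* the loop: a waiter woken by a
      // notification meant for an unrelated atomic re-checks the
      // current value instead of re-waiting on a stale snapshot.
      unsigned cur = ver.load(std::memory_order_acquire);
      if (cur != old)
        return;                   // the counter advanced; this wait is done
      platform_wait(&ver, cur);   // may wake for another thread's data
    }
}

Each spurious wakeup falls back into the reload-and-recheck step, which
is what moving the spin call inside the loop achieves in the patch.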
