https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106183

            Bug ID: 106183
           Summary: std::atomic::wait might deadlock on platforms without
                    platform_wait()
           Product: gcc
           Version: unknown
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: libstdc++
          Assignee: unassigned at gcc dot gnu.org
          Reporter: lewissbaker.opensource at gmail dot com
  Target Milestone: ---

I have been doing some research on implementations of std::atomic::notify_all()
and std::atomic::wait() as part of a C++ paper I've been working on.

I've recently been studying the libstdc++ implementation and I think I have
discovered a potential bug in the implementation for platforms that do not have
_GLIBCXX_HAVE_PLATFORM_WAIT defined (i.e. that don't have a futex syscall or
similar) and for std::atomic<T> types where T is different from
__platform_wait_t.

I believe there is potential for a thread calling x.wait(old) to fail to be
unblocked by a call by another thread to x.notify_all() after modifying the
value to something other than 'old'.

I have reduced the current implementation of the std::atomic<T>::wait() and
std::atomic<T>::notify_all() functions and I believe the code currently in
trunk to be effectively equivalent to the following code-snippet:

------
using __platform_wait_t = std::uint64_t;

struct __waiter_pool {
  std::atomic<__platform_wait_t> _M_wait{0};
  std::mutex _M_mtx;
  std::atomic<__platform_wait_t> _M_ver{0};
  std::condition_variable _M_cv;

  static __waiter_pool& _S_for(void* __addr) noexcept {
    constexpr uintptr_t __count = 16;
    static __waiter_pool __x[__count];
    uintptr_t __key = (((uintptr_t)__addr) >> 2) % __count;
    return __x[__key];
  }
};

template<typename _Tp>
bool __atomic_compare(const _Tp& __a, const _Tp& __b) noexcept {
  return std::memcmp(std::addressof(__a), std::addressof(__b), sizeof(_Tp)) == 0;
}

template<typename T>
void atomic<T>::wait(T __old, memory_order __mo = memory_order_seq_cst) noexcept {
  __waiter_pool& __w = __waiter_pool::_S_for(this);
  __w._M_wait.fetch_add(1, std::memory_order_seq_cst);
  do {
    __platform_wait_t __val1 = __w._M_ver.load(std::memory_order_acquire);
    if (!__atomic_compare(__old, this->load(__mo))) {
        break;
    }

    __platform_wait_t __val2 = __w._M_ver.load(std::memory_order_seq_cst);
    // <---- BUG: problem if notify_all() is executed at this point
    if (__val2 == __val1) {
        unique_lock<mutex> __lk(__w._M_mtx);
        __w._M_cv.wait(__lk);
    }

  } while (__atomic_compare(__old, this->load(__mo)));

  __w._M_wait.fetch_sub(1, std::memory_order_release);
}

template<typename T>
void atomic<T>::notify_all() noexcept {
    __waiter_pool& __w = __waiter_pool::_S_for(this);
    __w._M_ver.fetch_add(1, memory_order_seq_cst);
    if (__w._M_wait.load(memory_order_seq_cst) != 0) {
        __w._M_cv.notify_all();
    }
}
-------

The wait() method reads the _M_ver value, then checks whether the value being
waited on has changed, and if it has not, reads the _M_ver value again. If the
two values read from _M_ver are the same then we can infer that there has been
no intervening call to notify_all().

However, only after checking that _M_ver has not changed does it proceed to
acquire the lock on the _M_mtx mutex and then wait on the _M_cv condition
variable.

The problem occurs if the waiting thread happens to be delayed between reading
_M_ver for the second time and blocking inside the call to _M_cv.wait()
(indicated with a comment above). In that window, another thread that was
supposed to unblock this thread can modify the atomic value and call
notify_all(), which increments _M_ver and calls _M_cv.notify_all(), all before
the waiting thread acquires the mutex and blocks on the condition variable.

If this happens and no other thread subsequently calls notify_all() on the
atomic variable, the call to wait() can block forever, having missed its
wake-up call.

The solution here is to do more work while holding the mutex.

I haven't fully verified the correctness of the following code, but I think it
should avoid the missed-wake-up situations that are possible in the current
implementation. It does come at a higher synchronisation cost, however, as
notifying threads also need to acquire the mutex.

-------------

template<typename T>
void atomic<T>::wait(T __old, memory_order __mo = memory_order_seq_cst) noexcept {
  __waiter_pool& __w = __waiter_pool::_S_for(this);
  __w._M_wait.fetch_add(1, std::memory_order_seq_cst);
  do {
    __platform_wait_t __val1 = __w._M_ver.load(std::memory_order_acquire);
    if (!__atomic_compare(__old, this->load(__mo))) {
        break;
    }

    __platform_wait_t __val2 = __w._M_ver.load(std::memory_order_seq_cst);
    if (__val2 == __val1) {
        unique_lock<mutex> __lk(__w._M_mtx);
        // read again under protection of the lock
        __val2 = __w._M_ver.load(std::memory_order_seq_cst);
        if (__val2 == __val1) {
          __w._M_cv.wait(__lk);
        }
    }
  } while (__atomic_compare(__old, this->load(__mo)));

  __w._M_wait.fetch_sub(1, std::memory_order_release);
}

template<typename T>
void atomic<T>::notify_all() noexcept {
    __waiter_pool& __w = __waiter_pool::_S_for(this);
    if (__w._M_wait.load(memory_order_seq_cst) != 0) {
      // need to increment _M_ver while holding the lock
      {
        lock_guard<mutex> __lk{__w._M_mtx};
        __w._M_ver.fetch_add(1, memory_order_seq_cst);
      }
      __w._M_cv.notify_all();
    }
}
-------------
