Changes from v1:
- Got rid of the signal_pending check in wakeup fastpath. (patch 2)
- Added READ_ONCE/ACCESS_ONCE annotations to queue.status (we are obviously
  concerned about lockless accesses upon unrelated events, even if the
  structure is on the stack).
- Got rid of the wake_q initialization and the wake_up_q() call in the
  perform_atomic_semop() error return path. (patch 2)
- Documented ordering between wake_q_add and setting ->status.
- What I did not do was refactor the checks in perform_atomic_semop[_slow],
  as I could not find a decent/clean way of doing it without adding more
  unnecessary code. If we wanted to do smart scans of the semops we received
  from userspace, this would obviously still need to be done under sem_lock
  for the semval values. So I've left it as is: we mainly duplicate the
  function, but I still believe this is the most straightforward way of
  dealing with this situation. (patch 3)
- Replaced SEMOP_FAST with BITS_PER_LONG, as this is really the bound we
  want for the duplicate scanning.
- More testing.
- Added Manfred's ack (patch 5).


Here are a few updates around the semop syscall handling that I noticed while
reviewing Manfred's simple vs complex ops fixes. Changes are on top of -next,
which means that Manfred's pending patches to ipc/sem.c that remove the 
barrier(s) would probably have to be rebased.

The patchset has survived the following test cases:
- ltp
- ipcsemtest
- ipcscale

Details are in each individual patch. Please consider for v4.9.


Davidlohr Bueso (5):
  ipc/sem: do not call wake_sem_queue_do() prematurely
  ipc/sem: rework task wakeups
  ipc/sem: optimize perform_atomic_semop()
  ipc/sem: explicitly inline check_restart
  ipc/sem: use proper list api for pending_list wakeups

 ipc/sem.c | 415 ++++++++++++++++++++++++++++++--------------------------------
 1 file changed, 199 insertions(+), 216 deletions(-)

