In this patchset, we introduce two optimizations to the read-write
semaphore (rwsem).  The first optimization reduces cache bouncing of the
sem->count field by pre-reading the lock status (i.e. sem->count) and
avoiding the cmpxchg where possible.
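
As an illustration, here is a minimal sketch of the idea, modeled on the
generic rwsem code in include/asm-generic/rwsem.h; it is an approximation
of the approach, not the exact patch:

/* Check-before-cmpxchg sketch; illustration only, not the exact patch. */
static inline int __down_write_trylock(struct rw_semaphore *sem)
{
	/*
	 * Pre-read the count: if the lock is not free, fail without
	 * issuing a cmpxchg that would bounce the cache line and is
	 * bound to lose anyway.
	 */
	if (sem->count != RWSEM_UNLOCKED_VALUE)
		return 0;

	return cmpxchg(&sem->count, RWSEM_UNLOCKED_VALUE,
		       RWSEM_ACTIVE_WRITE_BIAS) == RWSEM_UNLOCKED_VALUE;
}
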
The second optimization introduces optimistic spinning logic, similar to
that in the mutex code, for the writer lock acquisition of the rwsem.
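
The core of that spinning logic looks roughly like the sketch below; the
sem->owner field and the rwsem_try_write_lock_unqueued() helper are names
assumed for this illustration, not the patch itself:

/*
 * Illustrative core of the writer's optimistic spin, modeled on the
 * mutex spinning code.  sem->owner and the helper name are assumptions
 * of this sketch.
 */
static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
{
	struct task_struct *owner;

	while (!need_resched()) {
		/* NB: the real code must guard this owner de-reference
		 * (see item 2 of the v2 changelog below). */
		owner = ACCESS_ONCE(sem->owner);
		if (!owner)
			/*
			 * No writer owner: the lock is free or held by
			 * readers.  Try once to grab it, then give up
			 * spinning, since readers cannot be spun on.
			 */
			return rwsem_try_write_lock_unqueued(sem);

		/*
		 * Keep spinning only while the owning writer is actively
		 * running on a CPU; otherwise block in the slow path.
		 */
		if (!owner->on_cpu)
			break;

		arch_mutex_cpu_relax();
	}
	return false;
}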

With the two optimizations combined, Davidlohr Bueso tested aim7 workloads
on an 8-socket, 80-core system and saw improvements of alltests (+14.5%),
custom (+17%), disk (+11%), high_systime (+5%), shared (+15%) and short
(+4%), most of them past around 500 users, when i_mmap was implemented as
an rwsem.

Feedback on the effectiveness of these tweaks on other workloads would be
appreciated.  Thanks to Peter Hurley for reviewing the first version
of this patchset.

I have not made the optimistic spinning on write lock acquisition a
default option.  I would like people's opinion on whether it should be on
by default, so the option gets more testing and seasoning, while it can
still be turned off should it prove detrimental.

Changelog:

v2:
1. Reorganize the changes to down_write_trylock and do_wake into 4 patches
   and fix a bug referencing &sem->count where sem->count was intended.
2. Fix an unsafe sem->owner de-reference in rwsem_can_spin_on_owner.
3. Various patch comment updates.
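
For reference, the bug in item 1 was of the address-versus-value shape
sketched here (illustrative lines, not the actual hunk):

	if (&sem->count != RWSEM_UNLOCKED_VALUE)	/* buggy: compares a pointer */
	if (sem->count != RWSEM_UNLOCKED_VALUE)		/* intended: compares the value */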


Alex Shi (4):
  rwsem: check the lock before cmpxchg in down_write_trylock
  rwsem: remove 'out' label in do_wake
  rwsem: remove try_reader_grant label in do_wake
  rwsem/wake: check lock before doing atomic update
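
The last of these applies the same pre-read idea to the reader-grant path
of __rwsem_do_wake() in lib/rwsem.c.  A rough sketch of the pattern,
following the file's existing constants but not the exact hunk:

	/* Reader-grant loop with a pre-read of sem->count; sketch only. */
	long adjustment = RWSEM_ACTIVE_READ_BIAS;
	long oldcount;

	while (1) {
		/*
		 * If a writer has already stolen the lock, skip the
		 * atomic update (and its cache-line bounce) entirely.
		 */
		if (sem->count < RWSEM_WAITING_BIAS)
			return sem;

		oldcount = rwsem_atomic_update(adjustment, sem) - adjustment;
		if (likely(oldcount >= RWSEM_WAITING_BIAS))
			break;	/* grant succeeded; go wake the readers */

		/* Lost the race after all: undo the reader grant. */
		if (rwsem_atomic_update(-adjustment, sem) & RWSEM_ACTIVE_MASK)
			return sem;
	}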

Tim Chen (1):
  rwsem: do optimistic spinning for writer lock acquisition

 include/asm-generic/rwsem.h |    8 +-
 include/linux/rwsem.h       |    3 +
 init/Kconfig                |    9 ++
 kernel/rwsem.c              |   29 +++++++-
 lib/rwsem.c                 |  175 ++++++++++++++++++++++++++++++++++++++-----
 5 files changed, 199 insertions(+), 25 deletions(-)

-- 
1.7.4.4

