On 9/10/23 04:28, guo...@kernel.org wrote:
From: Guo Ren <guo...@linux.alibaba.com>

The purpose of xchg_tail is to write the tail into the lock value, so
adding a prefetchw can help the subsequent cmpxchg step, which may
reduce the number of cmpxchg retries in xchg_tail. Some processors may
utilize this feature to provide a forward progress guarantee, e.g.,
RISC-V XuanTie processors block the snoop channel & irq for several
cycles when a prefetch.w instruction (from the Zicbop extension)
retires, which guarantees that the next cmpxchg succeeds.

Signed-off-by: Guo Ren <guo...@linux.alibaba.com>
Signed-off-by: Guo Ren <guo...@kernel.org>
---
  kernel/locking/qspinlock.c | 5 ++++-
  1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index d3f99060b60f..96b54e2ade86 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -223,7 +223,10 @@ static __always_inline void clear_pending_set_locked(struct qspinlock *lock)
   */
  static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
  {
-       u32 old, new, val = atomic_read(&lock->val);
+       u32 old, new, val;
+
+       prefetchw(&lock->val);
+       val = atomic_read(&lock->val);

        for (;;) {
                new = (val & _Q_LOCKED_PENDING_MASK) | tail;

That looks a bit weird. You prefetch and then immediately read it. How much performance gain do you get from this change alone?

Maybe you can define an arch-specific primitive that defaults back to atomic_read() if it is not defined.
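
For example, something along these lines (a rough sketch only; the atomic_read_prefetchw name is made up here for illustration, not an existing kernel API):

/* arch header, e.g. arch/riscv/include/asm/qspinlock.h (hypothetical) */
#define atomic_read_prefetchw atomic_read_prefetchw
static __always_inline u32 atomic_read_prefetchw(atomic_t *v)
{
	prefetchw(v);		/* hint exclusive ownership before the cmpxchg loop */
	return atomic_read(v);
}

/* kernel/locking/qspinlock.c */
#ifndef atomic_read_prefetchw
#define atomic_read_prefetchw(v)	atomic_read(v)	/* default: plain read, no hint */
#endif

static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
{
	u32 old, new, val = atomic_read_prefetchw(&lock->val);
	...
}

That way the generic slowpath stays unchanged for everyone else and only arches that can actually use the write hint provide an override.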

Cheers,
Longman
