On Thu, Jan 31, 2019 at 09:49:52AM +0100, Peter Zijlstra wrote:
> On Wed, Jan 30, 2019 at 01:34:19PM -0800, Alexei Starovoitov wrote:
> > On Wed, Jan 30, 2019 at 10:05:29PM +0100, Peter Zijlstra wrote:

> > > +static inline void __bpf_spin_lock(struct bpf_spin_lock *lock)
> > > +{
> > > +	atomic_t *l = (void *)lock;
> > > +	do {
> > > +		atomic_cond_read_relaxed(l, !VAL);
> > 
> > wow. that's quite some macro magic.
> 
> Yeah, C sucks for not having lambdas; this was the best we could come
> up with.
> 
> This basically allows architectures to optimize the
> wait-for-variable-to-change thing. Currently only ARM64 does that; I
> have a horrible, horrible patch that makes x86 use MONITOR/MWAIT for
> this, and I suppose POWER should use it but doesn't.

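To spell out the "magic": cond_expr is re-evaluated against a local
variable that the macro itself declares under the name VAL, which is why
the caller above can write !VAL. atomic_cond_read_relaxed(v, c) just
forwards to smp_cond_load_relaxed(&(v)->counter, (c)), and the generic
fallback in asm-generic/barrier.h looks roughly like this (a sketch of
the in-tree version around this time, not part of the patch below):

  #define smp_cond_load_relaxed(ptr, cond_expr) ({	\
	typeof(ptr) __PTR = (ptr);			\
	typeof(*ptr) VAL;				\
	for (;;) {					\
		VAL = READ_ONCE(*__PTR);		\
		if (cond_expr)				\
			break;				\
		cpu_relax();				\
	}						\
	VAL;						\
  })

Architectures can override that to wait smarter than a cpu_relax() loop;
arm64's version ends up in LDXR/WFE, so a waiter sleeps until the cache
line is actually written.
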
Nick, do you guys want something like this?

---
diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
index fbe8df433019..111984c5670d 100644
--- a/arch/powerpc/include/asm/barrier.h
+++ b/arch/powerpc/include/asm/barrier.h
@@ -99,6 +99,20 @@ do {                                                         \
 #define barrier_nospec()
 #endif /* CONFIG_PPC_BARRIER_NOSPEC */
 
+#define smp_cond_load_relaxed(ptr, cond_expr) ({               \
+       typeof(ptr) __PTR = (ptr);                              \
+       typeof(*ptr) VAL = READ_ONCE(*__PTR);                   \
+       if (unlikely(!(cond_expr))) {                           \
+               spin_begin();                                   \
+               do {                                            \
+                       spin_cpu_relax();                       \
+                       VAL = READ_ONCE(*__PTR);                \
+               } while (!(cond_expr));                         \
+               spin_end();                                     \
+       }                                                       \
+       VAL;                                                    \
+})
+
 #include <asm-generic/barrier.h>
 
 #endif /* _ASM_POWERPC_BARRIER_H */

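For completeness, a sketch of the intended usage, modelled on the
(truncated) __bpf_spin_lock quoted above -- the helper name here is
made up:

  /*
   * Hypothetical test-and-test-and-set lock: waiters spin read-only
   * in atomic_cond_read_relaxed() -- which the patch above turns into
   * spin_begin()/spin_cpu_relax()/spin_end() on POWER -- and only
   * attempt the atomic exchange once the lock word reads 0.
   */
  static inline void example_spin_lock(atomic_t *l)
  {
	do {
		atomic_cond_read_relaxed(l, !VAL);	/* wait for *l == 0 */
	} while (atomic_xchg(l, 1));			/* then try to grab it */
  }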