On Wed, Apr 03, 2019 at 12:33:20PM -0400, Waiman Long wrote:
> static inline void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> {
> 	if (static_branch_unlikely(&use_numa_spinlock))
> 		numa_queued_spin_lock_slowpath(lock, val);
> 	else
> 		native_queued_spin_lock_slowpath(lock, val);
> }
That's horrible for the exact reason you state.
> Alternatively, we can also call numa_queued_spin_lock_slowpath() in
> native_queued_spin_lock_slowpath() if we don't want to increase the code
> size of spinlock call sites.
Yeah, I still don't much like that though; we're littering the fast path
of that slow path with all sorts of crap.