On 10/11/19 8:15 PM, Sebastian Andrzej Siewior wrote:
On 2019-10-11 00:33:18 [+0200], Uladzislau Rezki (Sony) wrote:
Get rid of preempt_disable() and preempt_enable() when the
preload is done for splitting purposes. The reason is that
calling spin_lock() with preemption disabled is forbidden in
a CONFIG_PREEMPT_RT kernel.

Therefore, we no longer guarantee that a CPU is preloaded;
instead, with this change we minimize the cases when it is not.

For example, I ran a special test case that follows the preload
pattern and path. 20 "unbind" threads ran it and each did
1000000 allocations. On average, a CPU was not preloaded only
3.5 times per 1000000 allocations. So it can happen, but the
number is negligible.
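For reference, the resulting pattern looks roughly like the sketch
below (trimmed from the alloc_vmap_area() path in mm/vmalloc.c for
illustration, so context and error handling are omitted): the preload
allocation happens in preemptible context, and only after the spinlock
is taken do we check whether this CPU was preloaded in the meantime.

	struct vmap_area *pva;

	/*
	 * Preload this CPU with one extra vmap_area object, used when
	 * a free area has to be split (NE_FIT_TYPE). Preemption is not
	 * disabled, so the task may migrate before the lock is taken
	 * and the preload may land on a different CPU. That only means
	 * the preload is occasionally "wasted"; correctness does not
	 * depend on it.
	 */
	pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);

	spin_lock(&vmap_area_lock);

	/*
	 * The cmpxchg is done with the lock held: if another task
	 * already preloaded this CPU, free the extra object instead
	 * of leaking it.
	 */
	if (pva && __this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva))
		kmem_cache_free(vmap_area_cachep, pva);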

V1 -> V2:
   - move the __this_cpu_cmpxchg check to after spin_lock is taken,
     as proposed by Andrew Morton
   - add more explanation in regard to preloading
   - adjust and move some comments

Fixes: 82dd23e84be3 ("mm/vmalloc.c: preload a CPU with one object for split purpose")
Reviewed-by: Steven Rostedt (VMware) <[email protected]>
Signed-off-by: Uladzislau Rezki (Sony) <[email protected]>

Acked-by: Sebastian Andrzej Siewior <[email protected]>

Acked-by: Daniel Wagner <[email protected]>
