Re: [PATCH 2/2] smp: introduce kick_active_cpus_sync()

2018-04-04 Thread Mark Rutland
On Wed, Apr 04, 2018 at 06:36:25AM +0300, Yury Norov wrote:
> On Tue, Apr 03, 2018 at 02:48:32PM +0100, Mark Rutland wrote:
> > On Sun, Apr 01, 2018 at 02:11:08PM +0300, Yury Norov wrote:
> > > @@ -840,8 +861,10 @@ el0_svc:
> > >   mov wsc_nr, #__NR_syscalls
> > >  el0_svc_naked:   // compat entry point
> > >   stp x0, xscno, [sp, #S_ORIG_X0] // save the original x0 and 
> > > syscall number
> > > + isb_if_eqs
> > >   enable_dbg_and_irq
> > > - ct_user_exit 1
> > > + ct_user_exit
> > 
> > I don't think this is safe. Here we issue the ISB *before* exiting a
> > quiescent state, so I think we can race with another CPU that calls
> > kick_active_cpus_sync(), e.g.
> > 
> > CPU0                            CPU1
> > 
> > ISB
> >                                 patch_some_text()
> >                                 kick_active_cpus_sync()
> > ct_user_exit
> > 
> > // not synchronized!
> > use_of_patched_text()
> > 
> > ... and therefore the ISB has no effect, which could be disastrous.
> > 
> > I believe we need the ISB *after* we transition into a non-quiescent
> > state, so that we can't possibly miss a context synchronization event.
>  
> I decided to put isb() in entry because there's a chance that there will
> be patched code prior to exiting a quiescent state.

If we do patch entry text, then I think we have no option but to use
kick_all_cpus_sync(), or we risk races similar to the above.

> But after some head-scratching, I think it's safe. I'll do as you
> suggested here.

Sounds good.

Thanks,
Mark.


Re: [PATCH 2/2] smp: introduce kick_active_cpus_sync()

2018-04-03 Thread Yury Norov
Hi Mark,

Thank you for review.

On Tue, Apr 03, 2018 at 02:48:32PM +0100, Mark Rutland wrote:
> Hi Yury,
> 
> On Sun, Apr 01, 2018 at 02:11:08PM +0300, Yury Norov wrote:
> > +/*
> > + * Flush I-cache if CPU is in extended quiescent state
> > + */
> 
> This comment is misleading. An ISB doesn't touch the I-cache; it forces
> a context synchronization event.
> 
> > +   .macro  isb_if_eqs
> > +#ifndef CONFIG_TINY_RCU
> > +   bl  rcu_is_watching
> > +   tst w0, #0xff
> > +   b.ne1f
> 
> The TST+B.NE can be a CBNZ:
> 
>   bl  rcu_is_watching
>   cbnzx0, 1f
>   isb
> 1:
> 
> > +   /* Pairs with aarch64_insn_patch_text for EQS CPUs. */
> > +   isb
> > +1:
> > +#endif
> > +   .endm
> > +
> >  el0_sync_invalid:
> > inv_entry 0, BAD_SYNC
> >  ENDPROC(el0_sync_invalid)
> > @@ -840,8 +861,10 @@ el0_svc:
> > mov wsc_nr, #__NR_syscalls
> >  el0_svc_naked: // compat entry point
> > stp x0, xscno, [sp, #S_ORIG_X0] // save the original x0 and 
> > syscall number
> > +   isb_if_eqs
> > enable_dbg_and_irq
> > -   ct_user_exit 1
> > +   ct_user_exit
> 
> I don't think this is safe. Here we issue the ISB *before* exiting a
> quiescent state, so I think we can race with another CPU that calls
> kick_active_cpus_sync(), e.g.
> 
>   CPU0                            CPU1
> 
>   ISB
>                                   patch_some_text()
>                                   kick_active_cpus_sync()
>   ct_user_exit
> 
>   // not synchronized!
>   use_of_patched_text()
> 
> ... and therefore the ISB has no effect, which could be disastrous.
> 
> I believe we need the ISB *after* we transition into a non-quiescent
> state, so that we can't possibly miss a context synchronization event.
 
I decided to put isb() in entry because there's a chance that there will
be patched code prior to exiting a quiescent state. But after some
head-scratching, I think it's safe. I'll do as you suggested here.

Thanks,
Yury


Re: [PATCH 2/2] smp: introduce kick_active_cpus_sync()

2018-04-03 Thread Mark Rutland
Hi Yury,

On Sun, Apr 01, 2018 at 02:11:08PM +0300, Yury Norov wrote:
> +/*
> + * Flush I-cache if CPU is in extended quiescent state
> + */

This comment is misleading. An ISB doesn't touch the I-cache; it forces
a context synchronization event.

> + .macro  isb_if_eqs
> +#ifndef CONFIG_TINY_RCU
> + bl  rcu_is_watching
> + tst w0, #0xff
> + b.ne1f

The TST+B.NE can be a CBNZ:

bl  rcu_is_watching
cbnzx0, 1f
isb
1:

> + /* Pairs with aarch64_insn_patch_text for EQS CPUs. */
> + isb
> +1:
> +#endif
> + .endm
> +
>  el0_sync_invalid:
>   inv_entry 0, BAD_SYNC
>  ENDPROC(el0_sync_invalid)
> @@ -840,8 +861,10 @@ el0_svc:
>   mov wsc_nr, #__NR_syscalls
>  el0_svc_naked:   // compat entry point
>   stp x0, xscno, [sp, #S_ORIG_X0] // save the original x0 and 
> syscall number
> + isb_if_eqs
>   enable_dbg_and_irq
> - ct_user_exit 1
> + ct_user_exit

I don't think this is safe. Here we issue the ISB *before* exiting a
quiescent state, so I think we can race with another CPU that calls
kick_active_cpus_sync(), e.g.

CPU0                            CPU1

ISB
                                patch_some_text()
                                kick_active_cpus_sync()
ct_user_exit

// not synchronized!
use_of_patched_text()

... and therefore the ISB has no effect, which could be disastrous.

I believe we need the ISB *after* we transition into a non-quiescent
state, so that we can't possibly miss a context synchronization event.

Thanks,
Mark.


Re: [PATCH 2/2] smp: introduce kick_active_cpus_sync()

2018-04-01 Thread Paul E. McKenney
On Sun, Apr 01, 2018 at 02:11:08PM +0300, Yury Norov wrote:
> On Tue, Mar 27, 2018 at 11:21:17AM +0100, Will Deacon wrote:
> > On Sun, Mar 25, 2018 at 08:50:04PM +0300, Yury Norov wrote:
> > > kick_all_cpus_sync() forces all CPUs to sync caches by sending broadcast 
> > > IPI.
> > > If CPU is in extended quiescent state (idle task or nohz_full userspace), 
> > > this
> > > work may be done at the exit of this state. Delaying synchronization 
> > > helps to
> > > save power if CPU is in idle state and decrease latency for real-time 
> > > tasks.
> > > 
> > > This patch introduces kick_active_cpus_sync() and uses it in mm/slab and 
> > > arm64
> > > code to delay synchronization.
> > > 
> > > For task isolation (https://lkml.org/lkml/2017/11/3/589), IPI to the CPU 
> > > running
> > > isolated task would be fatal, as it breaks isolation. The approach with 
> > > delaying
> > > of synchronization work helps to maintain isolated state.
> > > 
> > > I've tested it with test from task isolation series on ThunderX2 for more 
> > > than
> > > 10 hours (10k giga-ticks) without breaking isolation.
> > > 
> > > Signed-off-by: Yury Norov 
> > > ---
> > >  arch/arm64/kernel/insn.c |  2 +-
> > >  include/linux/smp.h  |  2 ++
> > >  kernel/smp.c | 24 
> > >  mm/slab.c|  2 +-
> > >  4 files changed, 28 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> > > index 2718a77da165..9d7c492e920e 100644
> > > --- a/arch/arm64/kernel/insn.c
> > > +++ b/arch/arm64/kernel/insn.c
> > > @@ -291,7 +291,7 @@ int __kprobes aarch64_insn_patch_text(void *addrs[], 
> > > u32 insns[], int cnt)
> > >* synchronization.
> > >*/
> > >   ret = aarch64_insn_patch_text_nosync(addrs[0], 
> > > insns[0]);
> > > - kick_all_cpus_sync();
> > > + kick_active_cpus_sync();
> > >   return ret;
> > >   }
> > >   }
> > 
> > I think this means that runtime modifications to the kernel text might not
> > be picked up by CPUs coming out of idle. Shouldn't we add an ISB on that
> > path to avoid executing stale instructions?
> > 
> > Will
> 
> commit 153ae9d5667e7baab4d48c48e8ec30fbcbd86f1e
> Author: Yury Norov 
> Date:   Sat Mar 31 15:05:23 2018 +0300
> 
> Hi Will, Paul,
> 
> On my system there are 3 paths that go through rcu_dynticks_eqs_exit(),
> and so require isb().
> 
> First path starts at gic_handle_irq() on secondary_start_kernel stack.
> gic_handle_irq() already issues isb(), so nothing needs to be done there.
> 
> Second path starts at el0_svc entry; and third path is the exit from
> do_idle() on secondary_start_kernel stack.
> 
> For the do_idle() path there is the arch_cpu_idle_exit() hook, which is not
> currently used by arm64, so I picked it. And for el0_svc, I've introduced the
> isb_if_eqs macro and call it at the beginning of el0_svc_naked.
> 
> I've tested it on a ThunderX2 machine, and it works for me.
> 
> Below are my call traces and a patch for them. If you're OK with it, I think
> I'm ready to submit v2 (but maybe split this patch for better readability).

I must defer to Will on this one.

Thanx, Paul

> Yury
> 
> [  585.412095] Call trace:
> [  585.412097] [] dump_backtrace+0x0/0x380
> [  585.412099] [] show_stack+0x14/0x20
> [  585.412101] [] dump_stack+0x98/0xbc
> [  585.412104] [] rcu_dynticks_eqs_exit+0x68/0x70
> [  585.412105] [] rcu_irq_enter+0x48/0x50
> [  585.412106] [] irq_enter+0xc/0x70
> [  585.412108] [] __handle_domain_irq+0x3c/0x120
> [  585.412109] [] gic_handle_irq+0xc4/0x180
> [  585.412110] Exception stack(0xfc001130fe20 to 0xfc001130ff60)
> [  585.412112] fe20: 00a0  0001 
> 
> [  585.412113] fe40: 028f6f0b 0020 0013cd6f53963b31 
> 
> [  585.412144] fe60: 0002 fc001130fed0 0b80 
> 3400
> [  585.412146] fe80:  0001  
> 01db
> [  585.412147] fea0: fc0008247a78 03ff86dc61f8 0014 
> fc0008fc
> [  585.412149] fec0: fc00090143e8 fc0009014000 fc0008fc94a0 
> 
> [  585.412150] fee0:  fe8f46bb1700  
> 
> [  585.412152] ff00:  fc001130ff60 fc0008085034 
> fc001130ff60
> [  585.412153] ff20: fc0008085038 00400149 fc0009014000 
> fc0008fc94a0
> [  585.412155] ff40:   fc001130ff60 
> fc0008085038
> [  585.412156] [] el1_irq+0xb0/0x124
> [  585.412158] [] arch_cpu_idle+0x10/0x18
> [  585.412159] [] do_idle+0x10c/0x1d8
> [  585.412160] [] cpu_startup_entry+0x24/0x28
> [  585.412162] [] secondary_start_kernel+0x15c/0x1a0
> [  585.412164] CPU: 1 PID: 0 

Re: [PATCH 2/2] smp: introduce kick_active_cpus_sync()

2018-04-01 Thread Yury Norov
On Tue, Mar 27, 2018 at 11:21:17AM +0100, Will Deacon wrote:
> On Sun, Mar 25, 2018 at 08:50:04PM +0300, Yury Norov wrote:
> > kick_all_cpus_sync() forces all CPUs to sync caches by sending broadcast 
> > IPI.
> > If CPU is in extended quiescent state (idle task or nohz_full userspace), 
> > this
> > work may be done at the exit of this state. Delaying synchronization helps 
> > to
> > save power if CPU is in idle state and decrease latency for real-time tasks.
> > 
> > This patch introduces kick_active_cpus_sync() and uses it in mm/slab and 
> > arm64
> > code to delay synchronization.
> > 
> > For task isolation (https://lkml.org/lkml/2017/11/3/589), IPI to the CPU 
> > running
> > isolated task would be fatal, as it breaks isolation. The approach with 
> > delaying
> > of synchronization work helps to maintain isolated state.
> > 
> > I've tested it with test from task isolation series on ThunderX2 for more 
> > than
> > 10 hours (10k giga-ticks) without breaking isolation.
> > 
> > Signed-off-by: Yury Norov 
> > ---
> >  arch/arm64/kernel/insn.c |  2 +-
> >  include/linux/smp.h  |  2 ++
> >  kernel/smp.c | 24 
> >  mm/slab.c|  2 +-
> >  4 files changed, 28 insertions(+), 2 deletions(-)
> > 
> > diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> > index 2718a77da165..9d7c492e920e 100644
> > --- a/arch/arm64/kernel/insn.c
> > +++ b/arch/arm64/kernel/insn.c
> > @@ -291,7 +291,7 @@ int __kprobes aarch64_insn_patch_text(void *addrs[], 
> > u32 insns[], int cnt)
> >  * synchronization.
> >  */
> > ret = aarch64_insn_patch_text_nosync(addrs[0], 
> > insns[0]);
> > -   kick_all_cpus_sync();
> > +   kick_active_cpus_sync();
> > return ret;
> > }
> > }
> 
> I think this means that runtime modifications to the kernel text might not
> be picked up by CPUs coming out of idle. Shouldn't we add an ISB on that
> path to avoid executing stale instructions?
> 
> Will

commit 153ae9d5667e7baab4d48c48e8ec30fbcbd86f1e
Author: Yury Norov 
Date:   Sat Mar 31 15:05:23 2018 +0300

Hi Will, Paul,

On my system there are 3 paths that go through rcu_dynticks_eqs_exit(),
and so require isb().

First path starts at gic_handle_irq() on secondary_start_kernel stack.
gic_handle_irq() already issues isb(), so nothing needs to be done there.

Second path starts at el0_svc entry; and third path is the exit from
do_idle() on secondary_start_kernel stack.

For the do_idle() path there is the arch_cpu_idle_exit() hook, which is not
currently used by arm64, so I picked it. And for el0_svc, I've introduced the
isb_if_eqs macro and call it at the beginning of el0_svc_naked.

I've tested it on a ThunderX2 machine, and it works for me.

Below are my call traces and a patch for them. If you're OK with it, I think
I'm ready to submit v2 (but maybe split this patch for better readability).

Yury

[  585.412095] Call trace:
[  585.412097] [] dump_backtrace+0x0/0x380
[  585.412099] [] show_stack+0x14/0x20
[  585.412101] [] dump_stack+0x98/0xbc
[  585.412104] [] rcu_dynticks_eqs_exit+0x68/0x70
[  585.412105] [] rcu_irq_enter+0x48/0x50
[  585.412106] [] irq_enter+0xc/0x70
[  585.412108] [] __handle_domain_irq+0x3c/0x120
[  585.412109] [] gic_handle_irq+0xc4/0x180
[  585.412110] Exception stack(0xfc001130fe20 to 0xfc001130ff60)
[  585.412112] fe20: 00a0  0001 

[  585.412113] fe40: 028f6f0b 0020 0013cd6f53963b31 

[  585.412144] fe60: 0002 fc001130fed0 0b80 
3400
[  585.412146] fe80:  0001  
01db
[  585.412147] fea0: fc0008247a78 03ff86dc61f8 0014 
fc0008fc
[  585.412149] fec0: fc00090143e8 fc0009014000 fc0008fc94a0 

[  585.412150] fee0:  fe8f46bb1700  

[  585.412152] ff00:  fc001130ff60 fc0008085034 
fc001130ff60
[  585.412153] ff20: fc0008085038 00400149 fc0009014000 
fc0008fc94a0
[  585.412155] ff40:   fc001130ff60 
fc0008085038
[  585.412156] [] el1_irq+0xb0/0x124
[  585.412158] [] arch_cpu_idle+0x10/0x18
[  585.412159] [] do_idle+0x10c/0x1d8
[  585.412160] [] cpu_startup_entry+0x24/0x28
[  585.412162] [] secondary_start_kernel+0x15c/0x1a0
[  585.412164] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 
4.14.0-isolation-160735-g59b71c1-dirty #18

[  585.412058] Call trace:
[  585.412060] [] dump_backtrace+0x0/0x380
[  585.412062] [] show_stack+0x14/0x20
[  585.412064] [] dump_stack+0x98/0xbc
[  585.412066] [] rcu_dynticks_eqs_exit+0x68/0x70
[  585.412068] [] rcu_eqs_exit.isra.23+0x64/0x80
[  585.412069] [] rcu_user_exit+0xc/0x18
[  585.412071] [] 

Re: [PATCH 2/2] smp: introduce kick_active_cpus_sync()

2018-03-28 Thread Paul E. McKenney
On Wed, Mar 28, 2018 at 04:36:05PM +0300, Yury Norov wrote:
> On Mon, Mar 26, 2018 at 05:45:55AM -0700, Paul E. McKenney wrote:
> > On Sun, Mar 25, 2018 at 11:11:54PM +0300, Yury Norov wrote:
> > > On Sun, Mar 25, 2018 at 12:23:28PM -0700, Paul E. McKenney wrote:
> > > > On Sun, Mar 25, 2018 at 08:50:04PM +0300, Yury Norov wrote:
> > > > > kick_all_cpus_sync() forces all CPUs to sync caches by sending 
> > > > > broadcast IPI.
> > > > > If CPU is in extended quiescent state (idle task or nohz_full 
> > > > > userspace), this
> > > > > work may be done at the exit of this state. Delaying synchronization 
> > > > > helps to
> > > > > save power if CPU is in idle state and decrease latency for real-time 
> > > > > tasks.
> > > > > 
> > > > > This patch introduces kick_active_cpus_sync() and uses it in mm/slab 
> > > > > and arm64
> > > > > code to delay synchronization.
> > > > > 
> > > > > For task isolation (https://lkml.org/lkml/2017/11/3/589), IPI to the 
> > > > > CPU running
> > > > > isolated task would be fatal, as it breaks isolation. The approach 
> > > > > with delaying
> > > > > of synchronization work helps to maintain isolated state.
> > > > > 
> > > > > I've tested it with test from task isolation series on ThunderX2 for 
> > > > > more than
> > > > > 10 hours (10k giga-ticks) without breaking isolation.
> > > > > 
> > > > > Signed-off-by: Yury Norov 
> > > > > ---
> > > > >  arch/arm64/kernel/insn.c |  2 +-
> > > > >  include/linux/smp.h  |  2 ++
> > > > >  kernel/smp.c | 24 
> > > > >  mm/slab.c|  2 +-
> > > > >  4 files changed, 28 insertions(+), 2 deletions(-)
> > > > > 
> > > > > diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> > > > > index 2718a77da165..9d7c492e920e 100644
> > > > > --- a/arch/arm64/kernel/insn.c
> > > > > +++ b/arch/arm64/kernel/insn.c
> > > > > @@ -291,7 +291,7 @@ int __kprobes aarch64_insn_patch_text(void 
> > > > > *addrs[], u32 insns[], int cnt)
> > > > >* synchronization.
> > > > >*/
> > > > >   ret = aarch64_insn_patch_text_nosync(addrs[0], 
> > > > > insns[0]);
> > > > > - kick_all_cpus_sync();
> > > > > + kick_active_cpus_sync();
> > > > >   return ret;
> > > > >   }
> > > > >   }
> > > > > diff --git a/include/linux/smp.h b/include/linux/smp.h
> > > > > index 9fb239e12b82..27215e22240d 100644
> > > > > --- a/include/linux/smp.h
> > > > > +++ b/include/linux/smp.h
> > > > > @@ -105,6 +105,7 @@ int smp_call_function_any(const struct cpumask 
> > > > > *mask,
> > > > > smp_call_func_t func, void *info, int wait);
> > > > > 
> > > > >  void kick_all_cpus_sync(void);
> > > > > +void kick_active_cpus_sync(void);
> > > > >  void wake_up_all_idle_cpus(void);
> > > > > 
> > > > >  /*
> > > > > @@ -161,6 +162,7 @@ smp_call_function_any(const struct cpumask *mask, 
> > > > > smp_call_func_t func,
> > > > >  }
> > > > > 
> > > > >  static inline void kick_all_cpus_sync(void) {  }
> > > > > +static inline void kick_active_cpus_sync(void) {  }
> > > > >  static inline void wake_up_all_idle_cpus(void) {  }
> > > > > 
> > > > >  #ifdef CONFIG_UP_LATE_INIT
> > > > > diff --git a/kernel/smp.c b/kernel/smp.c
> > > > > index 084c8b3a2681..0358d6673850 100644
> > > > > --- a/kernel/smp.c
> > > > > +++ b/kernel/smp.c
> > > > > @@ -724,6 +724,30 @@ void kick_all_cpus_sync(void)
> > > > >  }
> > > > >  EXPORT_SYMBOL_GPL(kick_all_cpus_sync);
> > > > > 
> > > > > +/**
> > > > > + * kick_active_cpus_sync - Force CPUs that are not in extended
> > > > > + * quiescent state (idle or nohz_full userspace) sync by sending
> > > > > + * IPI. Extended quiescent state CPUs will sync at the exit of
> > > > > + * that state.
> > > > > + */
> > > > > +void kick_active_cpus_sync(void)
> > > > > +{
> > > > > + int cpu;
> > > > > + struct cpumask kernel_cpus;
> > > > > +
> > > > > + smp_mb();
> > > > > +
> > > > > + cpumask_clear(&kernel_cpus);
> > > > > + preempt_disable();
> > > > > + for_each_online_cpu(cpu) {
> > > > > + if (!rcu_eqs_special_set(cpu))
> > > > 
> > > > If we get here, the CPU is not in a quiescent state, so we therefore
> > > > must IPI it, correct?
> > > > 
> > > > But don't you also need to define rcu_eqs_special_exit() so that RCU
> > > > can invoke it when it next leaves its quiescent state?  Or are you able
> > > > to ignore the CPU in that case?  (If you are able to ignore the CPU in
> > > > that case, I could give you a lower-cost function to get your job done.)
> > > > 
> > > > Thanx, Paul
> > > 
> > > What's actually needed for synchronization is issuing a memory barrier
> > > on target CPUs before we start executing kernel code.
> > > 
> > > smp_mb() is implicitly called in smp_call_function*() path for it. In
> > > 

Re: [PATCH 2/2] smp: introduce kick_active_cpus_sync()

2018-03-28 Thread Paul E. McKenney
On Wed, Mar 28, 2018 at 05:41:40PM +0300, Yury Norov wrote:
> On Wed, Mar 28, 2018 at 06:56:17AM -0700, Paul E. McKenney wrote:
> > On Wed, Mar 28, 2018 at 04:36:05PM +0300, Yury Norov wrote:
> > > On Mon, Mar 26, 2018 at 05:45:55AM -0700, Paul E. McKenney wrote:
> > > > On Sun, Mar 25, 2018 at 11:11:54PM +0300, Yury Norov wrote:
> > > > > On Sun, Mar 25, 2018 at 12:23:28PM -0700, Paul E. McKenney wrote:
> > > > > > On Sun, Mar 25, 2018 at 08:50:04PM +0300, Yury Norov wrote:
> > > > > > > kick_all_cpus_sync() forces all CPUs to sync caches by sending 
> > > > > > > broadcast IPI.
> > > > > > > If CPU is in extended quiescent state (idle task or nohz_full 
> > > > > > > userspace), this
> > > > > > > work may be done at the exit of this state. Delaying 
> > > > > > > synchronization helps to
> > > > > > > save power if CPU is in idle state and decrease latency for 
> > > > > > > real-time tasks.
> > > > > > > 
> > > > > > > This patch introduces kick_active_cpus_sync() and uses it in 
> > > > > > > mm/slab and arm64
> > > > > > > code to delay synchronization.
> > > > > > > 
> > > > > > > For task isolation (https://lkml.org/lkml/2017/11/3/589), IPI to 
> > > > > > > the CPU running
> > > > > > > isolated task would be fatal, as it breaks isolation. The 
> > > > > > > approach with delaying
> > > > > > > of synchronization work helps to maintain isolated state.
> > > > > > > 
> > > > > > > I've tested it with test from task isolation series on ThunderX2 
> > > > > > > for more than
> > > > > > > 10 hours (10k giga-ticks) without breaking isolation.
> > > > > > > 
> > > > > > > Signed-off-by: Yury Norov 
> > > > > > > ---
> > > > > > >  arch/arm64/kernel/insn.c |  2 +-
> > > > > > >  include/linux/smp.h  |  2 ++
> > > > > > >  kernel/smp.c | 24 
> > > > > > >  mm/slab.c|  2 +-
> > > > > > >  4 files changed, 28 insertions(+), 2 deletions(-)
> > > > > > > 
> > > > > > > diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> > > > > > > index 2718a77da165..9d7c492e920e 100644
> > > > > > > --- a/arch/arm64/kernel/insn.c
> > > > > > > +++ b/arch/arm64/kernel/insn.c
> > > > > > > @@ -291,7 +291,7 @@ int __kprobes aarch64_insn_patch_text(void 
> > > > > > > *addrs[], u32 insns[], int cnt)
> > > > > > >* synchronization.
> > > > > > >*/
> > > > > > >   ret = aarch64_insn_patch_text_nosync(addrs[0], 
> > > > > > > insns[0]);
> > > > > > > - kick_all_cpus_sync();
> > > > > > > + kick_active_cpus_sync();
> > > > > > >   return ret;
> > > > > > >   }
> > > > > > >   }
> > > > > > > diff --git a/include/linux/smp.h b/include/linux/smp.h
> > > > > > > index 9fb239e12b82..27215e22240d 100644
> > > > > > > --- a/include/linux/smp.h
> > > > > > > +++ b/include/linux/smp.h
> > > > > > > @@ -105,6 +105,7 @@ int smp_call_function_any(const struct 
> > > > > > > cpumask *mask,
> > > > > > > smp_call_func_t func, void *info, int wait);
> > > > > > > 
> > > > > > >  void kick_all_cpus_sync(void);
> > > > > > > +void kick_active_cpus_sync(void);
> > > > > > >  void wake_up_all_idle_cpus(void);
> > > > > > > 
> > > > > > >  /*
> > > > > > > @@ -161,6 +162,7 @@ smp_call_function_any(const struct cpumask 
> > > > > > > *mask, smp_call_func_t func,
> > > > > > >  }
> > > > > > > 
> > > > > > >  static inline void kick_all_cpus_sync(void) {  }
> > > > > > > +static inline void kick_active_cpus_sync(void) {  }
> > > > > > >  static inline void wake_up_all_idle_cpus(void) {  }
> > > > > > > 
> > > > > > >  #ifdef CONFIG_UP_LATE_INIT
> > > > > > > diff --git a/kernel/smp.c b/kernel/smp.c
> > > > > > > index 084c8b3a2681..0358d6673850 100644
> > > > > > > --- a/kernel/smp.c
> > > > > > > +++ b/kernel/smp.c
> > > > > > > @@ -724,6 +724,30 @@ void kick_all_cpus_sync(void)
> > > > > > >  }
> > > > > > >  EXPORT_SYMBOL_GPL(kick_all_cpus_sync);
> > > > > > > 
> > > > > > > +/**
> > > > > > > + * kick_active_cpus_sync - Force CPUs that are not in extended
> > > > > > > + * quiescent state (idle or nohz_full userspace) sync by sending
> > > > > > > + * IPI. Extended quiescent state CPUs will sync at the exit of
> > > > > > > + * that state.
> > > > > > > + */
> > > > > > > +void kick_active_cpus_sync(void)
> > > > > > > +{
> > > > > > > + int cpu;
> > > > > > > + struct cpumask kernel_cpus;
> > > > > > > +
> > > > > > > + smp_mb();
> > > > > > > +
> > > > > > > + cpumask_clear(&kernel_cpus);
> > > > > > > + preempt_disable();
> > > > > > > + for_each_online_cpu(cpu) {
> > > > > > > + if (!rcu_eqs_special_set(cpu))
> > > > > > 
> > > > > > If we get here, the CPU is not in a quiescent state, so we therefore
> > > > > > must IPI it, correct?
> > > > > > 
> > > > > > But don't you also need to define rcu_eqs_special_exit() so that RCU
> > > > > > can invoke it when it next leaves its quiescent state?  Or are you 
> > > > > > 

Re: [PATCH 2/2] smp: introduce kick_active_cpus_sync()

2018-03-28 Thread Yury Norov
On Wed, Mar 28, 2018 at 06:56:17AM -0700, Paul E. McKenney wrote:
> On Wed, Mar 28, 2018 at 04:36:05PM +0300, Yury Norov wrote:
> > On Mon, Mar 26, 2018 at 05:45:55AM -0700, Paul E. McKenney wrote:
> > > On Sun, Mar 25, 2018 at 11:11:54PM +0300, Yury Norov wrote:
> > > > On Sun, Mar 25, 2018 at 12:23:28PM -0700, Paul E. McKenney wrote:
> > > > > On Sun, Mar 25, 2018 at 08:50:04PM +0300, Yury Norov wrote:
> > > > > > kick_all_cpus_sync() forces all CPUs to sync caches by sending 
> > > > > > broadcast IPI.
> > > > > > If CPU is in extended quiescent state (idle task or nohz_full 
> > > > > > userspace), this
> > > > > > work may be done at the exit of this state. Delaying 
> > > > > > synchronization helps to
> > > > > > save power if CPU is in idle state and decrease latency for 
> > > > > > real-time tasks.
> > > > > > 
> > > > > > This patch introduces kick_active_cpus_sync() and uses it in 
> > > > > > mm/slab and arm64
> > > > > > code to delay synchronization.
> > > > > > 
> > > > > > For task isolation (https://lkml.org/lkml/2017/11/3/589), IPI to 
> > > > > > the CPU running
> > > > > > isolated task would be fatal, as it breaks isolation. The approach 
> > > > > > with delaying
> > > > > > of synchronization work helps to maintain isolated state.
> > > > > > 
> > > > > > I've tested it with test from task isolation series on ThunderX2 
> > > > > > for more than
> > > > > > 10 hours (10k giga-ticks) without breaking isolation.
> > > > > > 
> > > > > > Signed-off-by: Yury Norov 
> > > > > > ---
> > > > > >  arch/arm64/kernel/insn.c |  2 +-
> > > > > >  include/linux/smp.h  |  2 ++
> > > > > >  kernel/smp.c | 24 
> > > > > >  mm/slab.c|  2 +-
> > > > > >  4 files changed, 28 insertions(+), 2 deletions(-)
> > > > > > 
> > > > > > diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> > > > > > index 2718a77da165..9d7c492e920e 100644
> > > > > > --- a/arch/arm64/kernel/insn.c
> > > > > > +++ b/arch/arm64/kernel/insn.c
> > > > > > @@ -291,7 +291,7 @@ int __kprobes aarch64_insn_patch_text(void 
> > > > > > *addrs[], u32 insns[], int cnt)
> > > > > >  * synchronization.
> > > > > >  */
> > > > > > ret = aarch64_insn_patch_text_nosync(addrs[0], 
> > > > > > insns[0]);
> > > > > > -   kick_all_cpus_sync();
> > > > > > +   kick_active_cpus_sync();
> > > > > > return ret;
> > > > > > }
> > > > > > }
> > > > > > diff --git a/include/linux/smp.h b/include/linux/smp.h
> > > > > > index 9fb239e12b82..27215e22240d 100644
> > > > > > --- a/include/linux/smp.h
> > > > > > +++ b/include/linux/smp.h
> > > > > > @@ -105,6 +105,7 @@ int smp_call_function_any(const struct cpumask 
> > > > > > *mask,
> > > > > >   smp_call_func_t func, void *info, int wait);
> > > > > > 
> > > > > >  void kick_all_cpus_sync(void);
> > > > > > +void kick_active_cpus_sync(void);
> > > > > >  void wake_up_all_idle_cpus(void);
> > > > > > 
> > > > > >  /*
> > > > > > @@ -161,6 +162,7 @@ smp_call_function_any(const struct cpumask 
> > > > > > *mask, smp_call_func_t func,
> > > > > >  }
> > > > > > 
> > > > > >  static inline void kick_all_cpus_sync(void) {  }
> > > > > > +static inline void kick_active_cpus_sync(void) {  }
> > > > > >  static inline void wake_up_all_idle_cpus(void) {  }
> > > > > > 
> > > > > >  #ifdef CONFIG_UP_LATE_INIT
> > > > > > diff --git a/kernel/smp.c b/kernel/smp.c
> > > > > > index 084c8b3a2681..0358d6673850 100644
> > > > > > --- a/kernel/smp.c
> > > > > > +++ b/kernel/smp.c
> > > > > > @@ -724,6 +724,30 @@ void kick_all_cpus_sync(void)
> > > > > >  }
> > > > > >  EXPORT_SYMBOL_GPL(kick_all_cpus_sync);
> > > > > > 
> > > > > > +/**
> > > > > > + * kick_active_cpus_sync - Force CPUs that are not in extended
> > > > > > + * quiescent state (idle or nohz_full userspace) sync by sending
> > > > > > + * IPI. Extended quiescent state CPUs will sync at the exit of
> > > > > > + * that state.
> > > > > > + */
> > > > > > +void kick_active_cpus_sync(void)
> > > > > > +{
> > > > > > +   int cpu;
> > > > > > +   struct cpumask kernel_cpus;
> > > > > > +
> > > > > > +   smp_mb();
> > > > > > +
> > > > > > +   cpumask_clear(&kernel_cpus);
> > > > > > +   preempt_disable();
> > > > > > +   for_each_online_cpu(cpu) {
> > > > > > +   if (!rcu_eqs_special_set(cpu))
> > > > > 
> > > > > If we get here, the CPU is not in a quiescent state, so we therefore
> > > > > must IPI it, correct?
> > > > > 
> > > > > But don't you also need to define rcu_eqs_special_exit() so that RCU
> > > > > can invoke it when it next leaves its quiescent state?  Or are you 
> > > > > able
> > > > > to ignore the CPU in that case?  (If you are able to ignore the CPU in
> > > > > that case, I could give you a lower-cost function to get your job 
> > > > > done.)
> > > > > 
> > > > >

Re: [PATCH 2/2] smp: introduce kick_active_cpus_sync()

2018-03-28 Thread Yury Norov
On Mon, Mar 26, 2018 at 05:45:55AM -0700, Paul E. McKenney wrote:
> On Sun, Mar 25, 2018 at 11:11:54PM +0300, Yury Norov wrote:
> > On Sun, Mar 25, 2018 at 12:23:28PM -0700, Paul E. McKenney wrote:
> > > On Sun, Mar 25, 2018 at 08:50:04PM +0300, Yury Norov wrote:
> > > > kick_all_cpus_sync() forces all CPUs to sync caches by sending 
> > > > broadcast IPI.
> > > > If CPU is in extended quiescent state (idle task or nohz_full 
> > > > userspace), this
> > > > work may be done at the exit of this state. Delaying synchronization 
> > > > helps to
> > > > save power if CPU is in idle state and decrease latency for real-time 
> > > > tasks.
> > > > 
> > > > This patch introduces kick_active_cpus_sync() and uses it in mm/slab 
> > > > and arm64
> > > > code to delay synchronization.
> > > > 
> > > > For task isolation (https://lkml.org/lkml/2017/11/3/589), IPI to the 
> > > > CPU running
> > > > isolated task would be fatal, as it breaks isolation. The approach with 
> > > > delaying
> > > > of synchronization work helps to maintain isolated state.
> > > > 
> > > > I've tested it with test from task isolation series on ThunderX2 for 
> > > > more than
> > > > 10 hours (10k giga-ticks) without breaking isolation.
> > > > 
> > > > Signed-off-by: Yury Norov 
> > > > ---
> > > >  arch/arm64/kernel/insn.c |  2 +-
> > > >  include/linux/smp.h  |  2 ++
> > > >  kernel/smp.c | 24 
> > > >  mm/slab.c|  2 +-
> > > >  4 files changed, 28 insertions(+), 2 deletions(-)
> > > > 
> > > > diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> > > > index 2718a77da165..9d7c492e920e 100644
> > > > --- a/arch/arm64/kernel/insn.c
> > > > +++ b/arch/arm64/kernel/insn.c
> > > > @@ -291,7 +291,7 @@ int __kprobes aarch64_insn_patch_text(void 
> > > > *addrs[], u32 insns[], int cnt)
> > > >  * synchronization.
> > > >  */
> > > > ret = aarch64_insn_patch_text_nosync(addrs[0], 
> > > > insns[0]);
> > > > -   kick_all_cpus_sync();
> > > > +   kick_active_cpus_sync();
> > > > return ret;
> > > > }
> > > > }
> > > > diff --git a/include/linux/smp.h b/include/linux/smp.h
> > > > index 9fb239e12b82..27215e22240d 100644
> > > > --- a/include/linux/smp.h
> > > > +++ b/include/linux/smp.h
> > > > @@ -105,6 +105,7 @@ int smp_call_function_any(const struct cpumask 
> > > > *mask,
> > > >   smp_call_func_t func, void *info, int wait);
> > > > 
> > > >  void kick_all_cpus_sync(void);
> > > > +void kick_active_cpus_sync(void);
> > > >  void wake_up_all_idle_cpus(void);
> > > > 
> > > >  /*
> > > > @@ -161,6 +162,7 @@ smp_call_function_any(const struct cpumask *mask, 
> > > > smp_call_func_t func,
> > > >  }
> > > > 
> > > >  static inline void kick_all_cpus_sync(void) {  }
> > > > +static inline void kick_active_cpus_sync(void) {  }
> > > >  static inline void wake_up_all_idle_cpus(void) {  }
> > > > 
> > > >  #ifdef CONFIG_UP_LATE_INIT
> > > > diff --git a/kernel/smp.c b/kernel/smp.c
> > > > index 084c8b3a2681..0358d6673850 100644
> > > > --- a/kernel/smp.c
> > > > +++ b/kernel/smp.c
> > > > @@ -724,6 +724,30 @@ void kick_all_cpus_sync(void)
> > > >  }
> > > >  EXPORT_SYMBOL_GPL(kick_all_cpus_sync);
> > > > 
> > > > +/**
> > > > + * kick_active_cpus_sync - Force CPUs that are not in extended
> > > > + * quiescent state (idle or nohz_full userspace) sync by sending
> > > > + * IPI. Extended quiescent state CPUs will sync at the exit of
> > > > + * that state.
> > > > + */
> > > > +void kick_active_cpus_sync(void)
> > > > +{
> > > > +   int cpu;
> > > > +   struct cpumask kernel_cpus;
> > > > +
> > > > +   smp_mb();
> > > > +
> > > > +   cpumask_clear(&kernel_cpus);
> > > > +   preempt_disable();
> > > > +   for_each_online_cpu(cpu) {
> > > > +   if (!rcu_eqs_special_set(cpu))
> > > 
> > > If we get here, the CPU is not in a quiescent state, so we therefore
> > > must IPI it, correct?
> > > 
> > > But don't you also need to define rcu_eqs_special_exit() so that RCU
> > > can invoke it when it next leaves its quiescent state?  Or are you able
> > > to ignore the CPU in that case?  (If you are able to ignore the CPU in
> > > that case, I could give you a lower-cost function to get your job done.)
> > > 
> > >   Thanx, Paul
> > 
> > What's actually needed for synchronization is issuing memory barrier on 
> > target
> > CPUs before we start executing kernel code.
> > 
> > smp_mb() is implicitly called in smp_call_function*() path for it. In
> > rcu_eqs_special_set() -> rcu_dynticks_eqs_exit() path, 
> > smp_mb__after_atomic()
> > is called just before rcu_eqs_special_exit().
> > 
> > So I think, rcu_eqs_special_exit() may be left untouched. Empty
> > rcu_eqs_special_exit() in new RCU path corresponds to empty do_nothing()
> > in the old IPI path.

Re: [PATCH 2/2] smp: introduce kick_active_cpus_sync()

2018-03-28 Thread Yury Norov
On Mon, Mar 26, 2018 at 02:57:35PM -0400, Steven Rostedt wrote:
> On Mon, 26 Mar 2018 10:53:13 +0200
> Andrea Parri  wrote:
> 
> > > --- a/kernel/smp.c
> > > +++ b/kernel/smp.c
> > > @@ -724,6 +724,30 @@ void kick_all_cpus_sync(void)
> > >  }
> > >  EXPORT_SYMBOL_GPL(kick_all_cpus_sync);
> > >  
> > > +/**
> > > + * kick_active_cpus_sync - Force CPUs that are not in extended
> > > + * quiescent state (idle or nohz_full userspace) sync by sending
> > > + * IPI. Extended quiescent state CPUs will sync at the exit of
> > > + * that state.
> > > + */
> > > +void kick_active_cpus_sync(void)
> > > +{
> > > + int cpu;
> > > + struct cpumask kernel_cpus;
> > > +
> > > + smp_mb();  
> > 
> > (A general remark only:)
> > 
> > checkpatch.pl should have warned about the fact that this barrier is
> > missing an accompanying comment (which accesses are being "ordered",
> > what is the pairing barrier, etc.).
> 
> He could have simply copied the comment above the smp_mb() for
> kick_all_cpus_sync():
> 
>   /* Make sure the change is visible before we kick the cpus */
> 
> The kick itself is pretty much a synchronization primitive.
> 
> That is, you make some changes and then you need all CPUs to see it,
> and you call: kick_active_cpus_sync(), which is the barrier to make
> sure your previous changes are seen on all CPUs before you proceed
> further. Note, the matching barrier is implicit in the IPI itself.
>
>  -- Steve

I could have copied the comment from kick_all_cpus_sync(), but I
don't like copy-pasting in general, and as Steven said, this smp_mb() is
already inside a synchronization routine, so we may hope that users of
kick_*_cpus_sync() will explain better why they need it...
 
> 
> > 
> > Moreover if, as your reply above suggested, your patch is relying on
> > "implicit barriers" (something I would not recommend) then even more
> > so you should comment on these requirements.
> > 
> > This could: (a) force you to reason about the memory ordering stuff,
> > (b) ease the task of reviewing and adopting your patch, (c) ease the
> > task of preserving those requirements (as implementations change).
> > 
> >   Andrea

I need v2 anyway, and I will add comments to address all questions in this
thread.

I also hope that we'll agree that for powerpc it's also safe to delay
synchronization, and if so, we will have no users of kick_all_cpus_sync(),
and can drop it.

(It looks that way because a nohz_full userspace CPU cannot have pending
IPIs, but I'd like to get confirmation from powerpc people.)

Would it make sense to rename kick_all_cpus_sync() to smp_mb_sync(), which
would stand for 'synchronous memory barrier on all online CPUs'?

Yury
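The ordering Steve describes above — a full barrier after the change, with
the matching barrier implicit in the IPI — can be modelled in plain
userspace C11. This is a hedged analogy only (threads instead of CPUs, a
flag instead of an IPI; the names `run_demo`, `patched_text`, and `kicked`
are mine, not from the kernel):

```c
/*
 * Userspace analogy of kick_*_cpus_sync() ordering: publish a change,
 * issue a full barrier, then "kick" the other side; the matching
 * barrier is issued on the receiving end, as the IPI provides in the
 * kernel. Not kernel code.
 */
#include <pthread.h>
#include <stdatomic.h>

static int patched_text;        /* plain data standing in for patched text */
static atomic_int kicked;       /* stands in for the IPI */
static int observed;

static void *reader(void *arg)
{
    (void)arg;
    while (!atomic_load_explicit(&kicked, memory_order_relaxed))
        ;                       /* spin until kicked */
    /* barrier on the receiving side, as the IPI entry path gives us */
    atomic_thread_fence(memory_order_acquire);
    observed = patched_text;
    return NULL;
}

/* returns the value the reader observed; the fences guarantee 42 */
int run_demo(void)
{
    pthread_t t;

    pthread_create(&t, NULL, reader, NULL);
    patched_text = 42;          /* the modification */
    /* "Make sure the change is visible before we kick the cpus" */
    atomic_thread_fence(memory_order_release);
    atomic_store_explicit(&kicked, 1, memory_order_relaxed);
    pthread_join(t, NULL);
    return observed;
}
```

The release fence before the relaxed store pairs with the acquire fence
after the relaxed load, so the reader cannot observe a stale
`patched_text` once it sees the kick.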


Re: [PATCH 2/2] smp: introduce kick_active_cpus_sync()

2018-03-28 Thread Yury Norov
On Tue, Mar 27, 2018 at 11:21:17AM +0100, Will Deacon wrote:
> On Sun, Mar 25, 2018 at 08:50:04PM +0300, Yury Norov wrote:
> > kick_all_cpus_sync() forces all CPUs to sync caches by sending broadcast 
> > IPI.
> > If CPU is in extended quiescent state (idle task or nohz_full userspace), 
> > this
> > work may be done at the exit of this state. Delaying synchronization helps 
> > to
> > save power if CPU is in idle state and decrease latency for real-time tasks.
> > 
> > This patch introduces kick_active_cpus_sync() and uses it in mm/slab and 
> > arm64
> > code to delay synchronization.
> > 
> > For task isolation (https://lkml.org/lkml/2017/11/3/589), IPI to the CPU 
> > running
> > isolated task would be fatal, as it breaks isolation. The approach with 
> > delaying
> > of synchronization work helps to maintain isolated state.
> > 
> > I've tested it with test from task isolation series on ThunderX2 for more 
> > than
> > 10 hours (10k giga-ticks) without breaking isolation.
> > 
> > Signed-off-by: Yury Norov 
> > ---
> >  arch/arm64/kernel/insn.c |  2 +-
> >  include/linux/smp.h  |  2 ++
> >  kernel/smp.c | 24 
> >  mm/slab.c|  2 +-
> >  4 files changed, 28 insertions(+), 2 deletions(-)
> > 
> > diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> > index 2718a77da165..9d7c492e920e 100644
> > --- a/arch/arm64/kernel/insn.c
> > +++ b/arch/arm64/kernel/insn.c
> > @@ -291,7 +291,7 @@ int __kprobes aarch64_insn_patch_text(void *addrs[], 
> > u32 insns[], int cnt)
> >  * synchronization.
> >  */
> > ret = aarch64_insn_patch_text_nosync(addrs[0], 
> > insns[0]);
> > -   kick_all_cpus_sync();
> > +   kick_active_cpus_sync();
> > return ret;
> > }
> > }
> 
> I think this means that runtime modifications to the kernel text might not
> be picked up by CPUs coming out of idle. Shouldn't we add an ISB on that
> path to avoid executing stale instructions?

Thanks, Will, for the hint. I'll do that.

Yury
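Will's point above is that a CPU leaving idle may fetch stale instructions
unless it executes a context synchronization event (ISB on arm64) before
running possibly patched text. A minimal sketch of that idea, under stated
assumptions: the hook name `eqs_exit_resync` is hypothetical, and on
non-arm64 hosts the macro degrades to a pure compiler barrier just so the
sketch stays compilable:

```c
/*
 * Hedged sketch: force an instruction-stream resynchronization on the
 * extended-quiescent-state exit path, before any possibly patched text
 * is executed. The hook name is hypothetical, not the kernel's.
 */
#ifdef __aarch64__
#define context_synchronize()   __asm__ volatile("isb" ::: "memory")
#else
/* illustration only: a compiler barrier, so this compiles everywhere */
#define context_synchronize()   __asm__ volatile("" ::: "memory")
#endif

static int resynced;

/* would run on eqs exit, before returning to normal kernel execution */
static void eqs_exit_resync(void)
{
    context_synchronize();
    resynced = 1;
}

int run_resync_demo(void)
{
    eqs_exit_resync();
    return resynced;
}
```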


Re: [PATCH 2/2] smp: introduce kick_active_cpus_sync()

2018-03-27 Thread Will Deacon
On Sun, Mar 25, 2018 at 08:50:04PM +0300, Yury Norov wrote:
> kick_all_cpus_sync() forces all CPUs to sync caches by sending broadcast IPI.
> If CPU is in extended quiescent state (idle task or nohz_full userspace), this
> work may be done at the exit of this state. Delaying synchronization helps to
> save power if CPU is in idle state and decrease latency for real-time tasks.
> 
> This patch introduces kick_active_cpus_sync() and uses it in mm/slab and arm64
> code to delay synchronization.
> 
> For task isolation (https://lkml.org/lkml/2017/11/3/589), IPI to the CPU 
> running
> isolated task would be fatal, as it breaks isolation. The approach with 
> delaying
> of synchronization work helps to maintain isolated state.
> 
> I've tested it with test from task isolation series on ThunderX2 for more than
> 10 hours (10k giga-ticks) without breaking isolation.
> 
> Signed-off-by: Yury Norov 
> ---
>  arch/arm64/kernel/insn.c |  2 +-
>  include/linux/smp.h  |  2 ++
>  kernel/smp.c | 24 
>  mm/slab.c|  2 +-
>  4 files changed, 28 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> index 2718a77da165..9d7c492e920e 100644
> --- a/arch/arm64/kernel/insn.c
> +++ b/arch/arm64/kernel/insn.c
> @@ -291,7 +291,7 @@ int __kprobes aarch64_insn_patch_text(void *addrs[], u32 
> insns[], int cnt)
>* synchronization.
>*/
>   ret = aarch64_insn_patch_text_nosync(addrs[0], 
> insns[0]);
> - kick_all_cpus_sync();
> + kick_active_cpus_sync();
>   return ret;
>   }
>   }

I think this means that runtime modifications to the kernel text might not
be picked up by CPUs coming out of idle. Shouldn't we add an ISB on that
path to avoid executing stale instructions?

Will


Re: [PATCH 2/2] smp: introduce kick_active_cpus_sync()

2018-03-26 Thread Steven Rostedt
On Mon, 26 Mar 2018 10:53:13 +0200
Andrea Parri  wrote:

> > --- a/kernel/smp.c
> > +++ b/kernel/smp.c
> > @@ -724,6 +724,30 @@ void kick_all_cpus_sync(void)
> >  }
> >  EXPORT_SYMBOL_GPL(kick_all_cpus_sync);
> >  
> > +/**
> > + * kick_active_cpus_sync - Force CPUs that are not in extended
> > + * quiescent state (idle or nohz_full userspace) sync by sending
> > + * IPI. Extended quiescent state CPUs will sync at the exit of
> > + * that state.
> > + */
> > +void kick_active_cpus_sync(void)
> > +{
> > +   int cpu;
> > +   struct cpumask kernel_cpus;
> > +
> > +   smp_mb();  
> 
> (A general remark only:)
> 
> checkpatch.pl should have warned about the fact that this barrier is
> missing an accompanying comment (which accesses are being "ordered",
> what is the pairing barrier, etc.).

He could have simply copied the comment above the smp_mb() for
kick_all_cpus_sync():

/* Make sure the change is visible before we kick the cpus */

The kick itself is pretty much a synchronization primitive.

That is, you make some changes and then you need all CPUs to see it,
and you call: kick_active_cpus_sync(), which is the barrier to make
sure your previous changes are seen on all CPUs before you proceed
further. Note, the matching barrier is implicit in the IPI itself.

-- Steve


> 
> Moreover if, as your reply above suggested, your patch is relying on
> "implicit barriers" (something I would not recommend) then even more
> so you should comment on these requirements.
> 
> This could: (a) force you to reason about the memory ordering stuff,
> (b) ease the task of reviewing and adopting your patch, (c) ease the
> task of preserving those requirements (as implementations change).
> 
>   Andrea
> 
> 
> > +
> > +   cpumask_clear(&kernel_cpus);
> > +   preempt_disable();
> > +   for_each_online_cpu(cpu) {
> > +   if (!rcu_eqs_special_set(cpu))
> > +   cpumask_set_cpu(cpu, &kernel_cpus);
> > +   }
> > +   smp_call_function_many(&kernel_cpus, do_nothing, NULL, 1);
> > +   preempt_enable();
> > +}
> > +EXPORT_SYMBOL_GPL(kick_active_cpus_sync);
> > +
> >  /**
> >   * wake_up_all_idle_cpus - break all cpus out of idle
> >   * wake_up_all_idle_cpus try to break all cpus which is in idle state even
> > diff --git a/mm/slab.c b/mm/slab.c
> > index 324446621b3e..678d5dbd6f46 100644
> > --- a/mm/slab.c
> > +++ b/mm/slab.c
> > @@ -3856,7 +3856,7 @@ static int __do_tune_cpucache(struct kmem_cache 
> > *cachep, int limit,
> >  * cpus, so skip the IPIs.
> >  */
> > if (prev)
> > -   kick_all_cpus_sync();
> > +   kick_active_cpus_sync();
> >  
> > check_irq_on();
> > cachep->batchcount = batchcount;
> > -- 
> > 2.14.1
> >   



Re: [PATCH 2/2] smp: introduce kick_active_cpus_sync()

2018-03-26 Thread Paul E. McKenney
On Sun, Mar 25, 2018 at 11:11:54PM +0300, Yury Norov wrote:
> On Sun, Mar 25, 2018 at 12:23:28PM -0700, Paul E. McKenney wrote:
> > On Sun, Mar 25, 2018 at 08:50:04PM +0300, Yury Norov wrote:
> > > kick_all_cpus_sync() forces all CPUs to sync caches by sending broadcast 
> > > IPI.
> > > If CPU is in extended quiescent state (idle task or nohz_full userspace), 
> > > this
> > > work may be done at the exit of this state. Delaying synchronization 
> > > helps to
> > > save power if CPU is in idle state and decrease latency for real-time 
> > > tasks.
> > > 
> > > This patch introduces kick_active_cpus_sync() and uses it in mm/slab and 
> > > arm64
> > > code to delay synchronization.
> > > 
> > > For task isolation (https://lkml.org/lkml/2017/11/3/589), IPI to the CPU 
> > > running
> > > isolated task would be fatal, as it breaks isolation. The approach with 
> > > delaying
> > > of synchronization work helps to maintain isolated state.
> > > 
> > > I've tested it with test from task isolation series on ThunderX2 for more 
> > > than
> > > 10 hours (10k giga-ticks) without breaking isolation.
> > > 
> > > Signed-off-by: Yury Norov 
> > > ---
> > >  arch/arm64/kernel/insn.c |  2 +-
> > >  include/linux/smp.h  |  2 ++
> > >  kernel/smp.c | 24 
> > >  mm/slab.c|  2 +-
> > >  4 files changed, 28 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> > > index 2718a77da165..9d7c492e920e 100644
> > > --- a/arch/arm64/kernel/insn.c
> > > +++ b/arch/arm64/kernel/insn.c
> > > @@ -291,7 +291,7 @@ int __kprobes aarch64_insn_patch_text(void *addrs[], 
> > > u32 insns[], int cnt)
> > >* synchronization.
> > >*/
> > >   ret = aarch64_insn_patch_text_nosync(addrs[0], 
> > > insns[0]);
> > > - kick_all_cpus_sync();
> > > + kick_active_cpus_sync();
> > >   return ret;
> > >   }
> > >   }
> > > diff --git a/include/linux/smp.h b/include/linux/smp.h
> > > index 9fb239e12b82..27215e22240d 100644
> > > --- a/include/linux/smp.h
> > > +++ b/include/linux/smp.h
> > > @@ -105,6 +105,7 @@ int smp_call_function_any(const struct cpumask *mask,
> > > smp_call_func_t func, void *info, int wait);
> > > 
> > >  void kick_all_cpus_sync(void);
> > > +void kick_active_cpus_sync(void);
> > >  void wake_up_all_idle_cpus(void);
> > > 
> > >  /*
> > > @@ -161,6 +162,7 @@ smp_call_function_any(const struct cpumask *mask, 
> > > smp_call_func_t func,
> > >  }
> > > 
> > >  static inline void kick_all_cpus_sync(void) {  }
> > > +static inline void kick_active_cpus_sync(void) {  }
> > >  static inline void wake_up_all_idle_cpus(void) {  }
> > > 
> > >  #ifdef CONFIG_UP_LATE_INIT
> > > diff --git a/kernel/smp.c b/kernel/smp.c
> > > index 084c8b3a2681..0358d6673850 100644
> > > --- a/kernel/smp.c
> > > +++ b/kernel/smp.c
> > > @@ -724,6 +724,30 @@ void kick_all_cpus_sync(void)
> > >  }
> > >  EXPORT_SYMBOL_GPL(kick_all_cpus_sync);
> > > 
> > > +/**
> > > + * kick_active_cpus_sync - Force CPUs that are not in extended
> > > + * quiescent state (idle or nohz_full userspace) sync by sending
> > > + * IPI. Extended quiescent state CPUs will sync at the exit of
> > > + * that state.
> > > + */
> > > +void kick_active_cpus_sync(void)
> > > +{
> > > + int cpu;
> > > + struct cpumask kernel_cpus;
> > > +
> > > + smp_mb();
> > > +
> > > + cpumask_clear(&kernel_cpus);
> > > + preempt_disable();
> > > + for_each_online_cpu(cpu) {
> > > + if (!rcu_eqs_special_set(cpu))
> > 
> > If we get here, the CPU is not in a quiescent state, so we therefore
> > must IPI it, correct?
> > 
> > But don't you also need to define rcu_eqs_special_exit() so that RCU
> > can invoke it when it next leaves its quiescent state?  Or are you able
> > to ignore the CPU in that case?  (If you are able to ignore the CPU in
> > that case, I could give you a lower-cost function to get your job done.)
> > 
> > Thanx, Paul
> 
> What's actually needed for synchronization is issuing memory barrier on target
> CPUs before we start executing kernel code.
> 
> smp_mb() is implicitly called in smp_call_function*() path for it. In
> rcu_eqs_special_set() -> rcu_dynticks_eqs_exit() path, smp_mb__after_atomic()
> is called just before rcu_eqs_special_exit().
> 
> So I think, rcu_eqs_special_exit() may be left untouched. Empty
> rcu_eqs_special_exit() in new RCU path corresponds empty do_nothing() in old
> IPI path.
> 
> Or my understanding of smp_mb__after_atomic() is wrong? By default, 
> smp_mb__after_atomic() is just alias to smp_mb(). But some
> architectures define it differently. x86, for example, aliases it to
> just barrier() with a comment: "Atomic operations are already
> serializing on x86".
> 
> I was initially thinking that it's also fine to leave
> rcu_eqs_special_exit() empty in this case, but now I'm not sure...

Re: [PATCH 2/2] smp: introduce kick_active_cpus_sync()

2018-03-26 Thread Andrea Parri
Hi Yury,

On Sun, Mar 25, 2018 at 08:50:04PM +0300, Yury Norov wrote:
> kick_all_cpus_sync() forces all CPUs to sync caches by sending broadcast IPI.
> If CPU is in extended quiescent state (idle task or nohz_full userspace), this
> work may be done at the exit of this state. Delaying synchronization helps to
> save power if CPU is in idle state and decrease latency for real-time tasks.
> 
> This patch introduces kick_active_cpus_sync() and uses it in mm/slab and arm64
> code to delay synchronization.
> 
> For task isolation (https://lkml.org/lkml/2017/11/3/589), IPI to the CPU 
> running
> isolated task would be fatal, as it breaks isolation. The approach with 
> delaying
> of synchronization work helps to maintain isolated state.
> 
> I've tested it with test from task isolation series on ThunderX2 for more than
> 10 hours (10k giga-ticks) without breaking isolation.
> 
> Signed-off-by: Yury Norov 
> ---
>  arch/arm64/kernel/insn.c |  2 +-
>  include/linux/smp.h  |  2 ++
>  kernel/smp.c | 24 
>  mm/slab.c|  2 +-
>  4 files changed, 28 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> index 2718a77da165..9d7c492e920e 100644
> --- a/arch/arm64/kernel/insn.c
> +++ b/arch/arm64/kernel/insn.c
> @@ -291,7 +291,7 @@ int __kprobes aarch64_insn_patch_text(void *addrs[], u32 
> insns[], int cnt)
>* synchronization.
>*/
>   ret = aarch64_insn_patch_text_nosync(addrs[0], 
> insns[0]);
> - kick_all_cpus_sync();
> + kick_active_cpus_sync();
>   return ret;
>   }
>   }
> diff --git a/include/linux/smp.h b/include/linux/smp.h
> index 9fb239e12b82..27215e22240d 100644
> --- a/include/linux/smp.h
> +++ b/include/linux/smp.h
> @@ -105,6 +105,7 @@ int smp_call_function_any(const struct cpumask *mask,
> smp_call_func_t func, void *info, int wait);
>  
>  void kick_all_cpus_sync(void);
> +void kick_active_cpus_sync(void);
>  void wake_up_all_idle_cpus(void);
>  
>  /*
> @@ -161,6 +162,7 @@ smp_call_function_any(const struct cpumask *mask, 
> smp_call_func_t func,
>  }
>  
>  static inline void kick_all_cpus_sync(void) {  }
> +static inline void kick_active_cpus_sync(void) {  }
>  static inline void wake_up_all_idle_cpus(void) {  }
>  
>  #ifdef CONFIG_UP_LATE_INIT
> diff --git a/kernel/smp.c b/kernel/smp.c
> index 084c8b3a2681..0358d6673850 100644
> --- a/kernel/smp.c
> +++ b/kernel/smp.c
> @@ -724,6 +724,30 @@ void kick_all_cpus_sync(void)
>  }
>  EXPORT_SYMBOL_GPL(kick_all_cpus_sync);
>  
> +/**
> + * kick_active_cpus_sync - Force CPUs that are not in extended
> + * quiescent state (idle or nohz_full userspace) sync by sending
> + * IPI. Extended quiescent state CPUs will sync at the exit of
> + * that state.
> + */
> +void kick_active_cpus_sync(void)
> +{
> + int cpu;
> + struct cpumask kernel_cpus;
> +
> + smp_mb();

(A general remark only:)

checkpatch.pl should have warned about the fact that this barrier is
missing an accompanying comment (which accesses are being "ordered",
what is the pairing barrier, etc.).

Moreover if, as your reply above suggested, your patch is relying on
"implicit barriers" (something I would not recommend) then even more
so you should comment on these requirements.

This could: (a) force you to reason about the memory ordering stuff,
(b) ease the task of reviewing and adopting your patch, (c) ease the
task of preserving those requirements (as implementations change).

  Andrea


> +
> + cpumask_clear(&kernel_cpus);
> + preempt_disable();
> + for_each_online_cpu(cpu) {
> + if (!rcu_eqs_special_set(cpu))
> + cpumask_set_cpu(cpu, &kernel_cpus);
> + }
> + smp_call_function_many(&kernel_cpus, do_nothing, NULL, 1);
> + preempt_enable();
> +}
> +EXPORT_SYMBOL_GPL(kick_active_cpus_sync);
> +
>  /**
>   * wake_up_all_idle_cpus - break all cpus out of idle
>   * wake_up_all_idle_cpus try to break all cpus which is in idle state even
> diff --git a/mm/slab.c b/mm/slab.c
> index 324446621b3e..678d5dbd6f46 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -3856,7 +3856,7 @@ static int __do_tune_cpucache(struct kmem_cache 
> *cachep, int limit,
>* cpus, so skip the IPIs.
>*/
>   if (prev)
> - kick_all_cpus_sync();
> + kick_active_cpus_sync();
>  
>   check_irq_on();
>   cachep->batchcount = batchcount;
> -- 
> 2.14.1
> 


Re: [PATCH 2/2] smp: introduce kick_active_cpus_sync()

2018-03-25 Thread Yury Norov
On Sun, Mar 25, 2018 at 12:23:28PM -0700, Paul E. McKenney wrote:
> On Sun, Mar 25, 2018 at 08:50:04PM +0300, Yury Norov wrote:
> > kick_all_cpus_sync() forces all CPUs to sync caches by sending broadcast 
> > IPI.
> > If CPU is in extended quiescent state (idle task or nohz_full userspace), 
> > this
> > work may be done at the exit of this state. Delaying synchronization helps 
> > to
> > save power if CPU is in idle state and decrease latency for real-time tasks.
> > 
> > This patch introduces kick_active_cpus_sync() and uses it in mm/slab and 
> > arm64
> > code to delay synchronization.
> > 
> > For task isolation (https://lkml.org/lkml/2017/11/3/589), IPI to the CPU 
> > running
> > isolated task would be fatal, as it breaks isolation. The approach with 
> > delaying
> > of synchronization work helps to maintain isolated state.
> > 
> > I've tested it with test from task isolation series on ThunderX2 for more 
> > than
> > 10 hours (10k giga-ticks) without breaking isolation.
> > 
> > Signed-off-by: Yury Norov 
> > ---
> >  arch/arm64/kernel/insn.c |  2 +-
> >  include/linux/smp.h  |  2 ++
> >  kernel/smp.c | 24 
> >  mm/slab.c|  2 +-
> >  4 files changed, 28 insertions(+), 2 deletions(-)
> > 
> > diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> > index 2718a77da165..9d7c492e920e 100644
> > --- a/arch/arm64/kernel/insn.c
> > +++ b/arch/arm64/kernel/insn.c
> > @@ -291,7 +291,7 @@ int __kprobes aarch64_insn_patch_text(void *addrs[], 
> > u32 insns[], int cnt)
> >  * synchronization.
> >  */
> > ret = aarch64_insn_patch_text_nosync(addrs[0], 
> > insns[0]);
> > -   kick_all_cpus_sync();
> > +   kick_active_cpus_sync();
> > return ret;
> > }
> > }
> > diff --git a/include/linux/smp.h b/include/linux/smp.h
> > index 9fb239e12b82..27215e22240d 100644
> > --- a/include/linux/smp.h
> > +++ b/include/linux/smp.h
> > @@ -105,6 +105,7 @@ int smp_call_function_any(const struct cpumask *mask,
> >   smp_call_func_t func, void *info, int wait);
> > 
> >  void kick_all_cpus_sync(void);
> > +void kick_active_cpus_sync(void);
> >  void wake_up_all_idle_cpus(void);
> > 
> >  /*
> > @@ -161,6 +162,7 @@ smp_call_function_any(const struct cpumask *mask, 
> > smp_call_func_t func,
> >  }
> > 
> >  static inline void kick_all_cpus_sync(void) {  }
> > +static inline void kick_active_cpus_sync(void) {  }
> >  static inline void wake_up_all_idle_cpus(void) {  }
> > 
> >  #ifdef CONFIG_UP_LATE_INIT
> > diff --git a/kernel/smp.c b/kernel/smp.c
> > index 084c8b3a2681..0358d6673850 100644
> > --- a/kernel/smp.c
> > +++ b/kernel/smp.c
> > @@ -724,6 +724,30 @@ void kick_all_cpus_sync(void)
> >  }
> >  EXPORT_SYMBOL_GPL(kick_all_cpus_sync);
> > 
> > +/**
> > + * kick_active_cpus_sync - Force CPUs that are not in extended
> > + * quiescent state (idle or nohz_full userspace) sync by sending
> > + * IPI. Extended quiescent state CPUs will sync at the exit of
> > + * that state.
> > + */
> > +void kick_active_cpus_sync(void)
> > +{
> > +   int cpu;
> > +   struct cpumask kernel_cpus;
> > +
> > +   smp_mb();
> > +
> > +   cpumask_clear(&kernel_cpus);
> > +   preempt_disable();
> > +   for_each_online_cpu(cpu) {
> > +   if (!rcu_eqs_special_set(cpu))
> 
> If we get here, the CPU is not in a quiescent state, so we therefore
> must IPI it, correct?
> 
> But don't you also need to define rcu_eqs_special_exit() so that RCU
> can invoke it when it next leaves its quiescent state?  Or are you able
> to ignore the CPU in that case?  (If you are able to ignore the CPU in
> that case, I could give you a lower-cost function to get your job done.)
> 
>   Thanx, Paul

What's actually needed for synchronization is issuing memory barrier on target
CPUs before we start executing kernel code.

smp_mb() is implicitly called in smp_call_function*() path for it. In
rcu_eqs_special_set() -> rcu_dynticks_eqs_exit() path, smp_mb__after_atomic()
is called just before rcu_eqs_special_exit().

So I think, rcu_eqs_special_exit() may be left untouched. Empty
rcu_eqs_special_exit() in new RCU path corresponds empty do_nothing() in old
IPI path.

Or my understanding of smp_mb__after_atomic() is wrong? By default, 
smp_mb__after_atomic() is just alias to smp_mb(). But some
architectures define it differently. x86, for example, aliases it to
just barrier() with a comment: "Atomic operations are already
serializing on x86".

I was initially thinking that it's also fine to leave
rcu_eqs_special_exit() empty in this case, but now I'm not sure...

Anyway, answering to your question, we shouldn't ignore quiescent
CPUs, and rcu_eqs_special_set() path is really needed as it issues
memory barrier on them.

Yury

> > +   cpumask_set_cpu(cpu, &kernel_cpus);
> > +   }
> > +   smp_call_function_many(&kernel_cpus, do_nothing, NULL, 1);
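The per-CPU decision discussed in this message — defer the barrier for CPUs
in an extended quiescent state, IPI everyone else — can be modelled in
userspace. This is a simplified sketch under stated assumptions: a bare
flag stands in for the kernel's dynticks counter, and the function names
(`eqs_special_set`, `kick_active_cpus`, `set_cpu_eqs`) mirror but are not
the kernel's:

```c
/*
 * Simplified model of the decision kick_active_cpus_sync() makes per
 * CPU: a CPU in an extended quiescent state gets a deferred "special"
 * bit and will order itself on eqs exit; every other CPU is collected
 * as an IPI target. Not kernel code.
 */
#include <stdatomic.h>
#include <stdbool.h>

#define NR_CPUS 4

static struct {
    atomic_bool in_eqs;        /* idle or nohz_full userspace */
    atomic_bool special_set;   /* barrier deferred to eqs exit */
} cpus[NR_CPUS];

/* analogue of rcu_eqs_special_set(): fails for an active CPU */
static bool eqs_special_set(int cpu)
{
    if (!atomic_load(&cpus[cpu].in_eqs))
        return false;
    atomic_store(&cpus[cpu].special_set, true);
    return true;
}

/* returns how many CPUs would receive the IPI */
int kick_active_cpus(void)
{
    int cpu, ipi_targets = 0;

    for (cpu = 0; cpu < NR_CPUS; cpu++)
        if (!eqs_special_set(cpu))
            ipi_targets++;     /* analogue of cpumask_set_cpu() */
    return ipi_targets;
}

/* test helper: mark a CPU as being in (or out of) an eqs */
void set_cpu_eqs(int cpu, bool idle)
{
    atomic_store(&cpus[cpu].in_eqs, idle);
}
```

With two of four CPUs idle, only the two active ones end up as IPI
targets; the idle ones carry the deferred special bit instead.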

Re: [PATCH 2/2] smp: introduce kick_active_cpus_sync()

2018-03-25 Thread Paul E. McKenney
On Sun, Mar 25, 2018 at 08:50:04PM +0300, Yury Norov wrote:
> kick_all_cpus_sync() forces all CPUs to sync caches by sending broadcast IPI.
> If CPU is in extended quiescent state (idle task or nohz_full userspace), this
> work may be done at the exit of this state. Delaying synchronization helps to
> save power if CPU is in idle state and decrease latency for real-time tasks.
> 
> This patch introduces kick_active_cpus_sync() and uses it in mm/slab and arm64
> code to delay synchronization.
> 
> For task isolation (https://lkml.org/lkml/2017/11/3/589), IPI to the CPU 
> running
> isolated task would be fatal, as it breaks isolation. The approach with 
> delaying
> of synchronization work helps to maintain isolated state.
> 
> I've tested it with test from task isolation series on ThunderX2 for more than
> 10 hours (10k giga-ticks) without breaking isolation.
> 
> Signed-off-by: Yury Norov 
> ---
>  arch/arm64/kernel/insn.c |  2 +-
>  include/linux/smp.h  |  2 ++
>  kernel/smp.c | 24 
>  mm/slab.c|  2 +-
>  4 files changed, 28 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> index 2718a77da165..9d7c492e920e 100644
> --- a/arch/arm64/kernel/insn.c
> +++ b/arch/arm64/kernel/insn.c
> @@ -291,7 +291,7 @@ int __kprobes aarch64_insn_patch_text(void *addrs[], u32 
> insns[], int cnt)
>* synchronization.
>*/
>   ret = aarch64_insn_patch_text_nosync(addrs[0], 
> insns[0]);
> - kick_all_cpus_sync();
> + kick_active_cpus_sync();
>   return ret;
>   }
>   }
> diff --git a/include/linux/smp.h b/include/linux/smp.h
> index 9fb239e12b82..27215e22240d 100644
> --- a/include/linux/smp.h
> +++ b/include/linux/smp.h
> @@ -105,6 +105,7 @@ int smp_call_function_any(const struct cpumask *mask,
> smp_call_func_t func, void *info, int wait);
> 
>  void kick_all_cpus_sync(void);
> +void kick_active_cpus_sync(void);
>  void wake_up_all_idle_cpus(void);
> 
>  /*
> @@ -161,6 +162,7 @@ smp_call_function_any(const struct cpumask *mask, 
> smp_call_func_t func,
>  }
> 
>  static inline void kick_all_cpus_sync(void) {  }
> +static inline void kick_active_cpus_sync(void) {  }
>  static inline void wake_up_all_idle_cpus(void) {  }
> 
>  #ifdef CONFIG_UP_LATE_INIT
> diff --git a/kernel/smp.c b/kernel/smp.c
> index 084c8b3a2681..0358d6673850 100644
> --- a/kernel/smp.c
> +++ b/kernel/smp.c
> @@ -724,6 +724,30 @@ void kick_all_cpus_sync(void)
>  }
>  EXPORT_SYMBOL_GPL(kick_all_cpus_sync);
> 
> +/**
> + * kick_active_cpus_sync - Force CPUs that are not in extended
> + * quiescent state (idle or nohz_full userspace) sync by sending
> + * IPI. Extended quiescent state CPUs will sync at the exit of
> + * that state.
> + */
> +void kick_active_cpus_sync(void)
> +{
> + int cpu;
> + struct cpumask kernel_cpus;
> +
> + smp_mb();
> +
> > + cpumask_clear(&kernel_cpus);
> + preempt_disable();
> + for_each_online_cpu(cpu) {
> + if (!rcu_eqs_special_set(cpu))

If we get here, the CPU is not in a quiescent state, so we therefore
must IPI it, correct?

But don't you also need to define rcu_eqs_special_exit() so that RCU
can invoke it when it next leaves its quiescent state?  Or are you able
to ignore the CPU in that case?  (If you are able to ignore the CPU in
that case, I could give you a lower-cost function to get your job done.)

Thanx, Paul

> > + cpumask_set_cpu(cpu, &kernel_cpus);
> > + }
> > + smp_call_function_many(&kernel_cpus, do_nothing, NULL, 1);
> + preempt_enable();
> +}
> +EXPORT_SYMBOL_GPL(kick_active_cpus_sync);
> +
>  /**
>   * wake_up_all_idle_cpus - break all cpus out of idle
>   * wake_up_all_idle_cpus try to break all cpus which is in idle state even
> diff --git a/mm/slab.c b/mm/slab.c
> index 324446621b3e..678d5dbd6f46 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -3856,7 +3856,7 @@ static int __do_tune_cpucache(struct kmem_cache 
> *cachep, int limit,
>* cpus, so skip the IPIs.
>*/
>   if (prev)
> - kick_all_cpus_sync();
> + kick_active_cpus_sync();
> 
>   check_irq_on();
>   cachep->batchcount = batchcount;
> -- 
> 2.14.1
> 



[PATCH 2/2] smp: introduce kick_active_cpus_sync()

2018-03-25 Thread Yury Norov
kick_all_cpus_sync() forces all CPUs to sync caches by sending broadcast IPI.
If CPU is in extended quiescent state (idle task or nohz_full userspace), this
work may be done at the exit of this state. Delaying synchronization helps to
save power if CPU is in idle state and decrease latency for real-time tasks.

This patch introduces kick_active_cpus_sync() and uses it in mm/slab and arm64
code to delay synchronization.

For task isolation (https://lkml.org/lkml/2017/11/3/589), IPI to the CPU running
isolated task would be fatal, as it breaks isolation. The approach with delaying
of synchronization work helps to maintain isolated state.

I've tested it with test from task isolation series on ThunderX2 for more than
10 hours (10k giga-ticks) without breaking isolation.

Signed-off-by: Yury Norov 
---
 arch/arm64/kernel/insn.c |  2 +-
 include/linux/smp.h  |  2 ++
 kernel/smp.c | 24 
 mm/slab.c|  2 +-
 4 files changed, 28 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 2718a77da165..9d7c492e920e 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -291,7 +291,7 @@ int __kprobes aarch64_insn_patch_text(void *addrs[], u32 
insns[], int cnt)
 * synchronization.
 */
ret = aarch64_insn_patch_text_nosync(addrs[0], 
insns[0]);
-   kick_all_cpus_sync();
+   kick_active_cpus_sync();
return ret;
}
}
diff --git a/include/linux/smp.h b/include/linux/smp.h
index 9fb239e12b82..27215e22240d 100644
--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -105,6 +105,7 @@ int smp_call_function_any(const struct cpumask *mask,
  smp_call_func_t func, void *info, int wait);
 
 void kick_all_cpus_sync(void);
+void kick_active_cpus_sync(void);
 void wake_up_all_idle_cpus(void);
 
 /*
@@ -161,6 +162,7 @@ smp_call_function_any(const struct cpumask *mask, 
smp_call_func_t func,
 }
 
 static inline void kick_all_cpus_sync(void) {  }
+static inline void kick_active_cpus_sync(void) {  }
 static inline void wake_up_all_idle_cpus(void) {  }
 
 #ifdef CONFIG_UP_LATE_INIT
diff --git a/kernel/smp.c b/kernel/smp.c
index 084c8b3a2681..0358d6673850 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -724,6 +724,30 @@ void kick_all_cpus_sync(void)
 }
 EXPORT_SYMBOL_GPL(kick_all_cpus_sync);
 
+/**
+ * kick_active_cpus_sync - Force CPUs that are not in extended
+ * quiescent state (idle or nohz_full userspace) sync by sending
+ * IPI. Extended quiescent state CPUs will sync at the exit of
+ * that state.
+ */
+void kick_active_cpus_sync(void)
+{
+   int cpu;
+   struct cpumask kernel_cpus;
+
+   smp_mb();
+
+   cpumask_clear(&kernel_cpus);
+   preempt_disable();
+   for_each_online_cpu(cpu) {
+   if (!rcu_eqs_special_set(cpu))
+   cpumask_set_cpu(cpu, &kernel_cpus);
+   }
+   smp_call_function_many(&kernel_cpus, do_nothing, NULL, 1);
+   preempt_enable();
+}
+EXPORT_SYMBOL_GPL(kick_active_cpus_sync);
+
 /**
  * wake_up_all_idle_cpus - break all cpus out of idle
  * wake_up_all_idle_cpus try to break all cpus which is in idle state even
diff --git a/mm/slab.c b/mm/slab.c
index 324446621b3e..678d5dbd6f46 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3856,7 +3856,7 @@ static int __do_tune_cpucache(struct kmem_cache *cachep, 
int limit,
 * cpus, so skip the IPIs.
 */
if (prev)
-   kick_all_cpus_sync();
+   kick_active_cpus_sync();
 
check_irq_on();
cachep->batchcount = batchcount;
-- 
2.14.1