On Thu, Nov 29, 2018 at 09:42:56AM -0500, Konrad Rzeszutek Wilk wrote:
> On Sun, Nov 25, 2018 at 07:33:36PM +0100, Thomas Gleixner wrote:
> > Currently the 'sched_smt_present' static key is enabled when SMT topology
> > is observed at CPU bringup, but it is never disabled. However, there is
> > demand to also disable the key when the topology changes such that no SMT
> > is present anymore.
> > 
> > Implement this by making the key count the number of cores that have SMT
> > enabled.
> > 
> > In particular, the SMT topology bits are set before interrupts are enabled
> > and, similarly, are cleared after interrupts are disabled for the last time
> > and the CPU dies.
> 
> I see that the number you used is '2', but I thought some CPUs out there
> (Knights Landing?) can have four threads per core?
> 
> Would it be better to have a generic function that returns the number of
> threads the platform exposes, and use that instead of a constant value?

Never mind - this works even with 4 threads: the sibling count passes
through '2' before it reaches '4', so the key is incremented and
decremented exactly once per core (a quick sketch below illustrates this).

Sorry for the noise.
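
For the record, here is a minimal userspace sketch of the counting logic
(not kernel code; the plain int stands in for the static key's count, and
the hypothetical cpu_up()/cpu_down() helpers stand in for the activate and
deactivate paths). It shows why the '== 2' check fires exactly once per
core in each direction, even on a 4-thread core:

#include <stdio.h>

static int sched_smt_key;	/* stands in for the static key's count */

/* weight = number of online siblings, including the CPU coming up */
static void cpu_up(int weight)
{
	if (weight == 2)	/* second sibling of this core came up */
		sched_smt_key++;
}

/* weight = number of online siblings, including the CPU going down */
static void cpu_down(int weight)
{
	if (weight == 2)	/* this CPU going down leaves one sibling */
		sched_smt_key--;
}

int main(void)
{
	int w;

	/* bring up four siblings: the check sees weights 1, 2, 3, 4 */
	for (w = 1; w <= 4; w++)
		cpu_up(w);
	printf("after bringup:  key=%d\n", sched_smt_key);	/* prints 1 */

	/* tear them down again: the check sees weights 4, 3, 2, 1 */
	for (w = 4; w >= 1; w--)
		cpu_down(w);
	printf("after teardown: key=%d\n", sched_smt_key);	/* prints 0 */

	return 0;
}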

Reviewed-by: Konrad Rzeszutek Wilk <konrad.w...@oracle.com>

Thank you!
> 
> > 
> > Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
> > Signed-off-by: Thomas Gleixner <t...@linutronix.de>
> > 
> > ---
> >  kernel/sched/core.c |   19 +++++++++++--------
> >  1 file changed, 11 insertions(+), 8 deletions(-)
> > 
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -5738,15 +5738,10 @@ int sched_cpu_activate(unsigned int cpu)
> >  
> >  #ifdef CONFIG_SCHED_SMT
> >     /*
> > -    * The sched_smt_present static key needs to be evaluated on every
> > -    * hotplug event because at boot time SMT might be disabled when
> > -    * the number of booted CPUs is limited.
> > -    *
> > -    * If then later a sibling gets hotplugged, then the key would stay
> > -    * off and SMT scheduling would never be functional.
> > +    * When going up, increment the number of cores with SMT present.
> >      */
> > -   if (cpumask_weight(cpu_smt_mask(cpu)) > 1)
> > -           static_branch_enable_cpuslocked(&sched_smt_present);
> > +   if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
> > +           static_branch_inc_cpuslocked(&sched_smt_present);
> >  #endif
> >     set_cpu_active(cpu, true);
> >  
> > @@ -5790,6 +5785,14 @@ int sched_cpu_deactivate(unsigned int cpu)
> >      */
> >     synchronize_rcu_mult(call_rcu, call_rcu_sched);
> >  
> > +#ifdef CONFIG_SCHED_SMT
> > +   /*
> > +    * When going down, decrement the number of cores with SMT present.
> > +    */
> > +   if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
> > +           static_branch_dec_cpuslocked(&sched_smt_present);
> > +#endif
> > +
> >     if (!sched_smp_initialized)
> >             return 0;
> >  
> > 
> > 
