> On Power7 processors running in SMT4 mode with 2, 3, or 4 idle threads,
> there is a performance benefit to idling the higher-numbered threads in
> the core.
> 
> This patch implements arch_scale_smt_power() to dynamically update SMT
> thread power in these idle cases, so that threads 0-1 are preferred
> over threads 2-3 within a core.
> 
> Signed-off-by: Joel Schopp <jsch...@austin.ibm.com>
> ---
> Index: linux-2.6.git/arch/powerpc/kernel/smp.c
> ===================================================================
> --- linux-2.6.git.orig/arch/powerpc/kernel/smp.c
> +++ linux-2.6.git/arch/powerpc/kernel/smp.c
> @@ -617,3 +617,44 @@ void __cpu_die(unsigned int cpu)
>               smp_ops->cpu_die(cpu);
>  }
>  #endif
> +
> +static inline int thread_in_smt4core(int x)
> +{
> +	return x % 4;
> +}
> +unsigned long arch_scale_smt_power(struct sched_domain *sd, int cpu)
> +{
> +	int cpu2;
> +	int idle_count = 0;
> +
> +	struct cpumask *cpu_map = sched_domain_span(sd);
> +
> +	unsigned long weight = cpumask_weight(cpu_map);
> +	unsigned long smt_gain = sd->smt_gain;
> +
> +	if (cpu_has_feature(CPU_FTRS_POWER7) && weight == 4) {

I think we should avoid using cpu_has_feature like this.  It's better to
create a new feature bit, add it to POWER7 in the cputable, and then
check for that feature here.

The way it is now, any CPU that has a superset of the POWER7 features
will match here.  That's not what we want.
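
Something along these lines, perhaps (untested sketch; the bit value and
the CPU_FTR_SMT4_THREAD_PREF name are placeholders, and a free CPU_FTR_*
bit would need to be picked in cputable.h):

	/* cputable.h: dedicated feature bit (placeholder name and value) */
	#define CPU_FTR_SMT4_THREAD_PREF	LONG_ASM_CONST(0x0000100000000000)

	/* ...OR it into the existing POWER7 feature mask: */
	#define CPU_FTRS_POWER7	(... | CPU_FTR_SMT4_THREAD_PREF)

	/* then arch_scale_smt_power() tests just that one bit: */
	if (cpu_has_feature(CPU_FTR_SMT4_THREAD_PREF) && weight == 4) {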

> +		for_each_cpu(cpu2, cpu_map) {
> +			if (idle_cpu(cpu2))
> +				idle_count++;
> +		}
> +
> +		/*
> +		 * The following section attempts to tweak cpu power based
> +		 * on the current idleness of the threads, dynamically at
> +		 * runtime.
> +		 */
> +		if (idle_count == 2 || idle_count == 3 || idle_count == 4) {
> +			if (thread_in_smt4core(cpu) == 0 ||
> +			    thread_in_smt4core(cpu) == 1) {
> +				/* add 75% to thread power (x1.75 = 1 + 1/2 + 1/4) */
> +				smt_gain += (smt_gain >> 1) + (smt_gain >> 2);
> +			} else {
> +				/* subtract 75% from thread power (x0.25 = 1/4) */
> +				smt_gain = smt_gain >> 2;
> +			}
> +		}
> +	}
> +	/* default smt gain is 1178, weight is # of SMT threads */
> +	smt_gain /= weight;

This results in a PPC divide instruction, when most of the time weight
will be a power of two.  You've optimised the divides into shifts a few
lines above this, but not this one.  Some consistency would be good.
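
Something like this would keep it shift-based in the common case
(untested sketch; is_power_of_2() and ilog2() are from <linux/log2.h>,
and this assumes weight is never zero here):

	/* weight is a small power of two (1, 2 or 4) in the cases we
	 * care about, so replace the divide with a shift when we can */
	if (is_power_of_2(weight))
		smt_gain >>= ilog2(weight);
	else
		smt_gain /= weight;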

Mikey

> +
> +	return smt_gain;
> +}
> Index: linux-2.6.git/kernel/sched_features.h
> ===================================================================
> --- linux-2.6.git.orig/kernel/sched_features.h
> +++ linux-2.6.git/kernel/sched_features.h
> @@ -107,7 +107,7 @@ SCHED_FEAT(CACHE_HOT_BUDDY, 1)
>  /*
>   * Use arch dependent cpu power functions
>   */
> -SCHED_FEAT(ARCH_POWER, 0)
> +SCHED_FEAT(ARCH_POWER, 1)
>  
>  SCHED_FEAT(HRTICK, 0)
>  SCHED_FEAT(DOUBLE_TICK, 0)